3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Our Mission

At Palo Alto Networks®, everything starts and ends with our mission: being the cybersecurity partner of choice, protecting our digital way of life. Our vision is a world where each day is safer and more secure than the one before. We are a company built on the foundation of challenging and disrupting the way things are done, and we’re looking for innovators who are as committed to shaping the future of cybersecurity as we are.

Who We Are

We take our mission of protecting the digital way of life seriously. We are relentless in protecting our customers, and we believe that the unique ideas of every member of our team contribute to our collective success. Our values were crowdsourced by employees and are brought to life through each of us every day - from disruptive innovation and collaboration, to execution. From showing up for each other with integrity to creating an environment where we all feel included.

As a member of our team, you will be shaping the future of cybersecurity. We work fast, value ongoing learning, and we respect each employee as a unique individual. Knowing we all have different needs, our development and personal wellbeing programs are designed to give you choice in how you are supported. This includes our FLEXBenefits wellbeing spending account with over 1,000 eligible items selected by employees, our mental and financial health resources, and our personalized learning opportunities - just to name a few!

At Palo Alto Networks, we believe in the power of collaboration and value in-person interactions. This is why our employees generally work full time from our office, with flexibility offered where needed. This setup fosters casual conversations, problem-solving, and trusted relationships. Our goal is to create an environment where we all win with precision.
Job Description

Your Career

Our Data & Analytics group is responsible for working with various business owners/stakeholders from Sales, Marketing, People, GCS, Infosec, Operations, and Finance to solve complex business problems that directly impact the metrics defined to showcase the progress of Palo Alto Networks. We leverage the latest technologies from the Cloud & Big Data ecosystem to improve business outcomes and create value through prototyping, proof-of-concept projects, and application development.

We are looking for a Data Platform Engineer with extensive experience in data engineering, cloud infrastructure, and a strong background in DevOps, SRE, or system engineering. The ideal candidate will be responsible for designing, implementing, and maintaining the scalable, reliable platforms and data transformations that support our business objectives. This role requires a deep understanding of both data engineering principles and platform automation, as well as the ability to collaborate with cross-functional teams to deliver high-quality data solutions.

Your Impact

Design, develop, and maintain data pipelines to extract, transform, and load (ETL) data from various sources into our data warehouse or data lake environment.
Automate, manage, and scale the underlying infrastructure for our data platforms (e.g., Airflow, Spark clusters), applying SRE and DevOps best practices for performance, reliability, and observability.
Collaborate with stakeholders to gather requirements and translate business needs into robust technical and platform solutions.
Optimize and tune existing data pipelines and infrastructure for performance, cost, and scalability.
Implement and enforce data quality and governance processes to ensure data accuracy, consistency, and compliance with regulatory standards.
Work closely with the BI team to design and develop dashboards, reports, and analytical tools that provide actionable insights to stakeholders.
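The extract-transform-load responsibilities described above can be sketched as a minimal pipeline in plain Python. All field names and the data-quality rule are hypothetical; in practice a pipeline like this would run inside an orchestrator such as Airflow and load into BigQuery rather than a list:

```python
# Minimal ETL sketch: pull raw records, normalize them, load into a target store.
# Field names and validation rules are illustrative only.

def extract(source_rows):
    """Extract: read raw records from a source system."""
    return list(source_rows)

def transform(rows):
    """Transform: drop incomplete records and normalize casing."""
    cleaned = []
    for row in rows:
        if row.get("account_id") is None:
            continue  # a simple data-quality rule: reject rows missing a key
        cleaned.append({"account_id": row["account_id"],
                        "region": str(row.get("region", "unknown")).lower()})
    return cleaned

def load(rows, target):
    """Load: append validated rows into the target store (a list stands in here)."""
    target.extend(rows)
    return len(rows)

warehouse = []
raw = [{"account_id": 1, "region": "EMEA"}, {"account_id": None}]
loaded = load(transform(extract(raw)), warehouse)
print(loaded)  # 1 row passed validation
```

The separation into three small functions mirrors how data-quality enforcement (mentioned in the responsibilities) attaches naturally to the transform step.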
Mentor junior members of the team and provide guidance on best practices for data engineering, platform development, and DevOps.
(Nice-to-have) Aptitude for proactively identifying and implementing GenAI-driven solutions to achieve measurable improvements in the reliability and performance of data pipelines, or to optimize key processes like data quality validation and root cause analysis for data issues.

Qualifications

Your Experience

Bachelor's degree in Computer Science, Engineering, or a related field.
3+ years of experience in data engineering, platform engineering, or a similar role, with a strong focus on building and maintaining data pipelines and the underlying infrastructure.
Must have proven experience in a DevOps, SRE, or System Engineering role, with hands-on expertise in infrastructure as code (e.g., Terraform, Ansible), CI/CD pipelines, and monitoring/observability tools.
Expertise in SQL programming and database management systems (e.g., BigQuery).
Hands-on experience with ETL tools and technologies (e.g., Apache Spark, Apache Airflow).
Experience with cloud platforms such as Google Cloud Platform (GCP) and with relevant services (e.g., GCP Dataflow, GCP Dataproc, BigQuery, Cloud Composer, GKE).
Experience with big data tools such as Spark, Kafka, etc.
Experience with object-oriented/functional scripting languages: Python, Scala, etc.
(Nice-to-have) Demonstrated readiness to leverage GenAI tools to enhance efficiency within the typical stages of the data engineering lifecycle, for example by generating complex SQL queries, creating initial Python/Spark script structures, or auto-generating pipeline documentation.
(Plus) Experience with BI tools and visualization platforms (e.g., Tableau).
(Plus) Experience with SAP HANA, SAP BW, SAP ECC, or other SAP modules.
Strong analytical and problem-solving skills, with the ability to analyze complex data sets and derive actionable insights.
Excellent communication and interpersonal skills, with the ability to collaborate effectively with cross-functional teams.

Additional Information

The Team

Working at a high-tech cybersecurity company within Information Technology is a once-in-a-lifetime opportunity. You’ll join the brightest minds in technology, creating, building, and supporting tools and enabling our global teams on the front line of defense against cyberattacks. We’re connected by one mission but driven by the impact of that mission and what it means to protect our way of life in the digital age. Join a dynamic and fast-paced team of people who feel excited by the prospect of a challenge and feel a thrill at resolving technical gaps that inhibit productivity.

Our Commitment

We’re problem solvers who take risks and challenge cybersecurity’s status quo. It’s simple: we can’t accomplish our mission without diverse teams innovating, together. We are committed to providing reasonable accommodations for all qualified individuals with a disability. If you require assistance or accommodation due to a disability or special need, please contact us at accommodations@paloaltonetworks.com.

Palo Alto Networks is an equal opportunity employer. We celebrate diversity in our workplace, and all qualified applicants will receive consideration for employment without regard to age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or other legally protected characteristics. All your information will be kept confidential according to EEO guidelines.

Is this role eligible for Immigration Sponsorship? No. Please note that we will not sponsor applicants for work visas for this position.
Posted 1 week ago
10.0 - 12.0 years
0 Lacs
Karnataka, India
On-site
What You Will Work On

You’ll be working on building products and solutions for managing and governing Nike’s multi-cloud ecosystem, and you will be one of the primary technical leaders on one of Nike’s emerging technology platforms. In a typical week, half of your time will be spent leading by example to design, develop, operate, and integrate Nike and partner platforms and applications. The other half will be leadership responsibilities to drive organizational technical priorities, define engineering best practices, collaborate with peers, and help translate business problems into technical solutions.

Who Are We Looking For

We are looking for an Engineering Manager as part of our Cloud Infrastructure Engineering team at Nike. The ideal candidate will be a servant leader, bringing deep technical expertise in cloud engineering to solve complex engineering problems and enable Nike’s pursuit of delivering state-of-the-art enterprise cloud infrastructure to run Nike’s business and support our athletes.

You'll have:

Bachelor's degree in Computer Science, Engineering, Information Systems, or a similar field, or relevant professional experience, education, and training.
Minimum of 10 to 12 years of experience in software engineering, with a focus on cloud infrastructure engineering in a multi-cloud ecosystem, including architecture, deployment, and management of enterprise cloud solutions.
Extensive hands-on experience with major cloud platforms such as AWS, Azure, or Google Cloud (AWS is preferred).
Hands-on experience in cloud infrastructure automation using tools like Kubernetes, Terraform, Ansible, or CloudFormation.
Experience in designing, implementing, and managing control plane architectures for cloud environments, ensuring efficient orchestration, security, and governance of cloud resources.
Strong hands-on programming skills in languages such as Python, GoLang, Java, or JavaScript.
Hands-on experience implementing and supporting modern software architectural principles and patterns (REST, domain-driven design, DevOps, microservices, etc.).
Experience with front-end web application technologies is preferred (JavaScript, CSS, HTML5, React, other UI frameworks, etc.).
Experience with implementing and integrating AI, machine learning, GenAI, and related data solutions is preferred.
Experience working in a technical leadership role with agile teams in a product model.
Demonstrated ability to build and maintain relationships with multiple peers and cross-functional partners.
Strong leadership skills with the ability to mentor and guide a team of cloud engineers, and excellent collaboration skills to work effectively with cross-functional teams and stakeholders.

Who You Will Work With

You will be a part of the larger Global Technology organization working on Nike’s Cloud Infrastructure Engineering team and report to the team’s Software Engineering Director. You will spend much of your time with Software Engineers in Cloud Engineering and adjacent teams. You will also partner closely with other Principal Software Engineers, Solution Architects, Engineering Directors, and Product Managers in both Product Innovation and other departments.
Posted 1 week ago
8.0 years
0 Lacs
Goregaon, Maharashtra, India
On-site
Job Title: DevOps Team Lead
Location: Ahmedabad - office only
Shift Time: 9-hour shift between 8 AM and 11 PM IST; alternate Saturdays off.

Key Responsibilities:
• Manage, mentor, and grow a team of DevOps engineers.
• Oversee the deployment and maintenance of applications such as:
  • Odoo (Python/PostgreSQL)
  • Magento (PHP/MySQL), Node.js (JavaScript/TypeScript), and LAMP/LEMP stack apps
• Design and manage CI/CD pipelines for each application using tools like Jenkins, GitHub Actions, or GitLab CI.
• Handle environment-specific configurations (staging, production, QA).
• Containerize legacy and modern applications using Docker and deploy via Kubernetes (EKS/AKS/GKE) or Docker Swarm.
• Implement and maintain Infrastructure as Code using Terraform, Ansible, or CloudFormation.
• Monitor application health and infrastructure using Prometheus, Grafana, ELK, Datadog, or equivalent tools.
• Ensure systems are secure, resilient, and compliant with industry standards.
• Optimize cloud cost and infrastructure performance.
• Collaborate with development, QA, and IT support teams for seamless delivery.
• Troubleshoot performance, deployment, or scaling issues across tech stacks.

Must-Have Skills:
• 8+ years in DevOps/Cloud/System Engineering roles with real hands-on experience.
• 2+ years managing or leading DevOps teams.
• Experience supporting and deploying: Odoo on Ubuntu/Linux with PostgreSQL; Magento with Apache/Nginx, PHP-FPM, and MySQL/MariaDB; Node.js with PM2/Nginx or containerized setups.
• Experience with AWS / Azure / GCP infrastructure in production.
• Strong scripting skills: Bash, Python, PHP CLI, or Node CLI.
• Deep understanding of Linux system administration and networking fundamentals.
• Experience with Git, SSH, reverse proxies (Nginx), and load balancers.
• Good communication skills and experience managing clients.
Preferred Certifications (Highly Valued):
• AWS Certified DevOps Engineer - Professional
• Azure DevOps Engineer Expert
• Google Cloud Professional DevOps Engineer
• Bonus: Magento Cloud DevOps or Odoo deployment experience

Bonus Skills (Nice to Have):
• Experience with multi-region failover, HA clusters, or RPO/RTO-based design.
• Familiarity with MySQL/PostgreSQL optimization and Redis, RabbitMQ, or Celery.
• Previous experience with GitOps, ArgoCD, Helm, or Ansible Tower.
• Knowledge of VAPT 2.0, WCAG compliance, and infrastructure security best practices.
Posted 1 week ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: DevOps Intern / Entry-Level DevOps Engineer
📍 Location: Hyderabad (On-site)

About Us:
We are a growing startup at the forefront of IoT innovation, helping B2B clients transform physical operations through connected digital systems. From smart devices to data-driven dashboards, our solutions are designed to deliver measurable impact. Based in Hyderabad, we’re looking for curious and motivated individuals who want to shape the future of tech.

Role Overview:
We’re hiring DevOps Interns / Entry-Level DevOps Engineers to assist in maintaining and enhancing our infrastructure, CI/CD pipelines, and deployment processes. This is an ideal opportunity for recent graduates or early-career professionals eager to gain hands-on experience in DevOps while working alongside a passionate tech team.

Key Responsibilities:
Assist in managing and automating CI/CD pipelines using tools like GitLab CI, Jenkins, or GitHub Actions.
Support containerization and orchestration efforts using Docker and Kubernetes.
Help monitor system performance and logs using tools like Prometheus, Grafana, or the ELK stack.
Work with developers to streamline build and deployment workflows.
Manage cloud infrastructure (AWS, GCP, or Azure) under guidance.
Learn and apply infrastructure-as-code practices using Terraform, Ansible, or Helm.
Participate in regular team stand-ups and sprint planning sessions.

Required Qualifications:
Bachelor’s degree in Computer Science, Engineering, or a related field (or pursuing final year).
Basic understanding of Linux, networking, and system administration.
Familiarity with version control systems like Git.
Exposure to any scripting language (e.g., Bash, Shell).
Eagerness to learn about CI/CD, containerization, and cloud services.
Problem-solving mindset with strong attention to detail.
Clear communication and collaboration skills.

What You’ll Get:
Practical experience with modern DevOps practices and tools.
Mentorship from experienced DevOps and engineering professionals. Exposure to cloud platforms, automation, and monitoring systems. Opportunity for full-time employment based on performance. A dynamic, startup work environment that encourages experimentation and learning. How to Apply: Interested candidates should send their resume and a short introductory note to: 📧 career@oneiot.io Join us in building the future—one automated deployment at a time.
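The log-monitoring responsibility mentioned in the role above (normally handled by Prometheus/Grafana or the ELK stack) can be approximated at toy scale with a stdlib-only Python sketch. The log format here is an assumption for illustration, not any particular system's output:

```python
import re
from collections import Counter

# Hypothetical "LEVEL message" log format; real systems vary.
LOG_LINE = re.compile(r"^(?P<level>INFO|WARN|ERROR)\s+(?P<msg>.*)$")

def summarize(log_lines):
    """Count log lines per severity level, ignoring lines that don't parse."""
    counts = Counter()
    for line in log_lines:
        match = LOG_LINE.match(line)
        if match:
            counts[match.group("level")] += 1
    return counts

sample = [
    "INFO service started",
    "ERROR db connection refused",
    "ERROR db connection refused",
    "garbage line",
]
summary = summarize(sample)
print(summary["ERROR"])  # 2
```

A real monitoring stack does the same aggregation continuously and at scale; understanding this loop is a useful first step before configuring the tooling.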
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Join us as a Senior Java Developer at Barclays, responsible for supporting the successful delivery of Location Strategy projects to plan, budget, and agreed quality and governance standards. You'll spearhead the evolution of our digital landscape, driving innovation and excellence. You will harness cutting-edge technology to revolutionise our digital offerings, ensuring unparalleled customer experiences.

To be successful as a Senior Java Developer you should have experience with:
Back-end development with Java/Spring Boot.
Relational databases like SQL Server and Oracle, and NoSQL databases like MongoDB.
Observability tools such as logging and metrics for debugging (Elastic/Kibana).
Application architecture and REST/API design.
Agile software development practices.
CI/CD approaches and technologies.

Some Other Highly Valued Skills May Include:
Continuous integration and DevOps using GitLab.
Hands-on experience with Docker/K8s/OpenShift.
Infrastructure as Code (Ansible, Terraform).
Foundational working knowledge of Site Reliability Engineering (automation, observability, incident management, resilience, disaster recovery, high availability, documentation).
Familiarity with AWS or Azure.
Observability platforms: Prometheus/Grafana, Cisco AppDynamics, Datadog, Dynatrace, Splunk, and others.
Understanding of networking concepts, protocols, and troubleshooting techniques.

You may be assessed on the key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking, and digital and technology, as well as job-specific technical skills. This role is based in Pune.

Purpose of the role

To design, develop and improve software, utilising various engineering methodologies, that provides business, platform, and technology capabilities for our customers and colleagues.
Accountabilities

Development and delivery of high-quality software solutions by using industry-aligned programming languages, frameworks, and tools, ensuring that code is scalable, maintainable, and optimized for performance.
Cross-functional collaboration with product managers, designers, and other engineers to define software requirements, devise solution strategies, and ensure seamless integration and alignment with business objectives.
Collaboration with peers, participation in code reviews, and promotion of a culture of code quality and knowledge sharing.
Staying informed of industry technology trends and innovations, and actively contributing to the organization’s technology communities to foster a culture of technical excellence and growth.
Adherence to secure coding practices to mitigate vulnerabilities, protect sensitive data, and ensure secure software solutions.
Implementation of effective unit testing practices to ensure proper code design, readability, and reliability.

Assistant Vice President Expectations

To advise on and influence decision making, contribute to policy development, and take responsibility for operational effectiveness. Collaborate closely with other functions/business divisions. Lead a team performing complex tasks, using well-developed professional knowledge and skills to deliver work that impacts the whole business function. Set objectives and coach employees in pursuit of those objectives, appraise performance relative to objectives, and determine reward outcomes. If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others.
OR, for an individual contributor, they will lead collaborative assignments and guide team members through structured assignments, identifying the need to include other areas of specialisation to complete assignments. They will identify new directions for assignments and/or projects, identifying a combination of cross-functional methodologies or practices to meet required outcomes. Consult on complex issues, providing advice to People Leaders to support the resolution of escalated issues. Identify ways to mitigate risk and develop new policies/procedures in support of the control and governance agenda. Take ownership for managing risk and strengthening controls in relation to the work done. Perform work that is closely related to that of other areas, which requires understanding of how areas coordinate and contribute to the achievement of the objectives of the organisation sub-function. Collaborate with other areas of work, for business-aligned support areas, to keep up to speed with business activity and the business strategy. Engage in complex analysis of data from multiple internal and external sources of information, such as procedures and practices (in other areas, teams, companies, etc.), to solve problems creatively and effectively. Communicate complex information. 'Complex' information could include sensitive information or information that is difficult to communicate because of its content or its audience. Influence or convince stakeholders to achieve outcomes.

All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.
Posted 1 week ago
7.0 - 12.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About the Company
Greetings from Teamware Solutions, a division of Quantum Leap Consulting Pvt. Ltd.

About the Role
We are hiring for AWS with Java roles (custom applications).
Experience: 7-12 years
Notice Period: Immediate to 15 days
Open positions: 10
  1 Senior-level position (7-12 years of experience)
  4 Senior Associate positions (7-10 years of experience)
Work Mode: Hybrid
Work Timings: 11:00 AM – 8:00 PM IST (flexibility required)
Location: Hybrid - Bangalore and Hyderabad

1. AWS with Java (Senior-level position):
Lead end-to-end cloud migration initiatives, including discovery, assessment, planning, and execution of application and infrastructure migrations from on-prem or other platforms to AWS.
Architect and deliver scalable, secure, and high-performing Java-based solutions using modern cloud-native designs (Spring Boot, REST APIs, microservices) on AWS.
Manage and mentor a team of engineers, driving best practices in software development, cloud infrastructure, and DevOps automation.
Conduct application discovery and cloud readiness assessments, identifying modernization opportunities and developing actionable migration roadmaps.
Implement and enforce best practices in CI/CD, Infrastructure as Code (IaC), containerization (Docker, Kubernetes), cloud security, and cost optimization on AWS.
Collaborate with cross-functional stakeholders (product, infra, security, and business teams) to ensure successful delivery of cloud transformation programs within budget and timeline.

Must-have skills:
Strong expertise in Java (Spring Boot, RESTful APIs, multithreading, microservices architecture).
Hands-on experience with AWS, including services like Lambda, ECS, S3, IAM, RDS, CloudWatch, and CloudFormation/Terraform.
Proven leadership and team management experience in a technical environment.
Knowledge of DevOps tools (Git, Jenkins, Docker, Kubernetes, Ansible) and agile methodologies.
Strong problem-solving skills and experience designing enterprise-grade solutions.

2. AWS with Java (Senior Associate):
Contribute to cloud migration projects by performing application discovery, technical assessments, and implementation tasks for migrating workloads to AWS.
Design and develop Java-based cloud-native applications, APIs, and microservices using Spring Boot and AWS-native services.
Support cloud infrastructure build-outs using Infrastructure as Code (IaC) tools like CloudFormation or Terraform, and assist in environment provisioning and configuration.
Participate in modernization and migration planning, collaborating with architects, leads, and business stakeholders to align technical deliverables with migration strategies.
Implement CI/CD pipelines, containerization, and automation to support secure, repeatable, and scalable deployment processes.
Troubleshoot and optimize cloud applications, focusing on performance, cost-efficiency, and resilience using AWS-native monitoring and logging tools.

Must-have skills:
Strong hands-on expertise in Java (Spring Boot, multithreading, design patterns, API development).
Extensive experience with AWS services, including EC2, S3, Lambda, RDS, DynamoDB, IAM, CloudFormation/Terraform, and migration tools (e.g., AWS Migration Hub, Application Discovery Service, DMS).
Cloud migration expertise: discovery, assessment, re-platforming, and re-architecting legacy applications into cloud-native solutions.
Proficiency in DevOps toolchains: Git, Jenkins, Docker, Kubernetes, Ansible, and monitoring tools.
Leadership and project delivery experience in agile environments, including managing cross-functional teams and communicating with executive stakeholders.

✅ AWS with Java – Senior-Level Position: Expert in Java (Spring Boot, microservices), AWS (Lambda, ECS, S3, RDS), DevOps (Jenkins, Docker, Kubernetes, Terraform), cloud migration strategy, and team leadership.
✅ AWS with Java – Senior Associate: Strong in Java (Spring Boot, APIs), AWS (EC2, Lambda, RDS, DynamoDB), DevOps (Git, Jenkins, Docker, Kubernetes), cloud migration execution, and Infrastructure as Code.

If you are interested in this position, please send your resume to netra.s@twsol.com.
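The Lambda services named in the stacks above follow a simple handler contract: a function receives an event and returns a response. This sketch uses AWS's API Gateway proxy-event shape with the Python runtime (the role itself is Java-focused; the greeting logic and field beyond the standard proxy shape are invented for illustration), and it can be exercised locally with no AWS dependencies:

```python
import json

def lambda_handler(event, context):
    """Minimal AWS Lambda handler for an API Gateway proxy event.
    The 'name' field in the request body is a hypothetical example."""
    try:
        body = json.loads(event.get("body") or "{}")
    except json.JSONDecodeError:
        return {"statusCode": 400, "body": json.dumps({"error": "invalid JSON"})}
    name = body.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Local invocation with a fake proxy event:
response = lambda_handler({"body": json.dumps({"name": "cloud"})}, None)
print(response["statusCode"])  # 200
```

Because the handler is a plain function of (event, context), unit-testing it locally like this is the usual workflow before deploying behind API Gateway.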
Posted 1 week ago
14.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
HCLTech is hiring SD-WAN Network Architects for Chennai, Bengaluru, Noida, Pune & Hyderabad locations (notice period of 0-30 days).

Required skill: any combination of SD-WAN - Meraki, Viptela, Silverpeak, Fortinet
Years of experience: 14-24 years

Job Description:
Candidates should have a foundation of deep technical hands-on expertise in network technologies. They should possess high-level knowledge across compute, storage, and security disciplines, supplemented with automation experience, to have a "big picture" vision. Minimum of 14+ years of operations and transformation experience. Although this is a transformational role, the candidate should have the operational knowledge to support escalation of network incidents during critical network emergencies.

Required Skills:
· Must have CCIE Routing and Switching, CCIE Enterprise, or Cisco DevNet certification; VMware NSX, Versa, or Fortinet certification is preferred.
· Expertise in one or more OEM product lines in enterprise areas:
  o SD-WAN – Cisco Viptela, Meraki, FortiGate, Silverpeak, VeloCloud, Versa, Azure vWAN, AWS Cloud WAN
  o SSE: Zscaler, Palo Alto, Cisco Umbrella, Fortinet
  o Cloud interconnects – Megaport, Equinix, or Aviatrix
  o Software-defined access/traditional LAN – Cisco, FortiGate, HPE
· Programming competence, with technical skills in some variant of each of the following:
  o Automation frameworks (Ansible, Postman, Terraform)
  o Coding: Python, JSON, YAML
· Recent experience leading a project delivering some aspect of segmentation in a large-scale environment spanning wide area networks, SASE, and cloud connectivity.
· A minimum of 14 years' experience designing, developing, configuring, and implementing large-scale global enterprise networks with diverse solutions from multiple vendors.
· Demonstrated experience leading a project delivering some aspect of network services automation.
· Experience preparing bills of material, migration approaches, ramp-up plans, and effort estimations (transformation and transition) based on inputs received as part of RFPs.
· Excellent communication skills with a proven track record of presenting at the most senior levels within the enterprise and the network industry.
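The automation skill set listed above (Python with JSON/YAML data) often reduces in practice to rendering device configuration from structured inventory. A stdlib-only sketch, where the inventory record and config template are hypothetical rather than any vendor's actual syntax:

```python
import json

# Hypothetical per-site config template; real SD-WAN templates are vendor-specific.
TEMPLATE = (
    "hostname {hostname}\n"
    "interface {uplink}\n"
    " description SDWAN uplink to {hub}"
)

def render_config(inventory_json):
    """Render a per-site config snippet from a JSON inventory record."""
    site = json.loads(inventory_json)
    return TEMPLATE.format(**site)

record = json.dumps({
    "hostname": "branch-01",
    "uplink": "GigabitEthernet0/0",
    "hub": "dc-east",
})
config = render_config(record)
print(config.splitlines()[0])  # hostname branch-01
```

Tools like Ansible generalize exactly this pattern: structured data in, rendered device configuration out, applied idempotently across a fleet.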
Posted 1 week ago
0 years
0 Lacs
India
Remote
Agent-Based Segmentation Expertise: Experience with agent-based segmentation solutions, especially Cisco Secure Workload (CSW). Alternative Tool Experience: If CSW experience is rare, strong background in similar tools like Illumio or Akamai Guardicore is acceptable. Architect/SME Level: Ability to act as an architect and subject matter expert, not just a hands-on engineer. Hands-On Implementation: Practical, hands-on experience with micro-segmentation projects, ideally having led or significantly contributed to such deployments. Stakeholder Communication: Strong skills in communicating technical concepts to internal teams and stakeholders, including managing concerns and leading them through the segmentation journey. Pragmatic Approach: Ability to deliver practical, risk-reducing segmentation rather than aiming for exhaustive segmentation, with a focus on what is achievable and valuable. Documentation: Capable of producing high-quality, auditable documentation for regulatory and external review. Standardization and Simplification: Preference for candidates who can deliver repeatable, standardized solutions rather than complex, one-off configurations. Deployment Scale: Experience with deployments of varying sizes (hundreds to thousands of workloads) is valued. Programming / Scripting / Network Automation – Further to an SME skillset, it’s expected that you will bring some level of programming, scripting or automation experience. Examples of toolset experience expected here includes Python, CI/CD Pipelines, Terraform, Ansible, PowerShell, etc.
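The pragmatic, label-driven segmentation approach described above can be sketched in Python: coarse tier-to-tier flow intents are expanded into concrete workload-level allow rules. The label scheme and rule format below are assumptions for illustration, not CSW, Illumio, or Guardicore syntax:

```python
def generate_allow_rules(workloads, allowed_flows):
    """Expand coarse tier-level flow intents into workload-level allow rules.

    workloads: list of {"name": ..., "tier": ...} records (hypothetical schema)
    allowed_flows: list of (src_tier, dst_tier, port) intents
    """
    # Group workloads by tier label.
    by_tier = {}
    for w in workloads:
        by_tier.setdefault(w["tier"], []).append(w["name"])

    # Cross-product each intent over the matching source and destination tiers.
    rules = []
    for src_tier, dst_tier, port in allowed_flows:
        for src in by_tier.get(src_tier, []):
            for dst in by_tier.get(dst_tier, []):
                rules.append((src, dst, port))
    return rules

workloads = [
    {"name": "web-1", "tier": "web"},
    {"name": "db-1", "tier": "db"},
]
rules = generate_allow_rules(workloads, [("web", "db", 5432)])
print(rules)  # [('web-1', 'db-1', 5432)]
```

Keeping the intent list small and tier-based, rather than enumerating per-workload rules by hand, is what makes the resulting policy repeatable, auditable, and standardized across deployments of different sizes.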
Posted 1 week ago
3.0 - 6.0 years
0 Lacs
Navi Mumbai, Maharashtra
On-site
Performance Assurance - Navi Mumbai
Posted On: 06 Aug 2025
End Date: 05 Oct 2025
Required Experience: 3 - 6 years
No. of Openings: 1
Designation: Senior Test Engineer
Closing Date: 05 Oct 2025
Main BU: Reliability Engineering
Sub BU: Performance Assurance
Country: India
Region: India 1
State: Maharashtra
City: Navi Mumbai
Working Location: Mahape
Client Location: NA
Skill: AppDynamics
Highest Education: No data available
Certification: No data available
Working Language: No data available

Job Description:
Firsthand experience implementing or deploying the AppDynamics solution into applications in a production environment.
• Hands-on experience in AppDynamics (Java and .NET agents, EUM, BIQ, Server & Network), business transaction configuration, dashboard configuration, incident/alert configuration, task scheduling, and plugin configuration.
• Strong understanding of application platforms, including network, database, runtime, application, and user interface.
• Excellent communication, collaboration, and conflict resolution skills with the ability to adapt to various business needs.
• Knowledge of Ansible will be an advantage.
Posted 1 week ago
5.0 - 10.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Category: Software Development/Engineering
Main location: India, Karnataka, Bangalore
Position ID: J0125-0381
Employment Type: Full Time

Position Description:

Company Profile:
At CGI, we’re a team of builders. We call our employees members because all who join CGI are building their own company - one that has grown to 72,000 professionals located in 40 countries. Founded in 1976, CGI is a leading IT and business process services firm committed to helping clients succeed. We have the global resources, expertise, stability and dedicated professionals needed to achieve results for our clients - and for our members. Come grow with us. Learn more at www.cgi.com.

This is a great opportunity to join a winning team. CGI offers a competitive compensation package with opportunities for growth and professional development. Benefits for full-time, permanent members start on the first day of employment and include a paid time-off program and profit participation and stock purchase plans.

We wish to thank all applicants for their interest and effort in applying for this position, however, only candidates selected for interviews will be contacted. No unsolicited agency referrals please.
Job Title: Lead Analyst - Python Developer with SQL
Experience: 5 to 10 Years
Category: Software Development/Engineering
Job Location: Bangalore
Shift Timings: 9:00 AM to 6:00 PM
Mode of Work: Hybrid (3 days WFO)
Position ID: J0125-0381

Your future duties and responsibilities:
Must-have skills/tool knowledge:
- SQL
- Python
- REST APIs
- Ansible playbooks
- Linux
- XML, JSON
- CI/CD (Jenkins / Azure DevOps)
- Git / TFS / Azure DevOps (code repository)
- Ticketing systems (Jira, ServiceNow, etc.)
Good to have:
- IIS
- PowerShell
- Windows server maintenance

Required qualifications to be successful in this role:
Education: Bachelor's degree in a relevant discipline, or equivalent skills and acquired experience.
Skills: Java, JAXB, Python

What you can expect from us: Together, as owners, let's turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you'll reach your full potential because you are invited to be an owner from day 1 as we work together to bring our Dream to life. That's why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company's strategy and direction. Your work creates value. You'll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You'll shape your career by joining a company built to grow and last. You'll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team, one of the largest IT and business consulting services firms in the world.
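The must-have stack above (Python, SQL, REST APIs, JSON) typically meets in glue code that loads an API payload into a database. A minimal standard-library sketch; the table and field names are hypothetical, and an in-memory SQLite database stands in for the real server:

```python
import json
import sqlite3

# Hypothetical payload, as it might arrive in a REST API response body.
payload = json.loads('[{"id": 1, "status": "open"}, {"id": 2, "status": "closed"}]')

conn = sqlite3.connect(":memory:")  # stand-in for the real SQL backend
conn.execute("CREATE TABLE tickets (id INTEGER PRIMARY KEY, status TEXT)")

# Parameterized inserts handle quoting correctly and avoid SQL injection.
conn.executemany(
    "INSERT INTO tickets (id, status) VALUES (?, ?)",
    [(row["id"], row["status"]) for row in payload],
)

open_count = conn.execute(
    "SELECT COUNT(*) FROM tickets WHERE status = ?", ("open",)
).fetchone()[0]
print(open_count)
```

The same shape (parse, parameterize, insert, query) carries over unchanged to production drivers such as pyodbc or psycopg2.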
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are looking for a highly skilled and proactive Senior DevOps Specialist to join our Infrastructure Management Team. In this role, you will lead initiatives to streamline and automate infrastructure provisioning, CI/CD, observability, and compliance processes using GitLab, containerized environments, and modern DevSecOps tooling. You will work closely with application, data, and ML engineering teams to support MLOps workflows (e.g., model versioning, reproducibility, pipeline orchestration) and implement AIOps practices for intelligent monitoring, anomaly detection, and automated root cause analysis. Your goal will be to deliver secure, scalable, and observable infrastructure across environments.

Key Responsibilities
- Architect and maintain GitLab CI/CD pipelines to support deployment automation, environment provisioning, and rollback readiness.
- Implement standardized, reusable CI/CD templates for application, ML, and data services.
- Collaborate with system engineers to ensure secure, consistent infrastructure-as-code deployments using Terraform, Ansible, and Docker.
- Integrate security tools such as Vault, Trivy, tfsec, and InSpec into CI/CD pipelines.
- Govern infrastructure compliance by enforcing policies around secret management, image scanning, and drift detection.
- Lead internal infrastructure and security audits and maintain compliance records where required.
- Define and implement observability standards using OpenTelemetry, Grafana, and Graylog.
- Collaborate with developers to integrate structured logging, tracing, and health checks into services.
- Enable root cause detection workflows and performance monitoring for infrastructure and deployments.
- Work closely with application, data, and ML teams to support provisioning, deployment, and infra readiness.
- Ensure reproducibility and auditability in data/ML pipelines via tools like DVC and MLflow.
- Participate in release planning, deployment checks, and incident analysis from an infrastructure perspective.
- Mentor junior DevOps engineers and foster a culture of automation, accountability, and continuous improvement.
- Lead daily standups, retrospectives, and backlog grooming sessions for infrastructure-related deliverables.
- Drive internal documentation, runbooks, and reusable DevOps assets.

Must Have
- Strong experience with GitLab CI/CD, Docker, and SonarQube for pipeline automation and code quality enforcement.
- Proficiency in scripting languages such as Bash, Python, or shell for automation and orchestration tasks.
- Solid understanding of Linux and Windows systems, including command-line tools, process management, and system troubleshooting.
- Familiarity with SQL for validating database changes, debugging issues, and running schema checks.
- Experience managing Docker-based environments, including container orchestration using Docker Compose, container lifecycle management, and secure image handling.
- Hands-on experience supporting MLOps pipelines, including model versioning, experiment tracking (e.g., DVC, MLflow), orchestration (e.g., Airflow), and reproducible deployments for ML workloads.
- Hands-on knowledge of test frameworks such as PyTest, Robot Framework, REST-assured, and Selenium.
- Experience with infrastructure testing tools like tfsec, InSpec, or custom Terraform test setups.
- Strong exposure to API testing, load/performance testing, and reliability validation.
- Familiarity with AIOps concepts, including structured logging, anomaly detection, and root cause analysis using observability platforms (e.g., OpenTelemetry, Prometheus, Graylog).
- Exposure to monitoring/logging tools like Grafana, Graylog, and OpenTelemetry.
- Experience managing containerized environments for testing and deployment, aligned with security-first DevOps practices.
- Ability to define CI/CD governance policies, pipeline quality checks, and operational readiness gates.
- Excellent communication skills and proven ability to lead DevOps initiatives and interface with cross-functional stakeholders.
(ref:hirist.tech)
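Observability requirements like the structured logging mentioned in the posting above usually reduce to emitting one JSON object per log line, so that shippers like Graylog can index fields without regex parsing. A minimal sketch with Python's standard logging module; the field names are illustrative, not a prescribed schema:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON line for log shippers."""
    def format(self, record):
        payload = {
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
            # Extra attributes attached via logging's `extra=` mechanism.
            "service": getattr(record, "service", "unknown"),
        }
        return json.dumps(payload)

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger("checkout")
log.addHandler(handler)
log.setLevel(logging.INFO)

# Emits one machine-parseable JSON line instead of free-form text.
log.info("payment accepted", extra={"service": "checkout-api"})
```

Real deployments would add timestamps and trace IDs (e.g., from OpenTelemetry context), but the one-JSON-object-per-line contract is the part downstream tooling depends on.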
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are looking for a highly skilled and proactive Team Lead DevOps to join our Infrastructure Management Team. In this role, you will lead initiatives to streamline and automate infrastructure provisioning, CI/CD, observability, and compliance processes using GitLab, containerized environments, and modern DevSecOps tooling. You will work closely with application, data, and ML engineering teams to support MLOps workflows (e.g., model versioning, reproducibility, pipeline orchestration) and implement AIOps practices for intelligent monitoring, anomaly detection, and automated root cause analysis. Your goal will be to deliver secure, scalable, and observable infrastructure across environments.

Key Responsibilities
- Architect and maintain GitLab CI/CD pipelines to support deployment automation, environment provisioning, and rollback readiness.
- Implement standardized, reusable CI/CD templates for application, ML, and data services.
- Collaborate with system engineers to ensure secure, consistent infrastructure-as-code deployments using Terraform, Ansible, and Docker.
- Integrate security tools such as Vault, Trivy, tfsec, and InSpec into CI/CD pipelines.
- Govern infrastructure compliance by enforcing policies around secret management, image scanning, and drift detection.
- Lead internal infrastructure and security audits and maintain compliance records where required.
- Define and implement observability standards using OpenTelemetry, Grafana, and Graylog.
- Collaborate with developers to integrate structured logging, tracing, and health checks into services.
- Enable root cause detection workflows and performance monitoring for infrastructure and deployments.
- Work closely with application, data, and ML teams to support provisioning, deployment, and infra readiness.
- Ensure reproducibility and auditability in data/ML pipelines via tools like DVC and MLflow.
- Participate in release planning, deployment checks, and incident analysis from an infrastructure perspective.
- Mentor junior DevOps engineers and foster a culture of automation, accountability, and continuous improvement.
- Lead daily standups, retrospectives, and backlog grooming sessions for infrastructure-related deliverables.
- Drive internal documentation, runbooks, and reusable DevOps assets.

Must Have
- Strong experience with GitLab CI/CD, Docker, and SonarQube for pipeline automation and code quality enforcement.
- Proficiency in scripting languages such as Bash, Python, or shell for automation and orchestration tasks.
- Solid understanding of Linux and Windows systems, including command-line tools, process management, and system troubleshooting.
- Familiarity with SQL for validating database changes, debugging issues, and running schema checks.
- Experience managing Docker-based environments, including container orchestration using Docker Compose, container lifecycle management, and secure image handling.
- Hands-on experience supporting MLOps pipelines, including model versioning, experiment tracking (e.g., DVC, MLflow), orchestration (e.g., Airflow), and reproducible deployments for ML workloads.
- Hands-on knowledge of test frameworks such as PyTest, Robot Framework, REST-assured, and Selenium.
- Experience with infrastructure testing tools like tfsec, InSpec, or custom Terraform test setups.
- Strong exposure to API testing, load/performance testing, and reliability validation.
- Familiarity with AIOps concepts, including structured logging, anomaly detection, and root cause analysis using observability platforms (e.g., OpenTelemetry, Prometheus, Graylog).
- Exposure to monitoring/logging tools like Grafana, Graylog, and OpenTelemetry.
- Experience managing containerized environments for testing and deployment, aligned with security-first DevOps practices.
- Ability to define CI/CD governance policies, pipeline quality checks, and operational readiness gates.
- Excellent communication skills and proven ability to lead DevOps initiatives and interface with cross-functional stakeholders.
(ref:hirist.tech)
Posted 1 week ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Summary
We are seeking a skilled and proactive DevOps Engineer with hands-on experience in modern infrastructure tools and cloud-native technologies. The ideal candidate will have a strong foundation in Terraform, Kubernetes (including EKS), containerization, and CI/CD systems, and will play a key role in automating infrastructure provisioning, securing communication layers, and maintaining application uptime in production environments.

Key Responsibilities
- Design, develop, and manage infrastructure as code using Terraform.
- Containerize and deploy applications using Docker and Amazon EKS with Persistent Volumes.
- Configure and maintain secure communication across services using TLS/mTLS.
- Manage application deployments using Argo CD, ensuring reliable GitOps-based workflows.
- Implement and troubleshoot Kubernetes concepts, including: Pods, Services, and Deployments; taints and tolerations; node affinities; and resource requests/limits.
- Proactively monitor and troubleshoot application performance and infrastructure issues.
- Collaborate with development teams to define and implement CI/CD pipelines and release automation.
- Work with configuration management tools, comparing Ansible vs. Terraform for use-case efficiency.
- Assist in basic database administration tasks for MySQL and MongoDB, including connectivity, configuration, and availability.

Skills
- Strong proficiency in Terraform (infrastructure as code, modules, state management).
- Experience with Docker and Amazon EKS (Elastic Kubernetes Service).
- Understanding and implementation of Persistent Volumes in Kubernetes.
- Deep understanding of TLS/mTLS for secure service-to-service communication.
- Hands-on experience with Argo CD for GitOps-based application deployments.
- Strong grasp of Kubernetes architecture, troubleshooting, and advanced concepts like taints, tolerations, and node management.
- Familiarity with Ansible and ability to differentiate its use cases from Terraform.
- Basic operational knowledge of MySQL and MongoDB.
- Strong scripting skills (Bash, Python, or similar) for automation.

Qualifications
- Experience with other cloud platforms (e.g., GCP, Azure) in addition to AWS.
- Knowledge of service mesh technologies (Istio, Linkerd) is a plus.
- Monitoring and observability experience using tools like Prometheus, Grafana, and the ELK/EFK stack.
- Relevant certifications (CKA, AWS DevOps Engineer, Terraform Associate) are a plus.
(ref:hirist.tech)
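The taints-and-tolerations concept called out above boils down to a small matching rule: a pod may land on a tainted node only if one of its tolerations matches the taint. A simplified Python model of that rule, ignoring effects and tolerationSeconds; this is illustrative only, not the real Kubernetes client API:

```python
def tolerates(taint, tolerations):
    """Return True if any toleration matches the taint (simplified model).

    Mirrors the Kubernetes rule that the Exists operator ignores value,
    while Equal requires both key and value to match.
    """
    for t in tolerations:
        # A toleration with no key (plus Exists) matches any taint key.
        if t.get("key") not in (None, taint["key"]):
            continue
        op = t.get("operator", "Equal")
        if op == "Exists":
            return True
        if op == "Equal" and t.get("value") == taint["value"]:
            return True
    return False

node_taint = {"key": "dedicated", "value": "gpu", "effect": "NoSchedule"}
pod_tolerations = [{"key": "dedicated", "operator": "Equal", "value": "gpu"}]
print(tolerates(node_taint, pod_tolerations))  # this pod can land on the tainted node
```

Node affinities are the complementary mechanism: taints keep general pods off special nodes, while affinities pull specific pods onto them.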
Posted 1 week ago
7.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Principal Engineer / Subject Matter Expert (SME), Multi-Cloud & Digital Transformation
Location: Mumbai
Minimum Experience: 8+ years

Description
As a team lead for 15+ SREs and a multi-cloud SME in our CTO function, you will shape the customer's cloud strategy and ensure seamless integration across Azure, OCI, and AWS environments. This requires expertise in enterprise-level cloud transformations, delivering scalable, secure, and cost-efficient infrastructure solutions, along with a strong background in infrastructure design, cloud migration, and modernizing legacy systems to cloud-native architectures.

Key Responsibilities
Cloud Strategy & Modernization:
- Lead comprehensive cloud transformations, guiding migrations from monolithic to microservices architectures.
- Drive DevOps transformation, including Kubernetes migration, data transfer optimization, advanced infrastructure automation, and CI/CD pipelines.
- Oversee SRE process implementation for operational excellence, observability, developer training, and a cloud security foundation.
- Advise on efficient resource utilization, enhanced scalability, improved fault tolerance, and streamlined Kubernetes deployments.
Cloud Governance & Security Posture:
- Develop and implement DevSecOps pipelines, ensuring multi-cloud governance.
- Manage WAF optimization and Security Operations & SIEM, plus VAPT and fixes.
- Ensure compliance with disaster recovery and business continuity plans.
Multi-Cloud Architecture & Operations (AWS, Azure, OCI):
- Provide expert guidance on network provisioning across Azure, AWS, and OCI.
- Oversee compute services (Azure VMs, AWS EC2, OCI Compute Instances) and storage solutions.
- Drive Infrastructure as Code (IaC) initiatives using Terraform, ARM Templates, and Ansible.
- Implement and manage CI/CD pipelines for agile DevOps, utilizing Jenkins, ArgoCD, Ansible, and GitLab.
Cost Optimization:
- Focus on "low-hanging fruit" to optimize cloud costs, right-sizing EC2, cache, DMS, and database resources.
Required Skills & Experience
- 7-8 years of rich experience as an Infrastructure Solution Architect or in a similar senior role, specializing in designing and deploying multi-cloud solutions using Azure, AWS, and Oracle Cloud Infrastructure (OCI).
- Hands-on experience with Infrastructure as Code (IaC) using Terraform.
- Proficient in designing and managing Kubernetes clusters.
- Extensive experience with DevOps tools such as Jenkins, ArgoCD, and Ansible, and monitoring tools like Datadog.
- Excellent collaboration and communication skills.
- Certified Kubernetes Administrator (CKA) or equivalent.
If you are a visionary multi-cloud SME ready to lead complex digital transformations in the financial sector, we encourage you to apply.
(ref:hirist.tech)
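The "low-hanging fruit" of right-sizing mentioned above starts from simple utilization arithmetic. A toy Python sketch of that calculation; the target threshold and the size ladder are invented for illustration, and real right-sizing must also weigh memory, burst credits, and instance families:

```python
def rightsize(avg_cpu_pct, current_vcpus, target_pct=60):
    """Suggest a vCPU count that would bring average CPU near target_pct.

    Toy model: assumes load scales linearly with vCPUs and ignores
    memory pressure, burst credits, and instance-family constraints.
    """
    needed = avg_cpu_pct * current_vcpus / target_pct
    # Round up to the next size on a typical vendor size ladder.
    for size in (1, 2, 4, 8, 16, 32, 64):
        if size >= needed:
            return size
    return current_vcpus  # already at the top of the ladder

print(rightsize(avg_cpu_pct=12, current_vcpus=8))  # heavily underutilized box -> 2
```

In practice the utilization inputs would come from CloudWatch, Azure Monitor, or Datadog averages over a representative window, not a single sample.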
Posted 1 week ago
2.0 - 3.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
We are looking for an enthusiastic and foundational DevOps Engineer to join our dynamic technical team. This role is perfect for someone early in their DevOps career who is eager to learn and contribute to building a modern, automated infrastructure. You will work closely with senior engineers to support our cloud environments, enhance our CI/CD processes, and help implement best practices in a collaborative setting. This is a hands-on role where you will gain valuable experience with cutting-edge tools and technologies.

Core Responsibilities
Infrastructure and Configuration Support:
- Assist in building and maintaining cloud infrastructure using Terraform.
- Help maintain and execute Ansible playbooks for server configuration and application deployment.
CI/CD Pipeline Support:
- Help maintain and improve CI/CD pipelines using tools like Jenkins, GitLab CI, or GitHub Actions.
- Use version control (Git) effectively for branching, merging, and maintaining the codebase.
Containerization:
- Work with Docker to containerize applications and troubleshoot container-related issues.
- Assist in deploying and managing applications on container platforms like Kubernetes or AWS ECS.
Monitoring and Troubleshooting:
- Use monitoring tools (Prometheus, Grafana, AWS CloudWatch) to track system health and respond to alerts.
- Act as a point of support for troubleshooting application, OS, and network-level issues.
Security and Compliance:
- Assist in deploying and managing security agents (e.g., Wazuh).
- Help enforce security best practices and access policies within the organization.

Requirements
- 2-3 years of professional experience in a DevOps, cloud, or hands-on system administration role with a focus on automation.
- Bachelor's degree in IT, Computer Engineering, or a related field (MCA, MSc IT, etc.).

Foundational Technical Skills
- Cloud platforms: solid understanding of core cloud services on AWS or GCP (e.g., EC2, S3, IAM, RDS).
- CI/CD: experience working with CI/CD tools like Jenkins, GitLab CI, or GitHub Actions.
- Containerization: hands-on experience with Docker is required; familiarity with container orchestration concepts (Kubernetes, ECS) is a major plus.
- Scripting: good knowledge of a scripting language, preferably Bash or Python.
- Version control: proficiency in using Git for daily development workflows.
- Infrastructure/configuration: familiarity with the concepts of Infrastructure as Code (Terraform) and configuration management (Ansible) is highly desirable.

Added Advantages
- Exposure to security tools like Wazuh or Open Policy Agent (OPA).
- Experience using monitoring tools like Prometheus or Grafana.
- Basic knowledge of networking concepts (VPCs, subnets, firewalls).
- Any AWS, GCP, or CNCF certification is a plus.
(ref:hirist.tech)
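Automation scripts in roles like this (deploy hooks, alert responders) routinely wrap flaky network calls in retry-with-backoff. A small generic Python helper, assuming nothing about the target system; the `flaky` function below is a stand-in for any transient-failure-prone call:

```python
import time

def retry(fn, attempts=4, base_delay=0.5, sleep=time.sleep):
    """Call fn, retrying on exception with exponential backoff.

    The sleep function is injectable so tests do not actually wait.
    """
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise  # out of attempts: surface the last error
            sleep(base_delay * (2 ** i))  # 0.5s, 1s, 2s, ...

# Simulated flaky operation: fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

result = retry(flaky, sleep=lambda s: None)
print(result)  # succeeds on the third attempt
```

Production versions usually add jitter to the delay and retry only on specific exception types, so that genuine configuration errors fail fast.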
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
Karnataka
On-site
As a Senior Manager, DevOps at o9, you will lead a team of talented SRE professionals and own the organization's policies and procedures for change management, configuration management, release and deployment management, service monitoring, and problem management, supporting the o9 Digital Brain Platform across major cloud providers (AWS, GCP, Azure, and Samsung Cloud) using cutting-edge CI/CD tools.

Your responsibilities will include deploying, maintaining, and supporting o9 Digital Brain SaaS environments on various cloud platforms; managing the SRE team for quality and efficiency; hiring and nurturing SRE talent; leading the planning, building, configuration, testing, and deployment of software and systems for platform infrastructure and applications; collaborating with internal and external stakeholders to support o9 platform deployment needs; and enhancing system performance and reliability.

To excel in this role, you should possess strong skills in operating system concepts, Linux, and troubleshooting, along with expertise in automation and cloud technologies. A bachelor's degree in Computer Science, Software Engineering, Information Technology, Industrial Engineering, or Engineering Management is required, along with over 10 years of experience managing high-performing teams in roles such as SRE Manager or DevOps Manager. Certification in cloud technologies and Kubernetes administration is preferred.

Your experience should include over 8 years in an SRE role (deploying and maintaining applications, performance tuning, application upgrades, and supporting CI/CD tooling) and over 10 years deploying and maintaining applications on cloud platforms. Proficiency with tools like Jenkins, Ansible, Terraform, and ArgoCD, database administration, and working knowledge of Linux and Windows operating systems are essential.
Your decision-making, problem-solving, critical thinking, and testing skills will be crucial in delivering technical solutions independently with minimal supervision. At o9, we offer a flat organizational culture, great colleagues, and a fun work environment where you can truly make a difference. We prioritize team spirit, transparency, and frequent communication, regardless of hierarchy or distance. If you have a passion for learning, adapt readily to new technologies, and thrive in an international working environment, we welcome you to join us on our journey of driving 10x improvements in enterprise decision-making through AI-powered management. Apply now and be part of our diverse and inclusive team at o9!
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
As an AVP, Cloud Network Engineer at Synchrony, you will be responsible for leading the design, implementation, and optimization of cloud-based network solutions within the Connectivity organization. Your role will involve collaborating with technical engineering and development teams in a matrixed environment, drawing on strong communication skills, project management expertise, and Agile methodologies.

Your key responsibilities will include developing and executing a multi-cloud strategy aligned with business objectives, designing and implementing advanced network infrastructures, ensuring security and compliance with industry standards, optimizing networking solutions for cost and efficiency, and fostering cross-functional collaboration with Engineering, DevOps, and Operations teams.

To qualify for this role, you should have a bachelor's degree in computer science, engineering, or a related field, with a minimum of 5 years of experience in IT infrastructure or cloud architecture. Strong knowledge of connectivity technologies such as AWS/Azure public cloud, DNS, load balancing, and IaC tools like Terraform and Ansible is required. Additionally, experience in highly regulated industries and expertise in designing and securing public cloud environments will be advantageous.

The ideal candidate will possess strong interpersonal skills, a focus on compliance frameworks like PCI DSS and GDPR, and a proactive attitude toward staying updated on industry trends and emerging technologies. This role requires working from 3 PM to 12 AM IST.

For internal applicants, it is essential to understand the mandatory skills required for the role, inform your Manager or HRM before applying, update your Professional Profile on Workday, and meet the specified eligibility criteria. The grade/level for this position is 11 within the Information Technology job family group at Synchrony.
Posted 1 week ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
At Techwave, we constantly strive to foster a culture of growth and inclusivity. We ensure that everyone associated with the brand is challenged at every step and provided with all the necessary opportunities to excel in life. People are at the core of everything we do. Join us! https://techwave.net/join-us/

Who are we?
Techwave is a leading global IT and engineering services and solutions company revolutionizing digital transformations. We believe in enabling clients to maximize their potential and reach a greater market with a wide array of technology services, including, but not limited to, Enterprise Resource Planning, Application Development, Analytics, Digital, and the Internet of Things (IoT). Founded in 2004 and headquartered in Houston, TX, USA, Techwave leverages its expertise in Digital Transformation, Enterprise Applications, and Engineering Services to enable businesses to accelerate their growth. Plus, we're a team of dreamers and doers who are pushing the boundaries of what's possible. And we want YOU to be a part of it.

Role: Linux Admin
Experience: 5+ years
Location: Hyderabad

Job Summary: Seeking a Linux admin with strong experience in Tomcat, JavaEE applications, and SQL Server. Must support SOAP-based services (Apache Axis 1.4), JSP frontends, and applications using Spring 3.x, Struts 1.x, and Hibernate.

Responsibilities
- Manage RHEL VMs, Linux mounts, and application-related static files.
- Configure and maintain Apache Tomcat (non-clustered, behind a NetScaler load balancer with sticky sessions).
- Support deployment and runtime of SOAP-based JavaEE applications using JSP, jQuery, and Axis 1.4.
- Monitor and troubleshoot application issues (logs, configs, ports, sessions, etc.).
- Support apps using Spring 3.x, Hibernate, and Struts 1.x (for specific modules).
- Interact with MS SQL Server (basic SQL queries, schema validation).
- Coordinate with teams using SAP HANA / SAP PI services.
- Maintain ActiveMQ (single-topic usage).
- Use Ansible for server and middleware configuration updates.
- Collaborate with teams using Bitbucket and Maven for code/deployment.

Skills
- 5-7 years of Linux (RHEL) administration experience.
- Solid hands-on experience with Tomcat (including load balancer configuration, SSL, and session management).
- Exposure to JavaEE application architecture; familiarity with Spring, Hibernate, and Struts is a plus.
- Understanding of SOAP-based services (Axis 1.4), JSP, and client-side tech (jQuery).
- Experience with MS SQL Server and basic SQL.
- Working knowledge of tools like Maven, Bitbucket, ActiveMQ, and Ansible.
- Strong troubleshooting and communication skills.
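Troubleshooting "logs, configs, ports" on a Tomcat host often starts from server.xml. A sketch using Python's standard-library XML parser on a generic fragment; this is an illustrative snippet, not this environment's actual configuration:

```python
import xml.etree.ElementTree as ET

# Generic server.xml fragment; real files live under $CATALINA_BASE/conf.
server_xml = """
<Server port="8005" shutdown="SHUTDOWN">
  <Service name="Catalina">
    <Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000"/>
    <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" scheme="https"/>
  </Service>
</Server>
"""

root = ET.fromstring(server_xml)
# Collect every Connector listener port to verify against ss/netstat output.
ports = [c.get("port") for c in root.iter("Connector")]
print(ports)
```

The same pattern extends to checking SSL attributes or connection timeouts across a fleet, e.g., driven from an Ansible inventory.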
Posted 1 week ago
2.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Job Description
We are one of the fastest-growing partners to the world's major cloud provider, AWS. Workmates is looking for a passionate and customer-obsessed AWS Solutions Architect to join our AWS practice. You will help drive innovation, build differentiated solutions, and define new customer experiences for our customers. You'll work with smart people across our industry specialist organizations and technology groups to help our customers get the most out of AWS in their cloud journey. Choosing Workmates and the AWS practice will take your AWS experience and skills to the next level and allow you to work in an innovative and collaborative environment. At Workmates, you can help a fast-growing AWS partner lead the path to cloud-native transformation and work on the leading edge of cloud. Choose Workmates and make delivering innovative work part of your extraordinary career.

At Workmates, we believe that people are our biggest asset. Let's bring in best-in-class cloud-native operations together. Come join our mission of building innovations across cloud management, media, DevOps, automation, IoT, security, and more. Be a part of a team where independence and ownership are respected, so you can be you.

Role Description
- Build and maintain cloud infrastructure environments.
- Ensure availability, performance, security, and scalability of production systems.
- Collaborate with application teams to apply DevOps practices in the development lifecycle.
- Create solution prototypes and conduct proofs of concept for new tools.
- Design repeatable, automated, and scalable processes to increase efficiency and improve software quality, such as managing Infrastructure as Code and working on internal tooling that simplifies workflows.
- Automate and streamline our operations and processes.
- Troubleshoot and diagnose issues/outages and provide operational support.
- Engage in incident handling and support a culture of post-mortems and knowledge sharing.

Requirements
- 2+ years of hands-on experience building and supporting large-scale environments.
- Strong architecting and implementation experience with AWS Cloud is mandatory.
- Experience with AWS CloudFormation and Terraform.
- Experience with Docker containers, and build and deployment in a container environment.
- Good understanding of and work experience with Kubernetes and EKS.
- Sysadmin/infrastructure background (Linux internals, filesystems, networking).
- Scripting experience; capable of writing Bash scripts.
- Know how to check in code, peer review, and work well with distributed teams.
- Hands-on experience with CI/CD pipeline build and release; strong experience with one of Jenkins, GitLab CI, or Travis CI.
- Hands-on experience with AWS developer tools such as AWS CodePipeline, CodeBuild, CodeDeploy, AWS Lambda, and AWS Step Functions.
- Experience with a log management solution (ELK/EFK or similar).
- Experience with configuration management tools (Ansible or similar).
- Experience using modern monitoring and alerting tools (CloudWatch, Prometheus, Grafana, Opsgenie, etc.).
- Passionate about automating routine tasks and solving production issues.
- Experience in automation testing, script generation, and integration with CI/CD.
- Experience with AWS security (IAM, security groups, KMS, etc.).
- Good to have: experience with database technologies (MongoDB/MySQL, etc.), AWS Professional certifications, CKA/CKAD certifications, knowledge of Python/Go, and experience with service mesh and distributed tracing.
- Knowledge of Scrum/Agile methodology.
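Before logs ever reach an ELK/EFK stack, on-call troubleshooting often starts with quick ad-hoc parsing. A minimal Python sketch counting error lines per service; the log format here is made up for illustration and real formats vary per application:

```python
import re
from collections import Counter

# Hypothetical application log lines (timestamp, service, level, message).
lines = [
    "2024-05-01T10:00:01Z checkout ERROR payment gateway timeout",
    "2024-05-01T10:00:02Z checkout INFO retry scheduled",
    "2024-05-01T10:00:03Z search ERROR index shard unavailable",
    "2024-05-01T10:00:04Z checkout ERROR payment gateway timeout",
]

# Only ERROR lines match; INFO and other levels are skipped.
pattern = re.compile(r"^\S+ (?P<service>\S+) ERROR (?P<msg>.+)$")
errors = Counter(m.group("service") for line in lines if (m := pattern.match(line)))
print(errors.most_common())  # which service is failing most
```

The same counting logic is what a Kibana terms aggregation or a CloudWatch Logs Insights `stats count(*) by service` query performs at scale.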
Posted 1 week ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description
Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.

HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Marketing Title. In this role, you will:
- Work as a full-stack engineer with minimal supervision on end-to-end application design, development, and maintenance activities.
- Work in an agile manner and change priorities depending on criticality.
- Work with the rest of the team to achieve seamless integration.
- Take ownership and see features through until they are deployed.

Requirements
To be successful in this role, you should meet the following requirements:
- Proficiency in NiFi and scripting knowledge in Python or Groovy.
- Java and the Spring Boot framework.
- PostgreSQL and knowledge of microservices.
- GitHub, Jenkins, Ansible, etc., and an understanding of CI/CD concepts.
- Good technical design, problem-solving, and debugging skills.
- Good communication skills and the ability to take ownership.

Good to have:
- Exposure to a cloud platform such as GCP or AWS.
- JIRA automation and Confluence.
- Experience in ETL, ingestion, or similar work.
- Angular/React framework for UI development.
- GitHub Copilot.

You'll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued and respected and opinions count.
We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by – HSBC Software Development India
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Haryana
On-site
As a venture-backed, stealth-stage technology company focused on building next-generation matchmaking and relationship platforms, our mission revolves around reimagining how people connect by leveraging AI, community, and content. Our goal is to create an unparalleled user experience where individuals resonate with the sentiment: "This app gets me."

In the role of Founding DevOps Engineer, you will be instrumental in shaping the reliability, scalability, and developer velocity of our platforms from the outset. Your responsibilities will include designing, implementing, and overseeing scalable, highly available cloud infrastructure on platforms such as GCP and/or AWS, and constructing and managing CI/CD pipelines using tools like Jenkins, GitLab CI/CD, GitHub Actions, or similar solutions. Managing containerized environments through Docker and Kubernetes, specifically EKS/GKE, will also fall within your purview. You will develop Infrastructure as Code (IaC) with tools like Terraform, Ansible, and CloudFormation, and establish observability using tools such as Prometheus, Grafana, the ELK Stack, or equivalent offerings.

Furthermore, you will be responsible for optimizing database infrastructure for systems like MongoDB, PostgreSQL, Redis, and Cassandra, including backups, replication, scaling, and monitoring. Collaboration with backend, iOS, QA, and security teams to streamline and secure software delivery processes will be crucial. Implementing and upholding DevSecOps best practices across environments and workflows, driving automation, environment standardization, and cost optimization, as well as owning uptime, incident response, rollback planning, and postmortems, are integral parts of this role.

**Key Responsibilities:**
- Design, implement, and manage scalable, highly available cloud infrastructure on GCP and/or AWS.
- Build and maintain CI/CD pipelines using Jenkins, GitLab CI/CD, GitHub Actions, or similar tools.
- Manage containerized environments using Docker and Kubernetes (EKS/GKE preferred).
- Develop Infrastructure as Code (IaC) using Terraform, Ansible, and CloudFormation.
- Set up observability using tools like Prometheus, Grafana, ELK Stack, or similar.
- Manage and optimize database infrastructure for systems like MongoDB, PostgreSQL, Redis, and Cassandra, including backups, replication, scaling, and monitoring.
- Collaborate closely with backend, iOS, QA, and security teams to streamline and secure software delivery.
- Implement and enforce DevSecOps best practices across environments and workflows.
- Drive automation, environment standardization, and cost optimization across infra components.
- Own uptime, incident response, rollback planning, and postmortems.

**Required Skills & Qualifications:**
- 2-7 years of hands-on experience in DevOps, SRE, or Infrastructure Engineering roles.
- Strong command of AWS or GCP services: compute, networking, IAM, monitoring, etc.
- Experience building and scaling CI/CD pipelines for rapid, safe releases.
- Solid knowledge of Kubernetes, Helm, and container orchestration best practices.
- Proficient in scripting and automation using Python, Bash, or Go.
- Experienced in Infrastructure as Code (Terraform, Ansible, CloudFormation).
- Solid grasp of networking fundamentals: DNS, firewalls, security groups, load balancers, etc.
- Comfortable with monitoring, logging, and alerting stacks (ELK, Prometheus, Datadog, etc.).
- Strong debugging, incident resolution, and system design thinking.
- Bias for ownership, a hands-on mindset, and the ability to thrive in ambiguity.

**Nice to Have:**
- Exposure to startup environments or zero-to-one infra builds.
- Interest in privacy, security, or compliance in consumer apps.
- Familiarity with cost optimization, autoscaling, or spot instance strategies.
- Experience with mobile app backend systems (push infra, image/video processing, etc.).
- Experience with MLOps pipelines (e.g., model deployment, versioning, monitoring).
- Understanding of GPU optimization and autoscaling for AI/ML workloads.

**Why Join Us?**
- Be the first DevOps hire and architect a modern infrastructure from scratch.
- Build infrastructure that supports real-time AI + social interactions at scale.
- Founding team of repeat founders and technology leaders backed by top-tier investors.
- Autonomy + velocity + product-first culture + global ambition.
- All engineers get ESOPs.
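The role above owns rollback planning and canary-style deployments and asks for Python automation skills. As a minimal, hypothetical sketch of the kind of promotion gate such a pipeline might script (thresholds and function name are illustrative assumptions, not from the listing):

```python
# Hypothetical canary promotion gate: compare observed canary metrics
# against fixed thresholds and decide whether to promote or roll back.

def should_promote(error_rate: float, p95_latency_ms: float,
                   max_error_rate: float = 0.01,
                   max_p95_ms: float = 500.0) -> bool:
    """Return True if the canary meets both promotion criteria."""
    return error_rate <= max_error_rate and p95_latency_ms <= max_p95_ms


# A healthy canary (0.2% errors, 320 ms p95) passes the gate;
# a 3%-error canary fails and would trigger a rollback.
healthy = should_promote(0.002, 320.0)
unhealthy = should_promote(0.03, 280.0)
```

In a real pipeline the two inputs would come from the monitoring stack (Prometheus, Datadog, etc.) rather than being passed in by hand.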
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
You will be responsible for leading automation, CI/CD, and cloud infrastructure initiatives in collaboration with Development, QA, Security, and IT Operations teams. The role combines hands-on implementation, strategic architecture planning, mentoring, and on-call support. Your expertise in containers, CI tools, and version control will be crucial to ensuring reliability, scalability, and continuous improvement.

Your key responsibilities will include designing, building, and maintaining CI/CD pipelines using Jenkins or an equivalent tool, integrating them with Git for version control. You will also handle containerization and orchestration tasks such as creating Docker images, managing container lifecycles, and deploying and scaling services in Kubernetes clusters, whether self-managed or cloud-managed.

You will provision and automate cloud infrastructure using IaC tools like Terraform or Ansible, covering compute, networking, and storage resources in AWS, Azure, or GCP. Implementing monitoring, logging, and observability solutions such as Prometheus, ELK, or Grafana will also be part of your responsibilities, enabling you to track performance, set alerts, and troubleshoot production issues.

You will play a crucial role in system reliability and incident management by participating in on-call rotations, conducting root cause analysis, and overseeing post-incident remediation. Security and compliance work follows DevSecOps principles: container image scanning, IAM policies, secrets management, and vulnerability remediation. As a senior member of the team, you will mentor junior engineers, suggest process improvements, and help transition manual workflows to automated pipelines.
Proficiency in the following technical skills will be essential for this role:

- Containers: Docker (4/5) - image builds, docker-compose, multistage CI integrations
- Orchestration: Kubernetes (3.5/5) - daily cluster operations, deployments, Helm usage
- Version Control: Git (4/5) - branching strategy, pull requests, merge conflict resolution
- CI/CD Automation: Jenkins (4/5) - pipeline scripting (Groovy/Pipeline), plugin ecosystem, pipeline as code
- Cloud Platforms: AWS / Azure / GCP (4/5) - infrastructure provisioning, cost optimization, IAM setup
- Scripting & Automation: Python, Bash, or equivalent - writing automation tools, CI hooks, server scripts
- Infrastructure as Code: Terraform, Ansible, or similar - declarative templates, module reuse, environment isolation
- Monitoring & Logging: Prometheus, ELK, Grafana, etc. - alert definitions, dashboards, log aggregation

Your expertise in these areas will be instrumental in driving the success of the automation, CI/CD, and cloud infrastructure initiatives within the organization.
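The role above calls for writing CI hooks in Python or Bash. One common hook of this kind, sketched here under the assumption of conventional-commit messages (the convention and function name are illustrative, not specified by the listing), derives the next release version from recent commits:

```python
# Hypothetical CI hook: compute the next semantic version from commit
# messages following a conventional-commit style.

def next_version(current: str, commit_messages: list[str]) -> str:
    """Bump MAJOR on 'BREAKING CHANGE', MINOR on 'feat:', else PATCH."""
    major, minor, patch = (int(part) for part in current.split("."))
    if any("BREAKING CHANGE" in msg for msg in commit_messages):
        return f"{major + 1}.0.0"
    if any(msg.startswith("feat:") for msg in commit_messages):
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"
```

A Jenkins pipeline could call such a script after `git log` and use the result to tag the Docker image it builds.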
Posted 1 week ago
7.0 - 15.0 years
0 Lacs
Kochi, Kerala
On-site
You will be responsible for creating scalable cloud-native applications on one of the hyperscalers (AWS, Azure, or GCP). Your role will involve developing and maintaining highly available systems on Kubernetes. You will need to effectively demonstrate and communicate proposed solutions to stakeholders. Setting project goals, ensuring timely execution, mentoring engineers for continuous technical improvement, and collaborating with project management to monitor progress and implementation are also key aspects of this role. To excel in this position, you must have 7 to 15 years of relevant work experience along with a BCA, B.Tech, or MCA qualification. Your skill set should include proficiency in AWS, Azure, GCP, Docker, Kubernetes, Jenkins, Harness, Git, Chef, Puppet, and Ansible. Strong troubleshooting and analytical skills are essential, along with excellent communication, collaboration, and client management abilities.
Posted 1 week ago
3.0 - 8.0 years
0 Lacs
Hyderabad, Telangana
On-site
As an Azure Development Consultant/Senior Consultant/Specialist at Hitachi Solutions India Pvt Ltd, you will draw on your 3 to 8 years of experience with the Microsoft Azure stack: IoT Hub, Event Hub, Azure Functions, Azure Logic Apps, Azure Data Factory, Cosmos DB, Azure Databricks, Azure ML, IoT Edge, Docker, and Python. You will apply your expertise in serverless functions (using Java/Python), App Services, Azure Kubernetes Service, and more to configure, monitor, and handle scale scenarios for network service components on Azure. Experience implementing Azure security, authentication, and single sign-on will be beneficial for the role.

In addition to these primary skills, you will be expected to deploy cloud applications in Azure following industry best practices, design cloud applications with design patterns and principles in mind, and tune cloud-based applications for optimal performance. Familiarity with agile/DevOps environments, continuous integration/deployment, and automation tools like Jenkins, Puppet, Ansible, and Terraform will be a valuable asset. Candidates with a Microsoft certification in Azure development will be preferred.

Your role will involve implementing highly scalable solutions, translating business requirements into technical designs, understanding and improving current applications and technical architectures, and following a structured software development lifecycle. Strong code review skills will be essential to ensure project quality and resolve performance issues. Experience with Azure Kubernetes, DevOps, microservices, Azure Service Bus, Media Services, Azure Storage, Azure AI modules, and cloud-based machine learning on Azure will make you especially well-suited for this position.
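The listing above centers on serverless functions handling IoT Hub events. As a framework-agnostic sketch of that pattern (Azure Functions wraps handlers in its own SDK types; the event shape, field names, and threshold here are purely hypothetical):

```python
# Hypothetical serverless-style handler for an IoT device event.
# Validates the payload and returns an HTTP-like response dict.
import json

def handle_device_event(event: dict) -> dict:
    """Validate an IoT-style payload and flag over-temperature devices."""
    required = {"device_id", "temperature"}
    missing = required - event.keys()
    if missing:
        return {"status": 400,
                "body": json.dumps({"error": f"missing: {sorted(missing)}"})}
    alert = event["temperature"] > 75.0  # hypothetical alert threshold
    return {"status": 200,
            "body": json.dumps({"device": event["device_id"], "alert": alert})}
```

In an actual Azure Function, the same validation logic would sit inside the SDK's handler signature, with the response serialized into an HTTP response or forwarded to an Event Hub.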
Your commitment to continuous learning and growth will be supported by our dynamic work environment, experienced leadership team, and regular training opportunities. Join us at Hitachi Solutions India Pvt Ltd to be part of a team that values creativity, innovation, and personal and professional growth. Your contributions will play a crucial role in our pursuit of profitable growth and expanded opportunities as we strive to deliver superior value to customers worldwide through innovative solutions.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
As a highly motivated software engineer on the NVIDIA NetQ team, you will work on a cutting-edge network management and telemetry system in the cloud, designed using modern principles at internet scale. NVIDIA NetQ offers a highly scalable network operations toolset that provides real-time visibility, troubleshooting, and validation for Cumulus fabrics. Using telemetry, NetQ delivers actionable insights into the health of your data center network and seamlessly integrates the fabric into your DevOps ecosystem.

Your primary responsibilities will include building and maintaining essential infrastructure components such as NoSQL databases (Cassandra, MongoDB), TSDB, and Kafka. You will maintain CI/CD pipelines to automate the build, test, and deployment processes, and extend the automation of manual workflows through tools like Jenkins, Ansible, and Terraform. Ensuring security by performing scans and handling security vulnerabilities in infrastructure components will be a crucial part of your role, as will facilitating the triage and resolution of production issues to improve system reliability and customer service.

To be successful in this role, you should have at least 5 years of experience with complex microservices-based architectures along with a Bachelor's degree. Proficiency in Kubernetes and Docker/containerd is essential, as is familiarity with modern deployment architectures for non-disruptive cloud operations, including blue-green and canary rollouts. You should be an automation expert with hands-on experience in frameworks like Ansible and Terraform. Strong knowledge of NoSQL databases (preferably Cassandra), Kafka/Kafka Streams, and Nginx is required, along with expertise in cloud platforms such as AWS, Azure, or GCP and a solid programming background in languages like Scala or Python.
Understanding best practices for managing a highly available and secure production infrastructure is crucial for this role. To stand out, consider showcasing experience with APM tools like Dynatrace, Datadog, AppDynamics, or New Relic; skills in Linux/Unix administration; familiarity with Prometheus/Grafana; and experience implementing highly scalable log aggregation systems using the ELK stack or similar technologies. Proficiency in building robust metrics collection and alerting infrastructure will also be advantageous.

Joining NVIDIA means becoming part of a team at the forefront of technological innovation. As a company known for its forward-thinking and dedicated workforce, we are always looking for creative, passionate, and self-motivated individuals to contribute to our groundbreaking developments in Artificial Intelligence, High-Performance Computing, and Visualization. If you are eager to be part of this journey, we encourage you to apply and join us in shaping the future of technology.

(Note: Job Reference Number - JR1998880)
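The NetQ role above revolves around reducing streams of telemetry events (consumed from Kafka) into fleet-health insights. As a toy, dependency-free sketch of that aggregation step (event fields, device names, and statuses are hypothetical illustrations):

```python
# Hypothetical telemetry reducer: keep only each device's newest event,
# then count devices per reported status.
from collections import Counter

def summarize(events: list[dict]) -> dict:
    """Return {status: device_count} based on each device's latest event."""
    latest: dict[str, dict] = {}
    for event in events:
        device = event["device"]
        if device not in latest or event["ts"] > latest[device]["ts"]:
            latest[device] = event
    return dict(Counter(event["status"] for event in latest.values()))
```

A production system would do this continuously (e.g., with Kafka Streams or a windowed consumer) and persist results to a time-series store, but the reduction logic is the same shape.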
Posted 1 week ago