10.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
At Broadridge, we've built a culture where the highest goal is to empower others to accomplish more. If you're passionate about developing your career, while helping others along the way, come join the Broadridge team.

The role will work in a hybrid cloud (on-prem/AWS) environment that merges software and systems engineering to build and run large-scale, distributed, fault-tolerant systems. The successful candidate will have at least 10 years of experience in development disciplines, with 5 of those years in DevOps engineering, and will play an important role in designing and delivering solutions that accelerate the speed and confidence of delivery to production.

Must have:
- AWS: 5+ years commercial experience (SSM, Parameter Store, Glue, IAM, Route53, CloudFormation, VPC, Security Groups, Subnets, ECS, EKS, ASG, ELB, Lambda functions and API Gateway), HashiCorp Vault, Sonatype Nexus, JFrog Artifactory, Docker, TFS.
- In-depth knowledge of CI/CD pipelines and experience with one or more common tools such as Jenkins, TeamCity, GitLab, Chef and/or AWS CodePipeline.
- Proficiency with one or more scripting or programming languages (Java and/or Python, Groovy, Shell, Bash, Ruby, PowerShell) on Windows/Linux platforms.
- Experience building DevOps environments, with scripting and automation (Infrastructure as Code) skills in Terraform and/or Ansible.
- Work on ways to automate and improve current deployment and release processes (both on-prem and AWS) to reduce the lead time of business changes and increase deployment frequency.
- Experience leading a small team and driving change to improve DevOps practices.
- A can-do attitude, bringing energy and passion to the role.
- Ability to work from home in a focused environment when access to Broadridge offices is limited or locked down.
- Availability of common timeslots to join calls and coordinate with the UK team, with whom the candidate will work actively.
Nice to have:
- Experience in the financial industry.
- Working knowledge of web technologies (.NET, IIS, SQL, JavaScript, JSON, HTML, CSS).
- Experience with microservices and API gateways.
- Experience with databases such as AWS Aurora/RDS/PostgreSQL.
- Experience with monitoring tools such as Datadog, CA APM, Splunk, AWS CloudWatch.
- Familiarity with AWS alerting services.
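The lead-time and deployment-frequency goals above are measurable. As a rough sketch (the data and function names here are invented for illustration, not part of the role), both can be derived from commit and deployment timestamps:

```python
from datetime import datetime, timedelta

def lead_time_hours(commit_time: datetime, deploy_time: datetime) -> float:
    """Lead time for a change: from commit to running in production."""
    return (deploy_time - commit_time).total_seconds() / 3600

def deploys_per_week(deploy_times: list) -> float:
    """Average deployment frequency over the observed window."""
    if len(deploy_times) < 2:
        return float(len(deploy_times))
    span = max(deploy_times) - min(deploy_times)
    weeks = max(span.total_seconds() / (7 * 24 * 3600), 1e-9)
    return len(deploy_times) / weeks

# Hypothetical history: 5 deploys spread over one week.
deploys = [datetime(2024, 1, 1) + timedelta(days=d) for d in (0, 2, 3, 6, 7)]
print(deploys_per_week(deploys))  # 5.0
print(lead_time_hours(datetime(2024, 1, 1, 9), datetime(2024, 1, 1, 15)))  # 6.0
```

Feeding real pipeline timestamps (e.g. from Jenkins build records) into such a calculation is one way to show that automation work actually moved these metrics.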
Posted 3 weeks ago
6.0 - 11.0 years
10 Lacs
Hyderābād
On-site
Experience: 6-11 years
Work Location: Hyderabad or Greater Noida

Skill Set (Must Have):
- Setting up and managing firewalls (Fortinet, Forcepoint)
- Working with routers and switches
- VLAN creation and management
- Setting up site-to-site and client VPNs
- Cloud networking: VPC, subnets, Transit Gateway, load balancers
- Monitoring network performance and troubleshooting connectivity issues

Job Types: Full-time, Permanent
Pay: From ₹1,000,000.00 per year
Schedule: Rotational shift
Ability to commute/relocate: Hyderabad, Telangana: Reliably commute or planning to relocate before starting work (Required)

Application Question(s):
- What is your current CTC?
- What is your expected CTC?
- How many years of experience do you have with Fortinet or Forcepoint firewalls?
- How many years of experience do you have in cloud networking?
- Which level are you supporting?
- Have you applied or interviewed at Coforge recently?
- What is your notice period/LWD?

Education: Bachelor's (Required)
Experience: 6 years total work (Required)
Work Location: In person
Posted 3 weeks ago
1.5 years
0 Lacs
Hyderābād
On-site
Job Description:

Job Purpose
Intercontinental Exchange, Inc. (ICE) presents a unique opportunity to work with cutting-edge technology and business challenges in the financial services sector. ICE team members work across departments and traditional boundaries to innovate and respond to industry demand. A successful candidate will be able to multitask in a dynamic team-based environment, demonstrating strong problem-solving and decision-making abilities and the highest degree of professionalism.

We are seeking an experienced AWS solution design engineer/architect to join our infrastructure cloud team. The infrastructure cloud team is responsible for internal services that provide developer collaboration tools, the build and release pipeline, and a shared AWS cloud services platform. The infrastructure cloud team enables engineers to build product features and deliver them into production efficiently and confidently.

Responsibilities
- Develop utilities or further existing application and system management tools and processes that reduce manual effort and increase overall efficiency
- Build and maintain Terraform/CloudFormation templates and scripts to automate and deploy AWS resources and configuration changes
- Review and refine design and architecture documents presented by teams for operational readiness, fault tolerance and scalability
- Monitor and research cloud technologies and stay current with trends in the industry
- Participate in an on-call rotation and identify opportunities for reducing toil and avoiding technical debt to reduce support and operations load

Knowledge and Experience
Essential
- 1.5+ years of experience in a DevOps (preferably DevSecOps) or SRE role in an AWS cloud environment.
- 1.5+ years' strong experience configuring, managing, solutioning, and architecting with AWS (Lambda, EC2, ECS, ELB, EventBridge, Kinesis, Route 53, SNS, SQS, CloudTrail, API Gateway, CloudFront, VPC, Transit Gateway, IAM, Security Hub, Service Mesh)
- Python or Golang proficiency
- A proven background implementing continuous integration and delivery for projects
- A track record of introducing automation to solve administrative and other business-as-usual tasks

Beneficial
- Proficiency in Terraform, CloudFormation, or Ansible
- A history of delivering services developed with an API-first approach
- A system administration, network, or security background
- Prior experience working with environments of significant scale (thousands of servers)
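Several of the services named above fit the common Lambda-behind-API-Gateway pattern. A minimal sketch, assuming API Gateway's proxy-integration event shape (the handler and its field choices are illustrative, not ICE's code):

```python
import json

def handler(event, context):
    """Minimal AWS Lambda handler for an API Gateway proxy integration.

    API Gateway passes the HTTP request as `event`; returning a dict with
    statusCode/headers/body produces the HTTP response.
    """
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Because the handler is a plain function, it can be exercised without AWS:
resp = handler({"queryStringParameters": {"name": "ICE"}}, None)
print(resp["statusCode"], resp["body"])
```

Keeping handlers this thin, with business logic in separately testable modules, is what makes the continuous-delivery requirement above practical for serverless code.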
Posted 3 weeks ago
1.5 years
0 Lacs
Hyderābād
On-site
Job Description:

Job Purpose
Intercontinental Exchange, Inc. (ICE) presents a unique opportunity to work with cutting-edge technology and business challenges in the financial services sector. ICE team members work across departments and traditional boundaries to innovate and respond to industry demand. A successful candidate will be able to multitask in a dynamic team-based environment, demonstrating strong problem-solving and decision-making abilities and the highest degree of professionalism.

We are seeking an experienced AWS solution design engineer/architect to join our infrastructure cloud team. The infrastructure cloud team is responsible for internal services that provide developer collaboration tools, the build and release pipeline, and a shared AWS cloud services platform. The infrastructure cloud team enables engineers to build product features and deliver them into production efficiently and confidently.

Responsibilities
- Develop utilities or further existing application and system management tools and processes that reduce manual effort and increase overall efficiency
- Build and maintain Terraform/CloudFormation templates and scripts to automate and deploy AWS resources and configuration changes
- Review and refine design and architecture documents presented by teams for operational readiness, fault tolerance and scalability
- Monitor and research cloud technologies and stay current with trends in the industry
- Participate in an on-call rotation and identify opportunities for reducing toil and avoiding technical debt to reduce support and operations load

Knowledge and Experience
Essential
The applicant is expected to have the following skills and experience on appointment:
- 1.5+ years of experience in a DevOps (preferably DevSecOps) or SRE role in an AWS cloud environment.
- 1.5+ years' strong experience configuring, managing, solutioning, and architecting with AWS (Lambda, EC2, ECS, ELB, EventBridge, Kinesis, Route 53, SNS, SQS, CloudTrail, API Gateway, CloudFront, VPC, Transit Gateway, IAM, Security Hub, Service Mesh)
- Python or Golang proficiency
- A proven background implementing continuous integration and delivery for projects
- A track record of introducing automation to solve administrative and other business-as-usual tasks

Beneficial
The applicant will receive extra consideration if they have the following skills and experience:
- Proficiency in Terraform, CloudFormation, or Ansible
- A history of delivering services developed with an API-first approach
- A system administration, network, or security background
- Prior experience working with environments of significant scale (thousands of servers)
Posted 3 weeks ago
2.0 years
0 Lacs
India
Remote
This isn't your typical DevOps role. This is your chance to engineer the backbone of a next-gen AI-powered SaaS platform, where modular agents drive dynamic UI experiences, all running on a serverless AWS infrastructure with a Salesforce and SaaS-native backend. We're not building features; we're building an intelligent agentic ecosystem. If you've led complex multi-cloud builds, automated CI/CD pipelines with Terraform, and debugged AI systems in production, this is your arena.

About Us
We're a forward-thinking organization on a mission to reshape how businesses leverage cloud technologies and AI. Our approach is centered around delivering high-impact solutions that unify platforms across AWS, enterprise SaaS, and Salesforce. We don't just deliver software; we craft robust product ecosystems that redefine user interactions, streamline processes, and accelerate growth for our clients.

The Role
We are seeking a hands-on Agentic AI Ops Engineer who thrives at the intersection of cloud infrastructure, AI agent systems, and DevOps automation. In this role, you will build and maintain the CI/CD infrastructure for Agentic AI solutions using Terraform on AWS, while also developing, deploying, and debugging intelligent agents and their associated tools. This position is critical to ensuring scalable, traceable, and cost-effective delivery of agentic systems in production environments.

The Responsibilities

CI/CD Infrastructure for Agentic AI
- Design, implement, and maintain CI/CD pipelines for Agentic AI applications using Terraform, AWS CodePipeline, CodeBuild, and related tools.
- Automate deployment of multi-agent systems and associated tooling, ensuring version control, rollback strategies, and consistent environment parity across dev/test/prod.

Agent Development & Debugging
- Collaborate with ML/NLP engineers to develop and deploy modular, tool-integrated AI agents in production.
- Lead the effort to create debuggable agent architectures, with structured logging, standardized agent behaviors, and feedback integration loops.
- Build agent lifecycle management tools that support quick iteration, rollback, and debugging of faulty behaviors.

Monitoring, Tracing & Reliability
- Implement end-to-end observability for agents and tools, including runtime performance metrics, tool invocation traces, and latency/accuracy tracking.
- Design dashboards and alerting mechanisms to capture agent failures, degraded performance, and tool bottlenecks in real time.
- Build lightweight tracing systems that help visualize agent workflows and simplify root cause analysis.

Cost Optimization & Usage Analysis
- Monitor and manage cost metrics associated with agentic operations, including API call usage, toolchain overhead, and model inference costs.
- Set up proactive alerts for usage anomalies, implement cost dashboards, and propose strategies for reducing operational expenses without compromising performance.

Collaboration & Continuous Improvement
- Work closely with product, backend, and AI teams to evolve the agentic infrastructure design and tool orchestration workflows.
- Drive the adoption of best practices for Agentic AI DevOps, including retraining automation, secure deployments, and compliance in cloud-hosted environments.
- Participate in design reviews, postmortems, and architectural roadmap planning to continuously improve reliability and scalability.

Requirements
- 2+ years of experience in DevOps, MLOps, or Cloud Infrastructure with exposure to AI/ML systems.
- Deep expertise in AWS serverless architecture, including hands-on experience with:
  - AWS Lambda: function design, performance tuning, cold-start optimization.
  - Amazon API Gateway: managing REST/HTTP APIs and integrating with Lambda securely.
  - Step Functions: orchestrating agentic workflows and managing execution states.
  - S3, DynamoDB, EventBridge, SQS: event-driven and storage patterns for scalable AI systems.
- Strong proficiency in Terraform to build and manage serverless AWS environments using reusable, modular templates.
- Experience deploying and managing CI/CD pipelines for serverless and agent-based applications using AWS CodePipeline, CodeBuild, CodeDeploy, or GitHub Actions.
- Hands-on experience with agent and tool development in Python, including debugging and performance tuning in production.
- Solid understanding of IAM roles and policies, VPC configuration, and least-privilege access control for securing AI systems.
- Deep understanding of monitoring, alerting, and distributed tracing systems (e.g., CloudWatch, Grafana, OpenTelemetry).
- Ability to manage environment parity across dev, staging, and production using automated infrastructure pipelines.
- Excellent debugging, documentation, and cross-team communication skills.

Benefits
- Health insurance, PTO, and leave time
- Ongoing paid professional training and certifications
- Fully remote work opportunity
- Strong onboarding & training program
- Work timings: 1 pm - 10 pm IST

Next Steps
We're looking for someone who already embodies the spirit of a boundary-breaking AI technologist, someone who's ready to own ambitious projects and push the boundaries of what LLMs can do.
- Apply Now: Send us your resume and answer a few key questions about your experience and vision.
- Show Us Your Ingenuity: Be prepared to talk shop on your boldest AI solutions and how you overcame the toughest technical hurdles.
- Collaborate & Ideate: If selected, you'll workshop a real-world scenario with our team, so we can see firsthand how your mind works.

This is your chance to leave a mark on the future of AI, one LLM agent at a time. We're excited to hear from you!

Our Belief
We believe extraordinary things happen when technology and human creativity unite. By empowering teams with generative AI, we free them to focus on meaningful relationships, innovative solutions, and real impact.
It's more than just code; it's about sparking a revolution in how people interact with information, solve problems, and propel businesses forward. If this resonates with you, if you're driven, daring, and ready to build the next wave of AI innovation, then let's do this. Apply now and help us shape the future.

About Expedite Commerce
At Expedite Commerce, we believe that people achieve their best when technology enables them to build relationships and explore new ideas. So we build systems that free you up to focus on your customers and drive innovations. We have a great commerce platform that changes the way you do business! See more about us at expeditecommerce.com. You can also read about us on https://www.g2.com/products/expedite-commerce/reviews and on Salesforce AppExchange/ExpediteCommerce.

EEO Statement
All qualified applicants to Expedite Commerce are considered for employment without regard to race, color, religion, age, sex, sexual orientation, gender identity, national origin, disability, veteran status or any other protected characteristic.
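The "structured logging" responsibility in the listing above is often implemented by emitting one JSON object per log line, so agent traces can later be filtered by agent or tool name in a log pipeline. A minimal stdlib-only sketch (the field names are illustrative assumptions, not this platform's schema):

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as one JSON object, so agent behavior can be
    queried by tool name, agent id, or latency downstream."""
    def format(self, record):
        payload = {
            "level": record.levelname,
            "message": record.getMessage(),
            # Fields attached via `extra=` land as attributes on the record:
            "agent": getattr(record, "agent", None),
            "tool": getattr(record, "tool", None),
            "latency_ms": getattr(record, "latency_ms", None),
        }
        return json.dumps(payload)

logger = logging.getLogger("agent")
stream = logging.StreamHandler()
stream.setFormatter(JsonFormatter())
logger.addHandler(stream)
logger.setLevel(logging.INFO)

logger.info("tool call finished",
            extra={"agent": "planner-1", "tool": "search", "latency_ms": 142})
```

One line per event, machine-parseable, is what makes the tracing and root-cause-analysis duties above tractable at scale.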
Posted 3 weeks ago
4.0 years
0 Lacs
Gurgaon
On-site
Key Responsibilities
- Automate deployments utilizing custom templates and modules for customer environments on AWS.
- Architect AWS environment best practices and deployment methodologies.
- Create automation tools and processes to improve day-to-day functions.
- Educate customers on AWS and Rackspace best practices and architecture.
- Ensure the control, integrity, and accessibility of the cloud environment for the enterprise.
- Lead workload/workforce management and optimization related tasks.
- Mentor and assist Rackers across the Cloud function.
- Quality-check development of technical training for all Rackers supporting Rackspace-supported cloud products.
- Provide technical expertise underpinning communications targeting a range of stakeholders, from individual contributors to leaders across the business.
- Collaborate with Account Managers and Business Development Consultants to build strong customer relationships.

Technical Expertise
- Experienced in solutioning and implementation of greenfield projects leveraging IaaS and PaaS for the primary site and DR.
- 4+ years of recent hands-on technical experience in a Professional Services/Enterprise DevOps/CloudOps/DevSecOps role.
- Near-expert knowledge of AWS products & services: compute, storage, security, networking, etc.
- Proficient skills in at least one of the following: Python, Linux, shell scripting.
- Proficient skills with git and git workflows.
- Excellent working knowledge of Windows or Linux operating systems; experience supporting and troubleshooting issues and performance.
- Highly skilled in Terraform/IaC, including CI/CD practices.
- Strong technical skills and proficiency in multiple functional areas, such as infrastructure as code (CloudFormation/Terraform), container orchestration (Docker, Kubernetes), configuration management (Ansible), programming languages (Python), logging systems (Elastic Stack), CI/CD, and network protocols and standards.
- Experience in designing, building, implementing, analysing, migrating, and troubleshooting highly available systems.
- Strong knowledge of continuous integration tools such as GitLab (mandatory) and others like Jenkins, Maven, GitHub, and Sonatype Nexus; experience developing CI and CD pipelines and continuous automation, with hands-on experience writing automation scripts.
- Knowledge of at least one configuration management system such as Chef, Ansible, Puppet, or similar tools.
- Understanding of services and protocols, and of the configuration, management, and troubleshooting of hosting environments, including web servers, databases, caching, and database services.
- Strong experience configuring services, alerts, and instrumentation in AppDynamics and other monitoring tools.
- Understanding of requirements and experience using monitoring solutions such as AppDynamics, CloudWatch, ELK, and OpenSearch.
- Knowledge of AWS services such as VPC, Route 53, EC2, EKS, RDS, API Gateway, ElastiCache, DynamoDB, and Lambda.
- Knowledge of the application of current and emerging network software and hardware technology and protocols.

Skills
- Passionate about technology, with a desire to constantly expand technical knowledge.
Posted 3 weeks ago
5.0 years
0 Lacs
Delhi
On-site
Job requisition ID: 74065
Date: Feb 15, 2025
Location: Delhi
Designation: Senior Consultant

Primary skills:
- 5-9 years of overall full-stack experience: Java, Oracle/PLSQL
- 5+ years with design principles, microservices, and cloud solutions
- 8+ years of development experience with Java/Spring Boot and REST APIs
- 2-4 years of development experience with AWS (S3, Lambda, API Gateway, EC2, CloudFront, Route53, DynamoDB, VPC, ECS, EKS, subnets)
- 2-4 years of experience developing containerized applications using Docker/Kubernetes
- 2+ years of experience with NodeJs or Python
- Knowledge of API gateways (e.g. Apigee, Layer 7)
- Able to provide technical designs for problems; hands-on with coding and debugging, writing high-quality code optimized for performance and scale
- Good understanding of infrastructure aspects of technical solutions such as storage, platform, and middleware
- Clear understanding of continuous integration, build, release, and code quality
- Well versed in application security and shift-left in CI/CD
- Good understanding of load balancing and disaster recovery aspects of solutions
- Excellent communication, documentation and presentation skills
- Good knowledge of security aspects such as authentication and authorization using open standards like OAuth
- Good analytical and problem-solving skills; good with algorithms

Skills, nice to have:
- gRPC
- AWS DevOps
- Experience with CSS, JavaScript/jQuery or any other JavaScript framework/library, e.g. Angular, React
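On the load-balancing item above: the simplest strategy, round robin, can be sketched in a few lines. This is a toy illustration of the concept (class and backend names are invented), not a production balancer:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Toy round-robin backend selector: each call returns the next
    backend in order, wrapping around, which spreads requests evenly
    when backends are homogeneous."""
    def __init__(self, backends):
        if not backends:
            raise ValueError("need at least one backend")
        self._cycle = cycle(backends)

    def next_backend(self):
        return next(self._cycle)

lb = RoundRobinBalancer(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
print([lb.next_backend() for _ in range(4)])
# wraps around: ['10.0.0.1', '10.0.0.2', '10.0.0.3', '10.0.0.1']
```

Real balancers layer health checks, weights, and session affinity on top of this core rotation, which is what the disaster-recovery discussion in interviews usually probes.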
Posted 3 weeks ago
0 years
3 - 5 Lacs
Coimbatore
On-site
Company Description
Bosch Global Software Technologies Private Limited is a 100% owned subsidiary of Robert Bosch GmbH, one of the world's leading global suppliers of technology and services, offering end-to-end engineering, IT and business solutions. With over 28,200 associates, it's the largest software development center of Bosch outside Germany, indicating that it is the technology powerhouse of Bosch in India, with a global footprint and presence in the US, Europe and the Asia Pacific region.

Job Description
Roles & Responsibilities:
- Up to Level 2 support for the Bosch network infrastructure services, with a focus on SD-WAN, our global backbone, and our central hubs in core locations worldwide.
- Build up and maintain monitoring & logging tools. Monitor performance, availability, and overall health of the network.
- Document and log issues and resolution steps.
- Act in the operation of global IT services, solving problems, incidents, configurations, alerts, service requests and monitoring related to network services and solutions in Bosch datacenter networks worldwide, together with engineering teams, partners, and vendors.
- Escalate issues to the appropriate teams.
- Availability to work in shift hours, including weekends and holidays.
- Support projects like the rollout and implementation of the SD-WAN stack at Bosch locations.
- Work closely with Service Managers, Operation Managers and engineering teams on opportunities for improvement.
- Work and collaborate on an international team operating, supporting, monitoring, implementing, replacing, extending, upgrading, and optimizing network solutions globally.
- Execute and optimize operational processes, reviewing procedures and documents related to monitoring, supporting, and operating the network solutions and technologies.
- Good understanding of cloud standards such as securing infrastructure and efficient operation of cloud resources configured in the application
- Azure and AWS cloud networking experience (VNet, VPC, subnets, load balancing, VNet peering, VPN)
- Knowledge of cloud networking concepts, security best practices, and compliance frameworks
- Experience with monitoring, logging, and performance optimization of cloud resources
- Proficient in automating infra deployments using IaC, leveraging Terraform, CloudFormation templates, Bicep, Pulumi
- Good knowledge of DevOps concepts and CI/CD practices to deploy infra services using pipelines and version control systems
- Deep understanding of cloud cost; knowledge of cost optimization techniques to provide cost-efficient infra solutions in the cloud
- Perform operations and administration support for Azure and AWS VMs and PaaS components
- Strong understanding across Azure infrastructure components (server, storage, network, database, and applications) to deliver end-to-end cloud infrastructure operations support
- Process-oriented approach to meticulously handle tasks with high repetition

Qualifications
- Degree in Computer Science, Network Analyst or equivalent
- Knowledge of networking technologies (e.g. OSPF, BGP, MPLS, QoS)
- Knowledge of VPN technologies (e.g. IPsec, SSL, DMVPN, GETVPN)
- Knowledge of SD-WAN technologies and products (preferably Cisco Viptela)
- Broad knowledge of basic network and security concepts and implementations (NAT, DNS, proxies, load balancers, ACLs, etc.)
- Experience with major hardware and software platforms from Cisco (IOS, NX-OS)
- Preferable: fundamental knowledge of software-driven networking (Python, Ansible, CI/CD, Git); Azure DevOps, Azure Administrator, AWS SysOps
- Previous experience in implementation, operation, monitoring and support of network technologies and solutions.
- Knowledge of configuration and administration of network solutions and equipment, further networking protocols (IPsec, spanning tree, MAC, ARP, etc.) as well as Cisco ACI technology, is welcomed.
- Experienced in network environments, for support in troubleshooting, scalability issues, automation, and operation.
- Desirable knowledge of scripting (PowerShell, VBA, etc.) and programming languages (Python, Ansible, SQL, etc.).
- Knowledge of virtualization technologies (on premises and in the cloud).
- Previous experience in monitoring and operation of IT infrastructure (cloud and on-premises).
- Certifications will be a differentiator (e.g. Cisco CCNA, ITIL, etc.).
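Several requirements above concern subnets and VPC/VNet addressing. Python's stdlib `ipaddress` module is enough to sanity-check an addressing plan in an automation script; the CIDRs below are hypothetical:

```python
import ipaddress

# A hypothetical VPC CIDR carved into two subnets.
vpc = ipaddress.ip_network("10.20.0.0/16")
subnets = [ipaddress.ip_network(c) for c in ("10.20.1.0/24", "10.20.2.0/24")]

# Every subnet must sit inside the VPC and not overlap its siblings.
assert all(s.subnet_of(vpc) for s in subnets)
assert not subnets[0].overlaps(subnets[1])

def subnet_for(ip: str):
    """Return the subnet containing `ip`, or None if it is outside all of them."""
    addr = ipaddress.ip_address(ip)
    return next((s for s in subnets if addr in s), None)

print(subnet_for("10.20.1.57"))   # 10.20.1.0/24
print(subnet_for("192.168.0.1"))  # None
```

Checks like these are a common first step in the "software-driven networking" automation the qualifications mention, catching overlap mistakes before they reach routers or cloud consoles.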
Posted 3 weeks ago
5.0 years
2 - 3 Lacs
Ahmedabad
On-site
Job Title: DevOps Engineer
Location: Hyderabad & Ahmedabad
Employment Type: Full-Time
Work Model: 3 days from office

Job Overview
Dynamic, motivated individuals deliver exceptional solutions for the production resiliency of our systems. The role incorporates aspects of software engineering and operations, applying DevOps skills to find efficient ways of managing and operating applications, and requires a high level of responsibility and accountability to deliver technical solutions.

Summary: As a DevOps Engineer at TechBlocks India, you will support infrastructure provisioning, automation, and continuous deployment pipelines to streamline and scale our development lifecycle. You'll work closely with engineering teams to maintain a stable, high-performance CI/CD ecosystem and cloud infrastructure on GCP.

Experience Required: 5+ years of hands-on DevOps experience with cloud and containerized deployments.

Mandatory:
- OS: Linux
- Cloud: GCP (VPC, Compute Engine, GKE, GCS, IAM)
- CI/CD: Jenkins, GitHub Actions, Bitbucket Pipelines
- Containers: Docker, Kubernetes
- IaC: Terraform, Helm
- Monitoring: Prometheus, Grafana
- Version Control: Git

Nice to Have:
- ELK Stack, Trivy, JFrog, Vault
- Basic scripting in Python or Bash
- Jira, Confluence

Scope:
- Implement and support CI/CD pipelines
- Maintain development, staging, and production environments
- Optimize resource utilization and infrastructure costs

Roles and Responsibilities:
- Assist in developing and maintaining CI/CD pipelines across various environments (dev, staging, prod) using Jenkins, GitHub Actions, or Bitbucket Pipelines.
- Collaborate with software developers to ensure proper configuration of build jobs, automated testing, and deployment scripts.
- Write and maintain scripts for infrastructure provisioning and automation using Terraform and Helm.
- Manage and troubleshoot containerized applications using Docker and Kubernetes on GCP.
- Monitor system health and performance using Prometheus and Grafana; raise alerts and participate in issue triage.
- Maintain secrets and configurations using Vault and KMS solutions under supervision.
- Participate in post-deployment verifications and rollout validation.
- Document configuration changes, CI/CD processes, and environment details in Confluence.
- Maintain Jira tickets related to DevOps issues and track resolutions effectively.
- Provide support in incident handling under guidance from senior team members.

About TechBlocks
TechBlocks is a global digital product engineering company with 16+ years of experience helping Fortune 500 enterprises and high-growth brands accelerate innovation, modernize technology, and drive digital transformation. From cloud solutions and data engineering to experience design and platform modernization, we help businesses solve complex challenges and unlock new growth opportunities. At TechBlocks, we believe technology is only as powerful as the people behind it. We foster a culture of collaboration, creativity, and continuous learning, where big ideas turn into real impact. Whether you're building seamless digital experiences, optimizing enterprise platforms, or tackling complex integrations, you'll be part of a dynamic, fast-moving team that values innovation and ownership. Join us and shape the future of digital transformation.
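The post-deployment verification duty above usually boils down to a probe-with-backoff loop: query a health endpoint until it passes, or give up and trigger rollback. A hedged sketch (the `probe` contract and names are assumptions for illustration):

```python
import time

def verify_rollout(probe, attempts=5, base_delay=0.1):
    """Post-deployment check: call `probe()` until it returns True,
    backing off exponentially between attempts. A False result means
    the rollout failed verification and should be rolled back."""
    for attempt in range(attempts):
        if probe():
            return True
        time.sleep(base_delay * (2 ** attempt))
    return False

# Simulated service that becomes healthy on the third probe.
state = {"calls": 0}
def probe():
    state["calls"] += 1
    return state["calls"] >= 3

print(verify_rollout(probe, base_delay=0.01))  # True, after 3 probes
```

Injecting the probe as a function keeps the loop testable without a live endpoint; in practice the probe might hit a Kubernetes readiness endpoint or a Prometheus query.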
Posted 3 weeks ago
3.0 years
0 Lacs
Calcutta
On-site
We are seeking a DevOps Engineer with 3+ years of experience specializing in AWS, Git, and VPS management. The ideal candidate will be responsible for automating deployments, managing cloud infrastructure, and optimizing CI/CD pipelines for seamless development and operations.

Key Responsibilities:
✅ AWS Infrastructure Management – Deploy, configure, and optimize AWS services (EC2, S3, RDS, Lambda, etc.).
✅ Version Control & GitOps – Manage repositories, branching strategies, and workflows using Git/GitHub/GitLab.
✅ VPS Administration – Configure, maintain, and optimize VPS servers for high availability and performance.
✅ CI/CD Pipeline Development – Implement automated Git-based CI/CD workflows for smooth software releases.
✅ Containerization & Orchestration – Deploy applications using Docker and Kubernetes.
✅ Infrastructure as Code (IaC) – Automate deployments using Terraform or CloudFormation.
✅ Monitoring & Security – Implement logging, monitoring, and security best practices.

Required Skills & Experience:
- 3+ years of experience in AWS, Git, and VPS management.
- Strong knowledge of AWS services (EC2, VPC, IAM, S3, CloudWatch, etc.).
- Expertise in Git and GitOps workflows.
- Hands-on experience with VPS hosting, Nginx, Apache, and server management.
- Experience with CI/CD tools (Jenkins, GitHub Actions, GitLab CI).
- Knowledge of Infrastructure as Code (Terraform, CloudFormation).
- Strong scripting skills (Bash, Python, or Go).

Preferred Qualifications:
- Experience with server security hardening on VPS servers.
- Familiarity with AWS Lambda & serverless architecture.
- Knowledge of DevSecOps best practices.

Job Types: Full-time, Permanent, Contractual / Temporary
Benefits: Provident Fund
Schedule: Day shift
Work Location: In person
Posted 3 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description
Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC, and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.

HSBC is one of the largest banking and financial services organizations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realize their ambitions.

We are currently seeking an experienced professional to join our team in the role of Senior Software Engineer.

In this role you will:
- Be an approachable and supportive team member with a collaborative attitude within a demanding, maturing Agile environment
- Influence and champion new ideas and methods
- Communicate well: convey your thoughts, ideas and opinions clearly and concisely, face-to-face or virtually, to all levels up and down stream; and, equally important, listen and reflect back what others communicate to you
- Regularly demonstrate these qualities: drive, motivation, determination, dedication, resiliency, honesty and enthusiasm
- Be culturally aware and sensitive

Working closely in a cross-functional product team, you'll make sure your product's environments are working as intended and that the build pipeline between them is tuned to perfection. With the rest of the team, you're responsible for the quality of everything produced; if you build it, you run it. One day you'll be helping deploy a new dynamic platform, the next watching it handle millions of requests from all around the globe. We're using things like Adobe Experience Manager, AWS, Content Delivery Networks, AppDynamics, Splunk, Jenkins, GitHub, Slack, IBM portal server, and the IBM WebSphere content management system.
Be flexible under pressure Strong analytical and problem-solving skills Excellent verbal and written communication skills Excellent organizational and presentation skills Ability to communicate with non-technical people You’re a kind, thoughtful person who people enjoy working with. Requirements To be successful in this role you should meet the following requirements: Working knowledge of at least one of Java, JavaScript/Node.js or Python Basic knowledge of AWS Cloud and AWS services like VPC, CloudFront, CloudWatch, Lambda, S3 etc., or any other cloud environment Experience with agile development tools like Git, Visual Studio Code, IntelliJ, Confluence and JIRA Understanding of DevOps and Infrastructure as Code concepts and Terraform Experience in full automation and configuration management desirable Understanding and working experience with CI/CD and available tools, e.g. Jenkins, Sonar Working experience in an agile environment Strong ability to quickly learn new skills and tools Ability to troubleshoot application issues in a timely manner Be a clear communicator, document your work, share your ideas Review and be reviewed by your peers Experience deploying to production You’ll achieve more when you join HSBC. www.hsbc.com/careers HSBC is committed to building a culture where all employees are valued and respected and where opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working, and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by – HSBC Software Development India
Posted 3 weeks ago
4.0 years
0 Lacs
Ahmedabad, Gujarat, India
Remote
Job Summary: We are seeking a skilled DevOps Engineer to design, deploy, and maintain scalable cloud-based infrastructure and applications. The ideal candidate will have hands-on experience with AWS, Kubernetes, containerization, and database management, with a focus on automation, reliability, and efficiency. You’ll collaborate with cross-functional teams to ensure seamless integration of development and operations, driving innovation and operational excellence. You will be responsible for automating and streamlining our deployment processes, managing our Kubernetes environment, optimizing resource utilization, and implementing robust monitoring solutions. Experience: 4 - 6 years Responsibilities: Design, build, deploy, and manage scalable and reliable cloud infrastructure (AWS) using IaC (Infrastructure as Code) tools. Build and optimize containerized environments using Docker, Kubernetes, and orchestration tools for applications like MongoDB, PostgreSQL, React, Node.js (TypeScript), .NET Core, and PHP. Implement and monitor CI/CD pipelines for continuous delivery and deployment. Ensure high availability, scalability, and cost optimization of cloud resources. Manage database replication and disaster recovery strategies for MongoDB and PostgreSQL. Monitor cloud infrastructure and application performance using tools like Prometheus, Grafana, or CloudWatch. Troubleshoot performance bottlenecks and optimize resource utilization (CPU, memory, storage). Collaborate with developers, SREs, and QA teams to align DevOps practices with business goals. Stay updated on emerging cloud technologies and best practices. Implement and enforce security best practices across our infrastructure and applications. Automate repetitive tasks and processes using scripting and automation tools. Contribute to the documentation of infrastructure, processes, and best practices. Analyze resource utilization patterns and implement strategies for cost optimization and efficiency. 
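The "analyze resource utilization patterns" and cost-optimization responsibilities above can be illustrated with a minimal Python sketch. This is not part of the posting: the thresholds and function names are invented, and a real check would pull CPU averages from CloudWatch or Prometheus rather than a plain dict.

```python
# Hypothetical rightsizing check: classify instances by average CPU
# utilization so over- and under-provisioned ones can be flagged.
# Thresholds (20% / 80%) are illustrative defaults, not recommendations.

def classify_utilization(avg_cpu_percent, low=20.0, high=80.0):
    """Classify an instance's average CPU utilization."""
    if avg_cpu_percent < low:
        return "over-provisioned"   # candidate for downsizing
    if avg_cpu_percent > high:
        return "under-provisioned"  # candidate for scaling up or out
    return "right-sized"

def rightsizing_report(metrics):
    """metrics: mapping of instance id -> average CPU %."""
    return {inst: classify_utilization(cpu) for inst, cpu in metrics.items()}
```

For example, `rightsizing_report({"i-1": 8.5, "i-2": 55.0, "i-3": 93.0})` flags `i-1` for downsizing and `i-3` for scaling.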
Qualifications: Must Have: 4+ years of experience in a DevOps, SRE, cloud engineering or similar role. Extensive knowledge and hands-on experience with Amazon Web Services (AWS) cloud platform, including services like EC2, ECS/EKS, S3, RDS, VPC, IAM, etc. Strong understanding and practical experience with Kubernetes for container orchestration and management. Deep understanding of resource utilization concepts and methodologies for optimizing performance and cost. Proven experience in containerizing various applications and technologies, including MongoDB, PostgreSQL, React, Node.js with TypeScript, .NET Core APIs, and PHP using Docker or similar technologies. Experience in setting up and managing database replication for MongoDB and PostgreSQL. Proficient in implementing and managing application and infrastructure monitoring solutions using tools like Prometheus, Grafana, CloudWatch, or similar. Strong understanding of networking principles and security best practices in cloud environments. Excellent problem-solving and troubleshooting skills. Ability to work independently and as part of a team. Good communication and collaboration skills. Scripting: Strong scripting skills in Bash, Python, or PowerShell for automation. Good To Have: Experience with Infrastructure-as-Code (IaC) tools such as Helm, Terraform, Ansible, CloudFormation, or similar. Proficiency in scripting languages such as Python, Bash, or Go. Experience with CI/CD tools like Jenkins, GitLab CI, CircleCI, or GitHub Actions. Knowledge of configuration management tools like Ansible, Chef, or Puppet. Experience with log management and analysis tools like ELK stack (Elasticsearch, Logstash, Kibana) or Splunk. Experience with serverless technologies like AWS Lambda and API Gateway. Understanding of agile development methodologies. Experience with database administration tasks. Cloud Security: Knowledge of IAM roles, encryption, and compliance (e.g., GDPR, SOC2). 
Familiarity with performance testing and optimization techniques. Cloud Cost Optimization: Experience with cost management tools (e.g., AWS Cost Explorer). Nice To Have: Certifications: AWS Certified Solutions Architect, Kubernetes (CKA/CKAD), or Azure/GCP certifications. Cloud Security Tools: Experience with tools like AWS WAF, Shield, or cloud-native security frameworks. Experience with: Serverless databases (e.g., DynamoDB, Aurora Serverless). Automated testing frameworks for infrastructure (e.g., InSpec, Terraform Validate). GitOps practices (e.g., Flux, Argo CD). Company Benefits: Employees at Blobstation enjoy a full range of benefits, such as: 5-day work week Health Insurance Sponsorship towards training & certification Flexible working hours Flexibility to work from home
Posted 3 weeks ago
10.0 years
0 Lacs
Mohali district, India
On-site
𝗔𝗯𝗼𝘂𝘁 𝘁𝗵𝗲 𝗥𝗼𝗹𝗲: We are looking for a highly experienced and innovative Senior DevSecOps & Solution Architect to lead the design, implementation, and security of modern, scalable solutions across cloud platforms. The ideal candidate will bring a unique blend of DevSecOps practices, solution architecture, observability frameworks, and AI/ML expertise — with hands-on experience in data and workload migration from on-premises to cloud or cloud-to-cloud. You will play a pivotal role in transforming and securing our enterprise-grade infrastructure, automating deployments, designing intelligent systems, and implementing monitoring strategies for mission-critical applications.
𝗗𝗲𝘃𝗦𝗲𝗰𝗢𝗽𝘀 𝗟𝗲𝗮𝗱𝗲𝗿𝘀𝗵𝗶𝗽:
• Own CI/CD strategy, automation pipelines, IaC (Terraform, Ansible), and container orchestration (Docker, Kubernetes, Helm).
• Champion DevSecOps best practices – embedding security into every stage of the SDLC.
• Manage secrets, credentials, and secure service-to-service communication using Vault, AWS Secrets Manager, or Azure Key Vault.
• Conduct infrastructure hardening, automated compliance checks (CIS, SOC 2, ISO 27001), and vulnerability management.
Solution Architecture:
• Architect scalable, fault-tolerant, cloud-native solutions (AWS, Azure, or GCP).
• Design end-to-end data flows, microservices, and serverless components.
• Lead migration strategies for on-premises to cloud or cloud-to-cloud transitions, ensuring minimal downtime and security continuity.
• Create technical architecture documents, solution blueprints, BOMs, and migration playbooks.
Observability & Monitoring:
• Implement modern observability stacks: OpenTelemetry, ELK, Prometheus/Grafana, DataDog, or New Relic.
• Define golden signals (latency, errors, saturation, traffic) and enable APM, RUM, and log aggregation.
• Design SLOs/SLIs and establish proactive alerting for high-availability environments. 
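The SLO/SLI design point above boils down to simple error-budget arithmetic. A minimal sketch (function names and the 30-day window are illustrative, not from this role description):

```python
# Error-budget arithmetic behind SLO-based alerting.
# For an availability SLO of 99.9%, the budget is the 0.1% of the
# window (or of total requests) that is allowed to fail.

def error_budget_minutes(slo_percent, window_minutes=30 * 24 * 60):
    """Allowed downtime for a given SLO over a window (default ~30 days)."""
    return window_minutes * (100.0 - slo_percent) / 100.0

def budget_consumed(error_requests, total_requests, slo_percent):
    """Fraction of the request-based error budget consumed so far."""
    if total_requests == 0:
        return 0.0
    allowed = total_requests * (100.0 - slo_percent) / 100.0
    return error_requests / allowed
```

For example, a 99.9% SLO over 30 days allows roughly 43.2 minutes of downtime, and 50 failed requests out of 100,000 consumes half of a 99.9% request budget; proactive alerting typically fires when the burn rate projects early budget exhaustion.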
𝗔𝗜/𝗠𝗟 𝗘𝗻𝗴𝗶𝗻𝗲𝗲𝗿𝗶𝗻𝗴 & 𝗜𝗻𝘁𝗲𝗴𝗿𝗮𝘁𝗶𝗼𝗻:
• Integrate AI/ML into existing systems for intelligent automation, data insights, and anomaly detection.
• Collaborate with data scientists to operationalize models using MLflow, SageMaker, Azure ML, or custom pipelines.
• Work with LLMs and foundational models (OpenAI, Hugging Face, Bedrock) for POCs or production-ready features.
Migration & Transformation:
• Lead complex data migration projects across heterogeneous environments — legacy systems to cloud, or inter-cloud (e.g., AWS to Azure).
• Ensure data integrity, encryption, schema mapping, and downtime minimization throughout migration efforts.
• Use tools such as AWS DMS, Azure Data Factory, GCP Transfer Services, or custom scripts for lift-and-shift and re-architecture.
𝗥𝗲𝗾𝘂𝗶𝗿𝗲𝗱 𝗦𝗸𝗶𝗹𝗹𝘀 & 𝗤𝘂𝗮𝗹𝗶𝗳𝗶𝗰𝗮𝘁𝗶𝗼𝗻𝘀:
• 10+ years in DevOps, cloud architecture, or platform engineering roles.
• Expert in AWS and/or Azure – including IAM, VPC, EC2, Lambda/Functions, S3/Blob, API Gateway, and container services (EKS/AKS).
• Proficient in infrastructure as code: Terraform, CloudFormation, Ansible.
• Hands-on with Kubernetes (k8s), Helm, GitOps workflows.
• Strong programming/scripting skills in Python, Shell, or PowerShell.
• Practical knowledge of AI/ML tools, libraries (TensorFlow, PyTorch, scikit-learn), and model lifecycle management.
• Demonstrated success in large-scale migrations and hybrid architecture.
• Solid understanding of application security, identity federation, and compliance.
• Familiar with agile practices, project estimation, and stakeholder communication.
𝗡𝗶𝗰𝗲 𝘁𝗼 𝗛𝗮𝘃𝗲:
• Certifications: AWS Solutions Architect, Azure Architect, Certified Kubernetes Admin, or similar.
• Experience with Kafka, RabbitMQ, event-driven architecture.
• Exposure to n8n, OpenFaaS, or AI agents.
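The "ensure data integrity throughout migration" bullet is commonly backed by checksum comparison between source and target. A minimal, hypothetical sketch (real tooling such as AWS DMS data validation works differently; row shapes and names here are invented):

```python
# Row-level digests to verify a migrated batch: hash a canonical
# rendering of each row and compare source vs. target positions.
import hashlib

def row_digest(row):
    """Stable digest of a row given as a {column: value} mapping."""
    canonical = "|".join(f"{k}={row[k]}" for k in sorted(row))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def verify_batch(source_rows, target_rows):
    """Return indexes of rows whose digests differ after migration."""
    mismatches = []
    for i, (src, dst) in enumerate(zip(source_rows, target_rows)):
        if row_digest(src) != row_digest(dst):
            mismatches.append(i)
    return mismatches
```

Sorting the keys makes the digest independent of column order, so a target row with reordered columns but identical values still verifies cleanly.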
Posted 3 weeks ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Overview We are seeking a highly skilled and motivated Associate Manager AWS Site Reliability Engineer (SRE) to join our team. As an Associate Manager AWS SRE, you will play a critical role in designing, managing, and optimizing our cloud infrastructure to ensure high availability, reliability, and scalability of our services. You will collaborate with cross-functional teams to implement best practices, automate processes, and drive continuous improvements in our cloud environment Responsibilities Design and Implement Cloud Infrastructure: Architect, deploy, and maintain AWS infrastructure using Infrastructure-as-Code (IaC) tools such as Terraform or CloudFormation. Monitor and Optimize Performance: Develop and implement monitoring, alerting, and logging solutions to ensure the performance and reliability of our systems. Ensure High Availability: Design and implement strategies for achieving high availability and disaster recovery, including backup and failover mechanisms. Automate Processes: Automate repetitive tasks and processes to improve efficiency and reduce human error using tools such as AWS Lambda, Jenkins, and Ansible. Incident Response: Lead and participate in incident response activities, troubleshoot issues, and perform root cause analysis to prevent future occurrences. Security and Compliance: Implement and maintain security best practices and ensure compliance with industry standards and regulations. Collaborate with Development Teams: Work closely with software development teams to ensure smooth deployment and operation of applications in the cloud environment. Capacity Planning: Perform capacity planning and scalability assessments to ensure our infrastructure can handle growth and increased demand. Continuous Improvement: Drive continuous improvement initiatives by identifying and implementing new tools, technologies, and processes. 
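The "automate repetitive tasks... to reduce human error" responsibility often comes down to resilient wrappers like retry with exponential backoff. A minimal sketch, not from the posting (the helper name and defaults are invented):

```python
# Retry a flaky operational task with exponential backoff:
# wait base_delay * 2**n between attempts, re-raising on the last one.
import time

def retry(operation, attempts=4, base_delay=0.5, sleep=time.sleep):
    """Run operation(); on failure, back off and retry up to `attempts` times."""
    for n in range(attempts):
        try:
            return operation()
        except Exception:
            if n == attempts - 1:
                raise  # out of attempts: surface the failure
            sleep(base_delay * (2 ** n))
```

Injecting `sleep` as a parameter keeps the helper testable; production code would typically also add jitter and catch only transient error types.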
Qualifications Experience: 10+ years of overall experience, including a minimum of 5 years in a Site Reliability Engineer (SRE) or DevOps role, with a focus on AWS cloud infrastructure. Technical Skills: Proficiency in AWS services such as EC2, S3, RDS, VPC, Lambda, CloudFormation, and CloudWatch. Automation Tools: Experience with Infrastructure-as-Code (IaC) tools such as Terraform or CloudFormation, and configuration management tools like Ansible or Chef. Scripting: Strong scripting skills in languages such as Python, Bash, or PowerShell. Monitoring and Logging: Experience with monitoring and logging tools such as Prometheus, Grafana, ELK Stack, or CloudWatch. Problem-Solving: Excellent troubleshooting and problem-solving skills, with a proactive and analytical approach. Communication: Strong communication and collaboration skills, with the ability to work effectively in a team-oriented environment. Certifications: AWS certifications such as AWS Certified Solutions Architect, AWS Certified DevOps Engineer, or AWS Certified SysOps Administrator are highly desirable. Education: Bachelor’s degree in Computer Science, Engineering, or a related field, or equivalent work experience.
Posted 3 weeks ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Strong DevOps knowledge, with hands-on experience in CI/CD (any CI/CD tools).
• Extensive experience in Linux system administration, networking and troubleshooting.
• Experience in shell, YAML, JSON and Groovy scripting.
• Strong experience in AWS (EC2, S3, VPC, RDS, IAM, Organizations, Identity Center, etc.).
• Ability to set up CI/CD pipelines using AWS services or other CI/CD tools.
• Experience in configuring and troubleshooting EKS and ECS, and deploying applications on an EKS/ECS cluster.
• Strong hands-on knowledge of Terraform and/or AWS CFT; must be able to automate AWS infrastructure provisioning using Terraform/CFT in the most efficient way.
• Experience with CloudWatch / CloudTrail / Prometheus-Grafana for infra/application monitoring.
• Most importantly, must have strong soft skills and critical, analytical thinking at a larger scope, with the ability to quickly and efficiently understand, identify and solve problems.
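The Terraform/CFT automation point can be illustrated by generating a CloudFormation template programmatically. A minimal sketch only — the resource and bucket names are hypothetical, and real provisioning would feed this JSON to CloudFormation rather than just build it:

```python
# Build a minimal CloudFormation template as a Python dict and render
# it to JSON, the format a deploy step would submit to the service.
import json

def s3_bucket_template(bucket_name):
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "AppBucket": {  # logical resource id (hypothetical)
                "Type": "AWS::S3::Bucket",
                "Properties": {"BucketName": bucket_name},
            }
        },
    }

template_json = json.dumps(s3_bucket_template("example-artifacts"), indent=2)
```

Generating templates in code like this is one way teams keep IaC DRY; Terraform achieves the same with HCL modules instead.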
Posted 3 weeks ago
8.0 years
0 Lacs
Gurugram, Haryana, India
Remote
About The Role Grade Level (for internal use): 11 The Role : Lead Cloud/ SRE / DevOps Engineer The Location : Gurgaon, Hyderabad Grade : 11 Summary Join a pioneering team dedicated to building and enhancing Generative AI platform at S&P Global, serving over 30,000 employees. This technical role involves designing and developing cutting-edge software, including web applications, data pipelines, big data, AI technologies, and multi-cloud solutions. It’s an opportunity to drive growth, advance your skills, and transform our approach to Generative AI. The Team Our global team is tasked with the architecture, design, development, quality assurance, and maintenance of internal Generative AI-based platforms. Recognized for expertise, innovation, and passion, you will work collaboratively to achieve ambitious goals and push the boundaries of technology. The Impact You will play a key role in developing a state-of-the-art Generative AI platform that empowers our 30,000+ internal users. Your contributions will also extend to leading or participating in workshops aimed at broadening the adoption of Generative AI across various roles within the company. What’s In It For You Career Development: Build a meaningful career with a leading global company at the forefront of technology. Dynamic Work Environment: Work in an environment that is dynamic and forward-thinking, directly contributing to innovative solutions. Skill Enhancement: Enhance your software development skills on an enterprise-level platform. Versatile Experience: Gain full-stack experience and exposure to cloud technologies. Leadership Opportunities: Mentor peers and influence the product’s future as part of a skilled team. Work Flexibility: Benefit from a flexible work arrangement, balancing office time with the option to work from home. Community Engagement: Utilize five paid days for charity work or volunteering, supporting your passion for community service. 
Responsibilities Design and implement cloud solutions using AWS and Azure. Develop and maintain Infrastructure as Code (IaC) with Terraform. Create and manage CI/CD pipelines using GitHub Actions and Azure DevOps. Automate deployment processes and provisioning of compute instances and storage. Orchestrate container deployments with Kubernetes. Develop automation scripts in Python, PowerShell, and Bash. Monitor and optimize cloud resources for performance and cost-efficiency using tools like Datadog and Splunk. Configure Security Groups, IAM policies, and roles in AWS/Azure. Troubleshoot production issues and ensure system reliability. Collaborate with development teams to integrate DevOps and MLOps practices. Create comprehensive documentation and provide technical guidance. Continuously evaluate and integrate new AWS services and technologies. Cloud engineering certifications (AWS, Terraform) are a plus. Excellent communication and problem-solving skills. Minimum Qualifications Bachelor’s Degree in Computer Science or equivalent experience. Minimum of 8+ years in cloud engineering, DevOps, or Site Reliability Engineering (SRE). Hands-on experience with AWS and Azure cloud services, including IAM, Compute, Storage, ELB, RDS, VPC, TGW, Route 53, ACM, Serverless computing, Containerization, CloudWatch, CloudTrail, SQS, and SNS. Experience with configuration management tools like Ansible, Chef, or Puppet. Proficiency in Infrastructure as Code (IaC) using Terraform. Strong background in CI/CD pipelines using GitHub Actions and Azure DevOps. Knowledge of MLOps or LLMOps practices. Proficient in scripting languages: Python, PowerShell, Bash. Ability to work collaboratively in a fast-paced environment. Preferred Qualifications Advanced degree in a technical field. Extensive experience with ReactJS and modern web technologies. Proven leadership in agile and project management. Advanced knowledge of CI/CD and industry best practices in software development. 
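The CI/CD pipeline work described above is, at its core, a dependency graph problem: GitHub Actions resolves the `needs:` relationships between jobs into a run order. A small illustrative sketch (stage names and the helper are invented, not the product's pipeline):

```python
# Resolve CI/CD stage run order from a dependency map via depth-first
# traversal, raising on cycles (which would deadlock a real pipeline).

def stage_order(deps):
    """deps: stage -> list of stages it needs. Returns a valid run order."""
    order, seen = [], set()

    def visit(stage, path=()):
        if stage in seen:
            return
        if stage in path:
            raise ValueError(f"cycle at {stage}")
        for need in deps.get(stage, []):
            visit(need, path + (stage,))
        seen.add(stage)
        order.append(stage)

    for s in deps:
        visit(s)
    return order

# e.g. {"deploy": ["test"], "test": ["build"], "build": []}
# yields build before test before deploy.
```

The same topological-sort idea underlies Terraform's resource graph and Azure DevOps stage dependencies.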
What’s In It For You? Our Purpose Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits We take care of you, so you can take care of business. We care about our people. 
That’s why we provide everything you—and your career—need to thrive at S&P Global. Our Benefits Include Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring And Opportunity At S&P Global At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. 
If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf IFTECH202.2 - Middle Professional Tier II (EEO Job Group) Job ID: 313429 Posted On: 2025-05-09 Location: Gurgaon, Haryana, India
Posted 3 weeks ago
10.0 years
0 Lacs
Andhra Pradesh, India
On-site
10+ years of experience in DevOps engineering with a focus on AWS cloud environments. Strong expertise in AWS services such as EC2, S3, Lambda, RDS, VPC, ECS, EKS, CloudFormation, CloudWatch, etc. Hands-on experience with CI/CD tools like Jenkins, AWS CodePipeline, GitLab CI, CircleCI, or similar. Strong experience with Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or similar. Experience in setting up, configuring, and managing AWS security best practices, including IAM roles, encryption, and key management. Strong knowledge of Linux/Unix systems, including configuration, troubleshooting, and performance optimization. Experience with containerization (Docker, Kubernetes, EKS) and deploying applications in containerized environments. Proficiency in scripting and automation using languages like Bash, Python, Go, or Ruby. Understanding of monitoring and alerting systems, with experience using AWS CloudWatch, Prometheus, Grafana, etc. Ability to work in an Agile development environment, collaborating with development, QA, and product teams. Excellent troubleshooting, debugging, and problem-solving skills. Strong communication skills and the ability to lead and mentor teams.
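The monitoring-and-alerting requirement above usually reduces to a metric-versus-threshold check, like a CloudWatch alarm on an error rate. A minimal, self-contained sketch (the 5% threshold and log format are invented for illustration):

```python
# Compute an error rate from log lines and decide whether to alert.
# In production this would be a CloudWatch metric filter + alarm or a
# Prometheus rule; here it's plain Python for illustration.

def error_rate(log_lines):
    """Fraction of lines containing the ERROR level marker."""
    if not log_lines:
        return 0.0
    errors = sum(1 for line in log_lines if "ERROR" in line)
    return errors / len(log_lines)

def should_alert(log_lines, threshold=0.05):
    """Fire only when the error rate strictly exceeds the threshold."""
    return error_rate(log_lines) > threshold
```

Using a rate rather than a raw count keeps the alert meaningful as traffic scales; real rules also add evaluation windows to avoid flapping.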
Posted 3 weeks ago
6.0 - 10.0 years
8 - 12 Lacs
Mumbai
Work from Office
We are searching for talented individuals for the role of AWS Cloud Admin. Come join us!! Location: Kolkata & Mumbai Role & Responsibilities: Manage AWS environments (EC2, S3, VPC, RDS, and more). Oversee VPC, cloud security, and AWS tools (CloudFormation, CloudWatch, IAM). Troubleshoot AWS infrastructure and network issues. Handle cloud migrations, disaster recovery, and backup strategies. Create and maintain technical documentation and SOPs. Work with cloud security services like WAF and GuardDuty. What we expect: AWS Certified SysOps Administrator Associate certification is a must. Expert-level knowledge of AWS Control Tower, Landing Zone, and CloudFormation. Ability to troubleshoot infrastructure, network, and OS issues in AWS. Strong communication skills to articulate cloud solutions to management and stakeholders. Note: We are organizing a campus drive at Vikhroli, Mumbai on 31st May 2025; if you can attend, kindly apply or share your resume at rubinas@godrej.com
Posted 3 weeks ago
5.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Make an impact with NTT DATA Join a company that is pushing the boundaries of what is possible. We are renowned for our technical excellence and leading innovations, and for making a difference to our clients and society. Our workplace embraces diversity and inclusion – it’s a place where you can grow, belong and thrive. Your day at NTT DATA Bachelor in Engineering - Computers/Electronics/Communication or related field Graduate/Post Graduate in Science/Maths/IT or related streams with relevant technology experience Minimum 5 years of progressive, relevant experience and proven capability to work in a complex network environment Responsibilities Engineers who have a passion for providing outstanding customer service. 24x7 support of Enterprise Networks of large global clients that have a distributed LAN/Wireless/DDI setup Will be part of a team responsible for handling (switching/wireless/DDI) network operational/problem management issues. Ticket resolution - work and resolve trouble tickets, handle ticket escalation Queue Management - monitor ticket queue, ensure assignment/resolution and closure Create Method Of Procedure and/or Standard Operating Procedure documents Plan and execute Change Management processes Performance tuning of network devices and create Service Improvement Plans Plan and perform firmware upgrades Work with hardware/software vendors to resolve problems Train and mentor juniors Act as an SME SPOC for certain network products Interface with customers on calls and lead technical meetings Assist in Root Cause Analysis (RCA) Provide technical inputs for weekly/monthly customer service review reports What You'll Be Doing 1.1.2 PRE-REQUISITES Technical expertise on all or any of the following platforms: Cisco Catalyst and Nexus switching and wireless Arista switching HP/Aruba switching and wireless Meraki switching and wireless Mist wireless Infoblox DDI Conceptually strong in the following switching and wireless technologies: HSRP/VRRP 
STP/VTP, VSS/vPC, Ether-Channels, stacking of switches (IRF, Cisco Stack), stand-alone AP, IAP, FlexConnect, wireless bridge 1.1.4 TRAINING AND CERTIFICATION Cisco certification and DDI/Aruba/Juniper certification will be an added advantage 1.1.5 EXPERIENCE Minimum 5 years of progressive, relevant experience and proven capability to work in a complex network environment 1.1.6 EDUCATION Bachelor in Engineering - Computers/Electronics/Communication or related field Graduate/Post Graduate in Science/Maths/IT or related streams with relevant technology experience 1.1.7 OTHER SKILLS Good communication skills - written as well as verbal Passion to work on core technology platforms ITIL process awareness Workplace type: About NTT DATA NTT DATA is a $30+ billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. 
We invest over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure, and connectivity. We are also one of the leading providers of digital and AI infrastructure in the world. NTT DATA is part of NTT Group and headquartered in Tokyo. Equal Opportunity Employer NTT DATA is proud to be an Equal Opportunity Employer with a global culture that embraces diversity. We are committed to providing an environment free of unfair discrimination and harassment. We do not discriminate based on age, race, colour, gender, sexual orientation, religion, nationality, disability, pregnancy, marital status, veteran status, or any other protected category. Join our growing global team and accelerate your career with us. Apply today.
Posted 3 weeks ago
2.0 years
0 Lacs
Pune, Maharashtra, India
Remote
At NICE, we don’t limit our challenges. We challenge our limits. Always. We’re ambitious. We’re game changers. And we play to win. We set the highest standards and execute beyond them. And if you’re like us, we can offer you the ultimate career opportunity that will light a fire within you. So, what’s the role all about? In this position we are looking for a strong DevOps Engineer to work with RnD DevOps teams, Cloud DevOps, and LOBs, managing a hybrid multi-cloud environment and infra & DevOps solutions. The Engineer will work with the Israel and Pune RnD teams as well as other support teams across the globe. We are seeking a talented DevOps Engineer to join our team. As a DevOps Engineer, you will be responsible for implementing and maintaining our continuous integration and delivery pipeline, as well as managing our infrastructure and ensuring its reliability, scalability, and security. We encourage innovative ideas, flexible work methods, knowledge collaboration, and good vibes! How will you make an impact? Implement and manage the continuous integration and delivery pipeline to automate software delivery processes. Collaborate with software developers to ensure that new features and applications are deployed in a reliable and scalable manner. Define and own the AWS environment strategy, including optimizing usage. Automate the DevOps pipeline and provisioning of environments. Manage and maintain our cloud infrastructure, including provisioning, configuration, and monitoring of servers and services. Provide technical guidance and support to other members of the team. Design, implement, and manage Docker containers and Kubernetes clusters to support our microservices architecture and containerized applications. Develop and maintain Docker images and Kubernetes deployment configurations, including pods, services, deployments, and persistent volumes. Implement and manage networking, storage, security, and monitoring solutions for Docker and Kubernetes environments. 
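The Kubernetes deployment configurations mentioned above are structured documents that Helm charts ultimately render. A minimal sketch of one built in Python (the app name, image, and replica count are invented, not NICE's actual configuration):

```python
# Build a minimal Kubernetes Deployment manifest (apps/v1) as a dict,
# the same shape a Helm template renders to YAML.

def deployment_manifest(name, image, replicas=3):
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            # selector must match the pod template's labels
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [{"name": name, "image": image}],
                },
            },
        },
    }
```

Keeping the selector and template labels derived from one `labels` dict avoids the common mismatch error that makes the API server reject the Deployment.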
Collaborate with software developers to containerize applications and optimize their performance for Docker and Kubernetes.
Automate deployment, scaling, and management tasks using Docker Compose, Kubernetes operators, Helm charts, and other tools.
Troubleshoot and resolve issues related to Docker containers, Kubernetes clusters, networking, and application deployment.

Have you got what it takes?
2-3 years of experience as a DevOps engineer with the AWS cloud.
Strong understanding of Kubernetes, Docker, Jenkins, Ansible, Terraform, and AWS.
Strong understanding of DevOps tools such as Maven, Ant, NAnt, MSBuild, code security (dynamic and static scans), GitHub, GitHub Actions, and logging mechanisms.
Working knowledge of AWS services, including EC2, VPC, S3, Lambda, RDS, Kafka, IAM, and others.
Exposure to enterprise software architectures, infrastructures, and integration with AWS (or any other cloud solution).
Experience with application monitoring metrics.
Good knowledge of shell scripting, Python, and PowerShell.
Good knowledge of Linux and Windows servers.
Comprehensive knowledge of design metrics, analytics tools, benchmarking activities, and related reporting to identify best practices.
Consistently demonstrates clear and concise written and verbal communication.
Passionately enthusiastic about DevOps and cloud technologies.
Ability to work independently, multi-task, and take ownership of various parts of a project or initiative.
Certifications such as Docker Certified Associate (DCA), Certified Kubernetes Administrator (CKA), or an AWS Associate certification are good to have.

What’s in it for you?
Join an ever-growing, market-disrupting global company where teams, comprised of the best of the best, work in a fast-paced, collaborative, and creative environment!
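The Kubernetes responsibilities above largely come down to producing and applying deployment configurations. A minimal sketch in Python of generating a Deployment manifest programmatically, the kind of thing a provisioning script might pipe to `kubectl apply -f -` (the app name, image, and resource limits below are illustrative assumptions, not taken from the posting):

```python
import json

def deployment_manifest(name, image, replicas=2, port=8080):
    """Build a minimal Kubernetes Deployment manifest as a plain dict.

    Names and defaults here are illustrative only.
    """
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [{
                        "name": name,
                        "image": image,
                        "ports": [{"containerPort": port}],
                        # Resource limits keep one misbehaving pod
                        # from starving the rest of the node.
                        "resources": {"limits": {"cpu": "500m",
                                                 "memory": "256Mi"}},
                    }]
                },
            },
        },
    }

# Hypothetical service; serialize to JSON (valid YAML input for kubectl).
manifest = deployment_manifest("orders-api", "registry.example.com/orders-api:1.4.2",
                               replicas=3)
print(json.dumps(manifest, indent=2))
```

Templating manifests this way (or via Helm values) is what keeps pod specs, services, and persistent volumes consistent across environments.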
As the market leader, every day at NICE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NICEr!

Enjoy NICE-FLEX!
At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.

Requisition ID: 7179
Reporting into: Tech Manager
Role Type: Individual Contributor

About NICE
NICE Ltd. (NASDAQ: NICE) software products are used by 25,000+ global businesses, including 85 of the Fortune 100 corporations, to deliver extraordinary customer experiences, fight financial crime and ensure public safety. Every day, NICE software manages more than 120 million customer interactions and monitors 3+ billion financial transactions. Known as an innovation powerhouse that excels in AI, cloud and digital, NICE is consistently recognized as the market leader in its domains, with over 8,500 employees across 30+ countries.

NICE is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, age, sex, marital status, ancestry, neurotype, physical or mental disability, veteran status, gender identity, sexual orientation or any other category protected by law.
Posted 3 weeks ago
0 years
0 Lacs
India
Remote
Step into the world of AI innovation with the Experts Community of Soul AI (by Deccan AI). We are looking for India’s top 1% Platform Engineers for a unique opportunity to work with industry leaders.

Who can be a part of the community?
We are looking for Platform Engineers focused on building scalable, high-performance AI/ML platforms. A strong background in cloud architecture, distributed systems, Kubernetes, and infrastructure automation is expected. If you have experience in this field, this is your chance to collaborate with industry leaders.

What’s in it for you?
Pay above market standards.
The role is contract-based, with project timelines from 2 to 12 months, or freelancing.
Be part of an elite community of professionals who can solve complex AI challenges.
Work location could be: remote (highly likely), onsite at a client location, or Deccan AI’s office in Hyderabad or Bangalore.

Responsibilities:
Architect and maintain scalable cloud infrastructure on AWS, GCP, or Azure using tools like Terraform and CloudFormation.
Design and implement Kubernetes clusters with Helm, Kustomize, and a service mesh (Istio, Linkerd).
Develop CI/CD pipelines using GitHub Actions, GitLab CI/CD, Jenkins, and Argo CD for automated deployments.
Implement observability solutions (Prometheus, Grafana, ELK stack) for logging, monitoring, and tracing; automate infrastructure provisioning with tools like Ansible, Chef, and Puppet; and optimize cloud costs and security.

Required Skills:
Expertise in cloud platforms (AWS, GCP, Azure) and infrastructure as code (Terraform, Pulumi), with strong knowledge of Kubernetes, Docker, CI/CD pipelines, and scripting (Bash, Python).
Experience with observability tools (Prometheus, Grafana, ELK stack) and security practices (RBAC, IAM).
Familiarity with networking (VPC, load balancers, DNS) and performance optimization.

Nice to Have:
Experience with chaos engineering (Gremlin, LitmusChaos) and canary or blue-green deployments.
Knowledge of multi-cloud environments, FinOps, and cost optimization strategies.

What are the next steps?
1. Register on our Soul AI website.
2. Our team will review your profile.
3. Clear all the screening rounds: complete the assessments once you are shortlisted. As soon as you pass all the screening rounds (assessments, interviews), you will be added to our Expert Community!
4. Profile matching and project allocation: be patient while we align your skills and preferences with the available projects.

Skip the Noise. Focus on Opportunities Built for You!
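The canary and blue-green deployments listed under "Nice to Have" follow a simple control loop: shift a small slice of traffic to the new version, check an error-rate signal (e.g. from Prometheus), then either widen the slice or roll back. A minimal sketch of that policy in Python; the start percentage, doubling factor, and error threshold are assumptions for illustration:

```python
def canary_steps(start_pct=5, factor=2, max_pct=100):
    """Return the traffic percentages for a progressive canary rollout.

    Hypothetical policy: start small, double each step, finish at 100%.
    """
    steps, pct = [], start_pct
    while pct < max_pct:
        steps.append(pct)
        pct *= factor
    steps.append(max_pct)
    return steps

def should_promote(error_rate, threshold=0.01):
    """Gate each step on an observed error rate; roll back if it exceeds
    the threshold. In practice the rate would come from a metrics query."""
    return error_rate <= threshold

print(canary_steps())  # [5, 10, 20, 40, 80, 100]
```

Tools such as Argo Rollouts automate exactly this loop, driving the traffic split through a service mesh like Istio or Linkerd.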
Posted 3 weeks ago
5.0 - 8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role: GCP Cloud Architect
Location: Hyderabad
Notice period: Immediate joiners needed.
Shift timings: US time zones
Work Mode: Work from Office

Job description:
Opportunity: We are seeking a highly skilled and experienced GCP Cloud Architect to join our dynamic technology team. You will play a crucial role in designing, implementing, and managing our Google Cloud Platform (GCP) infrastructure, with a primary focus on building a robust and scalable data lake in BigQuery. You will be instrumental in ensuring the reliability, security, and performance of our cloud environment, supporting critical healthcare data initiatives. This role requires strong technical expertise in GCP, excellent problem-solving abilities, and a passion for leveraging cloud technologies to drive impactful solutions within the healthcare domain.

Responsibilities:
Cloud Architecture & Design:
Design and architect scalable, secure, and cost-effective GCP solutions, with a strong emphasis on BigQuery for our data lake.
Define and implement best practices for GCP infrastructure management, security, networking, and data governance.
Develop and maintain comprehensive architectural diagrams, documentation, and standards.
Collaborate with data engineers, data scientists, and application development teams to understand their requirements and translate them into robust cloud solutions.
Evaluate and recommend new GCP services and technologies to optimize our cloud environment.
Understand and implement the fundamentals of GCP, including resource hierarchy, projects, organizations, and billing.

GCP Infrastructure Management:
Manage and maintain our existing GCP infrastructure, ensuring high availability, performance, and security.
Implement and manage infrastructure as code (IaC) using tools like Terraform or Cloud Deployment Manager.
Monitor and troubleshoot infrastructure issues, proactively identifying and resolving potential problems.
Implement and manage backup and disaster recovery strategies for our GCP environment.
Optimize cloud costs and resource utilization, including BigQuery slot management.

Collaboration & Communication:
Work closely with cross-functional teams, including data engineering, data science, application development, security, and compliance.
Communicate technical concepts and solutions effectively to both technical and non-technical stakeholders.
Provide guidance and mentorship to junior team members.
Participate in the on-call rotation as needed.
Develop and maintain thorough and reliable documentation of all cloud infrastructure processes, configurations, and security protocols.

Qualifications:
Bachelor's degree in Computer Science, Engineering, or a related field.
Minimum of 5-8 years of experience in designing, implementing, and managing cloud infrastructure, with a strong focus on Google Cloud Platform (GCP).
Proven experience in architecting and implementing data lakes on GCP, specifically using BigQuery.
Hands-on experience with ETL/ELT processes and tools, with strong proficiency in Google Cloud Composer (Apache Airflow).
Solid understanding of GCP services such as Compute Engine, Cloud Storage, networking (VPC, firewall rules, Cloud DNS), IAM, Cloud Monitoring, and Cloud Logging.
Experience with infrastructure-as-code (IaC) tools like Terraform or Cloud Deployment Manager.
Strong understanding of security best practices for cloud environments, including identity and access management, data encryption, and network security.
Excellent problem-solving, analytical, and troubleshooting skills.
Strong communication, collaboration, and interpersonal skills.

Bonus Points:
Experience with Apigee for API management.
Experience with containerization technologies like Docker and serverless container platforms like Cloud Run.
Experience with Vertex AI for machine learning workflows on GCP.
Familiarity with GCP healthcare products and solutions (e.g., Cloud Healthcare API).
Knowledge of healthcare data standards and regulations (e.g., HIPAA, HL7, FHIR).
GCP Professional Architect certification.
Experience with scripting languages (e.g., Python, Bash).
Experience with Looker.

Please send your resume to nitinkumar.b@apollohealthaxis.com
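The cost-optimization duty above is concrete for BigQuery: on-demand queries are billed by bytes scanned, so partitioning and clustering tables in the data lake directly cut spend. A back-of-the-envelope estimator, sketched in Python; the per-TiB rate is an assumption, so check current BigQuery pricing before relying on it:

```python
def bq_on_demand_cost(bytes_scanned, usd_per_tib=6.25):
    """Estimate on-demand query cost from bytes scanned.

    usd_per_tib is an assumed rate (verify against current BigQuery
    pricing). Partitioning and clustering shrink bytes_scanned, which
    is the whole point of optimizing table layout in the data lake.
    """
    tib = bytes_scanned / 2**40  # bytes -> tebibytes
    return round(tib * usd_per_tib, 4)

# A query scanning a full 500 GiB unpartitioned table vs. a single
# daily partition of ~4 GiB (hypothetical sizes):
full_scan = bq_on_demand_cost(500 * 2**30)
partition_scan = bq_on_demand_cost(4 * 2**30)
```

The same arithmetic is why architects watch `total_bytes_processed` in query dry runs, and why heavy, steady workloads often move to flat-rate slot reservations instead.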
Posted 3 weeks ago
3.0 - 5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Area(s) of responsibility
Experience: 3 to 5 years
Tools & Technologies: Site Reliability Engineering, AWS, PCF, DevOps, Tomcat 9, Nginx web server, Unix OS (Linux), Splunk, New Relic, load balancers, PuTTY, WinSCP, shell scripting, Stonebranch.

Job Responsibilities:
Diagnose and resolve technical issues with AWS services, applications, and infrastructure, often involving deep dives into logs and system analysis.
Resolve technical issues reported by users or detected through monitoring tools.
Assist with the deployment and configuration of cloud resources (e.g., EC2, S3, VPC), including virtual machines, storage, and networking, ensuring proper configuration and integration.
Monitor cloud infrastructure performance metrics, identify bottlenecks, and recommend solutions for improved performance and cost-efficiency.
Develop and implement automation scripts and tools to streamline support processes and automate infrastructure provisioning.
Experience with CI/CD pipelines for code deployment (CloudFormation, Ansible, Git, Maven, SonarQube, Gradle, Nexus, Bitbucket, UDeployer, Fortify, and other DevOps components) using YAML and the Classic Editor.
Pipeline scripting knowledge such as Groovy, shell scripts, and Python.
Write shell scripts and schedule them as cron jobs to automate daily maintenance tasks such as log rotation, report transfer, rolling server bounces, file-system cleanups, and alerts.

Primary Skills (must-have for the candidate):
Good knowledge of migration tasks, including planning, execution, and documentation.
Must have experience with cloud technologies like PCF/AWS.
Manage and resolve incidents, including escalating issues as needed, and analyse system performance data and logs to identify trends and forecast needs.
Set up high-availability environments using external hardware F5 load balancers, Nginx web servers, HTTP session replication, and clustering of WebLogic servers and services such as JDBC and JMS.
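The cron-driven housekeeping described above (log rotation, file-system cleanups) is typically a small script run nightly. A sketch in Python of the cleanup half; the directory layout, `.log` suffix, and 14-day retention are assumptions for illustration:

```python
import os
import time

def purge_old_logs(log_dir, max_age_days=14, suffix=".log"):
    """Delete rotated log files older than max_age_days.

    The kind of job a nightly cron entry would run; retention period
    and file suffix here are illustrative assumptions.
    """
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for name in os.listdir(log_dir):
        path = os.path.join(log_dir, name)
        # Only touch plain files with the expected suffix, and only
        # when their last-modified time is older than the cutoff.
        if (name.endswith(suffix) and os.path.isfile(path)
                and os.path.getmtime(path) < cutoff):
            os.remove(path)
            removed.append(name)
    return sorted(removed)
```

Scheduled with a crontab line like `0 2 * * * /usr/bin/python3 /opt/scripts/purge_logs.py` (path hypothetical), this replaces ad-hoc manual cleanups with a repeatable, logged task.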
Skills with an M/O flag are part of the specialization:
Capacity Management - PL2 (Functional)
Win the Customer - PL2 (Behavioural)
One Birlasoft - PL2 (Behavioural)
Results Matter - PL2 (Behavioural)
Get Future Ready - PL2 (Behavioural)
Availability Management - PL3 (Functional)
Service Level Management - PL2 (Functional)
Incident Management - PL3 (Functional)
IT Infrastructure - PL3 (Functional)
Help the Tribe - PL2 (Behavioural)
Think Holistically - PL2 (Behavioural)
GCP Administration - PL3 (Mandatory)
GCP DevOps - PL2 (Optional)
GCP IaC - PL3 (Mandatory)
Linux Administration - PL2 (Optional)
Wintel Administration - PL2 (Optional)
Posted 3 weeks ago
1.0 - 3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Summary
We are looking for a proactive and technically sound Service Engineer with hands-on experience in analytical instruments used in pharmaceutical production environments. The ideal candidate will be responsible for the calibration, installation, and servicing of instruments at client locations, while also supporting project execution with senior engineers. Strong communication skills and a solid understanding of pharma guidelines are essential.

Key Responsibilities:
Perform calibration of field instruments like NVPC, LPC, GIT, FIT, etc.
Handle equipment installation and breakdown support, and assist senior engineers on projects.
Ensure compliance with pharmaceutical manufacturing regulations and GMP standards.
Travel to various pharma client sites.
Prepare documentation and reports using MS Office tools.

Key Performance Expectations (KPIs):
Independently conduct NVPC and VPC calibrations within 3 months.
Understand the working principles of all our instruments within the first 3 months.
Demonstrate strong knowledge of pharma industry regulations and guidelines.

Working Conditions:
Frequent travel via bus, train, or bike within a 100 km radius.
Carry and manage calibration kits during field visits.
Flexibility to work on holidays or late hours based on client needs.

Requirements:
Diploma or degree in Instrumentation/Electronics or a related field.
1-3 years of experience in calibration/service of analytical instruments (preferred).
Strong technical and communication skills.
Willingness to travel and work independently.

Interested candidates can share their resume at careers@shreedhargroup.com
#service #fieldservice #engineer #electrical #instrumentation #electronics #electronicsandcommunication #electronicsandinstrumentation #fieldserviceengineer
Posted 3 weeks ago
12.0 - 15.0 years
0 Lacs
Kochi, Kerala, India
On-site
Role: Technical Lead
Role type: Full-time / Permanent
Industry: Telecom/IoT/Electronics/Semiconductor
Work Experience: 12-15 years
Job Location: Kochi, India

Role Overview:
As a Technical Lead at Cavli Wireless, you will lead the design, development, and deployment of scalable cloud-based solutions. Collaborating with cross-functional teams, you will ensure the seamless integration of cloud technologies to support our IoT products and services.

Architectural Design: Spearhead the design and implementation of cloud infrastructure and application architectures, ensuring they are scalable, secure, and highly available.
Technical Leadership: Provide guidance and mentorship to development teams, fostering a culture of continuous improvement and adherence to best practices.
Code Quality Assurance: Conduct thorough code reviews, debugging sessions, and knowledge-sharing initiatives to maintain high-quality code standards.
Requirements Analysis: Collaborate with stakeholders to gather and translate business requirements into technical specifications and actionable tasks.
Project Management: Define project scope, timelines, and deliverables in coordination with stakeholders, ensuring alignment with business objectives.
Best Practices Implementation: Advocate for and implement industry best practices in coding, testing, and deployment processes.
Version Control Management: Utilize code versioning tools like GitHub to manage and track code changes effectively.

🛠️ Technical Proficiency
The ideal candidate should possess:
Frontend Technologies: Expertise in the Angular framework for building dynamic and responsive user interfaces.
Backend Technologies: Proficiency in Node.js for server-side development.
Programming Languages: Strong command of TypeScript and JavaScript; working knowledge of Python.
Cloud Platforms: Extensive experience with AWS services, including but not limited to:
Compute: EC2, Lambda
Storage: S3, RDS
Networking: VPC, IoT Core
Serverless: API Gateway, DynamoDB
DevOps Tools: CodePipeline, CodeDeploy, CloudFormation
Version Control Systems: Proficient in using GitHub for code versioning and collaboration.

🎓 Educational Qualification
Required: Bachelor’s degree in Computer Science, Information Technology, or a related field (B.E./B.Tech/MCA).

👥 Leadership & Mentoring
Lead by example, demonstrating technical excellence and a proactive approach to problem-solving.
Mentor junior developers, providing guidance on technical challenges and career development.
Foster a collaborative and inclusive team environment, encouraging open communication and knowledge sharing.

Interested candidates can share their resumes with benz.franco@cavliwireless.com or careers@cavliwireless.com
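The serverless stack above (API Gateway in front of Lambda, with DynamoDB behind it) pairs a JSON request event with a handler function. The role targets Node.js, but the handler shape is the same in any runtime; a hedged Python sketch, where the device table and IDs are purely hypothetical stand-ins for a DynamoDB lookup:

```python
import json

def handler(event, context=None):
    """Minimal Lambda-style handler for an API Gateway proxy integration.

    Looks up a device by the id in the request path. The in-memory dict
    below stands in for a DynamoDB table and exists only for illustration.
    """
    devices = {"dev-001": {"status": "online", "rssi": -71}}

    # API Gateway proxy events carry path variables under pathParameters.
    device_id = (event.get("pathParameters") or {}).get("deviceId")
    device = devices.get(device_id)

    if device is None:
        return {"statusCode": 404,
                "body": json.dumps({"error": "device not found"})}
    return {"statusCode": 200, "body": json.dumps(device)}
```

Wired to a route such as `GET /devices/{deviceId}`, API Gateway invokes the handler per request and returns the status code and body verbatim, so the function can be unit-tested locally with a plain dict event.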
Posted 3 weeks ago
India has seen a growing demand for professionals with expertise in Virtual Private Cloud (VPC) technology. As businesses continue to migrate to cloud-based solutions, the need for skilled individuals who can design, implement, and manage VPC environments has never been higher. Job seekers in India looking to pursue a career in VPC have a range of opportunities available to them.
The average salary range for VPC professionals in India varies based on experience levels. Entry-level positions can expect to earn between ₹4-6 lakhs per annum, while experienced professionals can earn upwards of ₹15 lakhs per annum.
A typical career path in VPC jobs in India may start as a Junior VPC Engineer, progressing to roles such as VPC Administrator, VPC Architect, and finally reaching positions like VPC Manager or VPC Consultant.
In addition to expertise in VPC technology, professionals in this field are often expected to have knowledge of networking, security, cloud computing platforms, and infrastructure design.
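The networking knowledge mentioned above often gets tested with subnet planning: carving a VPC's CIDR block into per-availability-zone subnets. Python's standard `ipaddress` module makes this a one-liner to sketch (the CIDR values below are just examples):

```python
import ipaddress

def plan_subnets(vpc_cidr, subnet_prefix):
    """Split a VPC CIDR block into equal subnets, e.g. one per AZ.

    A common interview warm-up: what subnets do you get when a /16
    VPC is divided into /18s? Example CIDRs only.
    """
    vpc = ipaddress.ip_network(vpc_cidr)
    return [str(net) for net in vpc.subnets(new_prefix=subnet_prefix)]

# A /16 VPC split into four /18 subnets:
subnets = plan_subnets("10.0.0.0/16", 18)
print(subnets)  # ['10.0.0.0/18', '10.0.64.0/18', '10.0.128.0/18', '10.0.192.0/18']
```

Being able to reason through such splits quickly, and to explain why each AWS subnet also reserves a handful of addresses for internal use, is exactly the kind of fundamentals interviewers probe.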
As you prepare for VPC job interviews in India, make sure to brush up on your technical skills, stay updated with the latest trends in cloud computing, and showcase your problem-solving abilities. With dedication and perseverance, you can land a rewarding career in the thriving VPC job market in India. Good luck!