Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
4.0 - 7.0 years
10 - 15 Lacs
Noida
Work from Office
As a Consultant in the Automation domain, you will be responsible for delivering automation use cases enabled by AI and Cloud technologies. In this role, you play a crucial part in building the next-generation autonomous networks. You will develop efficient and scalable automation solutions, leveraging your technical expertise, problem-solving abilities, and domain knowledge to drive innovation and efficiency.
You have: Bachelor's degree in Computer Science, Engineering, or a related field preferred, with 8-10+ years of experience in automation or telecommunications. Understanding of telecom network architecture, including Core networks, OSS, and BSS ecosystems, along with industry frameworks like TM Forum Open APIs and eTOM. Practical experience in programming and scripting languages such as Python, Go, Java, or Bash, and automation tools like Terraform, Ansible, and Helm. Hands-on experience with CI/CD pipelines using Jenkins, GitLab CI, or ArgoCD, as well as containerization (Docker) and orchestration (Kubernetes, OpenShift).
It would be nice if you also had: Exposure to agile development methodologies and cross-functional collaboration. Experience with real-time monitoring tools (Prometheus, ELK Stack, OpenTelemetry, Grafana) and AI/ML for predictive automation and network optimization. Familiarity with GitOps methodologies and automation best practices for telecom environments.
You will: Design, develop, test, and deploy automation scripts using languages such as Python, Go, Bash, or YAML. Automate the provisioning, configuration, and lifecycle management of network and cloud infrastructure. Design and maintain CI/CD pipelines using tools like Jenkins, GitLab CI/CD, ArgoCD, or Tekton. Automate continuous integration, testing, deployment, and rollback mechanisms for cloud-native services. Implement real-time monitoring, logging, and tracing using tools such as Prometheus, Grafana, ELK, and OpenTelemetry.
Develop AI/ML-driven observability solutions for predictive analytics and proactive fault resolution, integrating AI/ML models to enable predictive scaling. Automate self-healing mechanisms to remediate network and application failures. Collaborate with DevOps and Network Engineers to align automation with business goals.
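The self-healing responsibility above can be sketched in a few lines of Python. This is a minimal illustration, not a production design: `check_health` and `restart_service` are hypothetical stand-ins for a real monitoring probe (e.g. a Prometheus alert) and a real orchestration call.

```python
def check_health(service_state):
    # Stand-in probe: a service is healthy if its status flag is "up".
    return service_state.get("status") == "up"

def restart_service(service_state):
    # Stand-in remediation: bring the service back up and count the restart.
    service_state["status"] = "up"
    service_state["restarts"] = service_state.get("restarts", 0) + 1

def self_heal(services, max_attempts=3):
    """Scan services once and remediate any that fail the health probe."""
    remediated = []
    for name, state in services.items():
        attempts = 0
        while not check_health(state) and attempts < max_attempts:
            restart_service(state)
            attempts += 1
        if attempts:
            remediated.append(name)
    return remediated

services = {"amf": {"status": "down"}, "smf": {"status": "up"}}
print(self_heal(services))  # ['amf']
```

A real implementation would replace the probe with telemetry queries and the remediation with Kubernetes or network-controller API calls, but the control loop shape stays the same.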
Posted 3 weeks ago
3.0 - 5.0 years
7 - 12 Lacs
Bengaluru
Work from Office
As a Software Engineer, you will be responsible for designing, developing, and maintaining cloud-native applications and platforms. You will work on cloud security, microservices, container orchestration, and API-driven architectures while implementing best practices in software design and system scalability.
You have: Bachelor's or Master's degree in Electronics, Computer Science, Electrical Engineering, or a related field with 8+ years of work experience. Experience in container orchestration using Kubernetes, Helm, and OpenShift. Experience with API Gateway, Kafka Messaging, and Component Life Cycle Management. Expertise in the Linux platform, including Linux Containers, Namespaces, and CGroups. Experience in scripting languages Perl/Python and CI/CD tools Jenkins, Git, Helm, and Ansible.
It would be nice if you also had: Familiarity with open-source PaaS environments like OpenShift. Experience with evolutionary architecture and microservices development.
You will design and develop software components based on cloud-native principles and leading PaaS platforms. You will implement scalable, secure, and resilient microservices and cloud-based applications. You will develop APIs and integrate with API gateways, message brokers (Kafka), and containerized environments. You will apply design patterns, domain-driven design (DDD), component-based architecture, and evolutionary architecture principles. You will lead the end-to-end development of features and EPICs, ensuring high performance and scalability. You will define and implement container management strategies, leveraging Kubernetes, OpenShift, and Helm.
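The broker integration mentioned above follows the publish/subscribe pattern. The sketch below mimics that pattern with an in-memory bus; the `MessageBus` class and its method names are illustrative and not a real Kafka client API.

```python
from collections import defaultdict

class MessageBus:
    """Toy topic-based pub/sub bus, illustrating the pattern only."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        # Register a callback for every message on the given topic.
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        # Deliver the message to every handler registered on the topic.
        for handler in self._subscribers[topic]:
            handler(message)

bus = MessageBus()
received = []
bus.subscribe("orders", received.append)
bus.publish("orders", {"id": 1})
print(received)  # [{'id': 1}]
```

With a real broker, the bus becomes a cluster of partitioned, durable topic logs, but producers and consumers still interact through this topic/handler shape.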
Posted 3 weeks ago
5.0 - 10.0 years
10 - 20 Lacs
Hyderabad, Bengaluru
Work from Office
- Azure Infra & DevOps - Terraform, YAML, CI/CD (Azure DevOps, GitHub, Bamboo) - Azure services, Helm, containers - AWS to Azure migration - Infra deployment, monitoring, troubleshooting
Required Candidate profile
- Azure Infra, DevOps, Terraform, YAML, CI/CD, and AWS to Azure migration. - Strong in containerized deployments, monitoring tools, and troubleshooting cloud environments.
Posted 3 weeks ago
5.0 - 10.0 years
10 - 20 Lacs
Hyderabad, Bengaluru
Work from Office
- Node.js, Python - AWS to Azure migration - Azure Functions, AKS, REST API - CI/CD, Helm, App Service - Code refactoring, SDK conversion - Unit testing, deployment & support
Required Candidate profile
- Must have hands-on AWS to Azure migration, Azure Functions, AKS, and cloud SDK conversion. - Good understanding of cloud infra, CI/CD, and containerized deployments.
Posted 3 weeks ago
4.0 - 9.0 years
6 - 10 Lacs
Pune
Work from Office
Primary Skills
.NET Core and .NET Framework development: In-depth experience in building scalable and maintainable applications using C#. This includes web applications, APIs, background services, and integration with third-party systems.
Azure App Services, Azure Functions, and Azure DevOps: Hands-on expertise in deploying applications to Azure App Services, creating serverless workflows with Azure Functions, and managing end-to-end CI/CD pipelines using Azure DevOps.
Docker containerization and image management: Skilled in writing Dockerfiles, building and managing container images, and using Docker Compose for multi-container applications. Ensures consistent environments across development, testing, and production.
Kubernetes orchestration and deployment: Proficient in deploying and managing containerized applications using Kubernetes. Experience includes writing YAML manifests for deployments, services, config maps, and secrets, as well as managing scaling, rolling updates, and health checks.
CI/CD pipeline creation and management: Capable of designing and implementing automated pipelines for building, testing, and deploying applications. Familiar with tools like Azure DevOps, GitHub Actions, and Jenkins to ensure smooth and reliable delivery processes.
RESTful API development and integration: Strong understanding of REST principles and experience in designing, building, and consuming APIs. Uses tools like Swagger/OpenAPI for documentation and Postman for testing and validation.
Microservices architecture design: Experience in designing and implementing microservices-based systems using .NET and Docker. Focuses on modularity, scalability, and resilience, with inter-service communication via HTTP or messaging systems.
Infrastructure as Code (IaC): Skilled in automating infrastructure provisioning using tools like Bicep, ARM templates, or Terraform. Ensures consistent and repeatable deployments of Azure resources across environments.
Secondary Skills
Azure Monitor, Application Insights, and Log Analytics: Familiar with monitoring and diagnostics tools in Azure to track application performance, detect anomalies, and troubleshoot issues using telemetry and logs.
Helm charts for Kubernetes deployments: Basic to intermediate knowledge of using Helm to package, configure, and deploy Kubernetes applications, enabling reusable and version-controlled deployments.
Git and version control best practices: Proficient in using Git for source control, including branching strategies, pull requests, and code reviews to maintain code quality and collaboration.
SQL and NoSQL database integration: Experience in integrating applications with databases like Azure SQL, PostgreSQL, and Cosmos DB. Capable of writing optimized queries and managing database connections securely.
Security best practices in cloud and container environments: Understanding of authentication, authorization, and secure communication practices. Familiar with managing secrets, certificates, and identity access in Azure and Kubernetes.
Agile/Scrum methodologies: Comfortable working in Agile teams, participating in sprint planning, daily stand-ups, retrospectives, and using tools like Azure Boards or Jira for task tracking.
Unit testing and integration testing frameworks: Knowledge of writing and maintaining tests using frameworks like xUnit, NUnit, or MSTest. Ensures code reliability and supports test-driven development practices.
Basic networking and DNS concepts in cloud environments: Understanding of virtual networks, subnets, firewalls, load balancers, and DNS configurations in Azure and Kubernetes to support application connectivity and security.
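The REST API skill above centers on validating what comes over the wire. Here is a minimal sketch of request-payload validation against a hand-rolled schema dict; in practice Swagger/OpenAPI tooling would generate this from the API spec, so the schema format and function name here are purely illustrative.

```python
def validate_payload(payload, schema):
    """Return a list of validation errors (empty list means valid)."""
    errors = []
    for field, expected_type in schema.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            errors.append(f"wrong type for {field}")
    return errors

# Illustrative schema for an order-creation endpoint.
order_schema = {"id": int, "amount": float, "currency": str}

print(validate_payload({"id": 1, "amount": 9.99, "currency": "USD"}, order_schema))  # []
print(validate_payload({"id": "x"}, order_schema))
```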
Posted 3 weeks ago
3.0 years
0 Lacs
India
On-site
Job description Job Overview: We are seeking a skilled Golang Developer to join our team and play a crucial role in optimizing, updating, and maintaining our cloud-based systems. The ideal candidate will have a deep understanding of cloud system architecture and experience in writing efficient, scalable, and maintainable Golang code. Experience Level: Minimum 3 years. Key Responsibilities: Develop, optimize, and maintain Golang-based applications in cloud environments. Analyze, refactor, and enhance existing Golang codebases to improve performance and scalability. Design and implement robust cloud system architectures, ensuring reliability, security, and efficiency. Work with DevOps and cloud engineering teams to deploy, monitor, and troubleshoot Golang applications. Ensure best coding practices, performance tuning, and adherence to security standards. Collaborate with cross-functional teams to identify and solve technical challenges. Optimize APIs, microservices, and database interactions for improved system performance. Stay up to date with Golang and cloud technologies to suggest and implement improvements. Required Skills & Qualifications: Strong proficiency in Golang with hands-on experience in writing production-grade applications. Solid understanding of cloud system architecture, including AWS, GCP, or Azure. Experience with microservices, serverless computing, and containerization (Docker, Kubernetes). Proficiency in working with SQL/NoSQL databases and optimizing queries. Familiarity with CI/CD pipelines, automation, and infrastructure as code. Strong debugging, profiling, and performance tuning skills. Knowledge of API design (REST, gRPC) and authentication mechanisms. Experience with monitoring, logging, and distributed tracing in cloud environments. Preferred Qualifications: Experience with message queues (Kafka, RabbitMQ, NATS). Familiarity with Golang frameworks such as Gin, Echo, or Fiber. 
Understanding of distributed systems and event-driven architecture. Hands-on experience with Terraform, Helm, or cloud-native technologies. Why Join Us? Work on cutting-edge cloud solutions and optimize high-performance systems. Opportunity to contribute to architectural decisions and system improvements. Collaborative team with a strong culture of innovation and continuous learning. Competitive salary, flexible work options, and career growth opportunities. Apply now and let’s build something amazing together!
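One concrete pattern behind the "robust cloud system" responsibility above is retry with exponential backoff around transient API failures. A hedged sketch follows (the posting is for Go, but the pattern is language-agnostic; `flaky` is a stand-in for a real network call):

```python
import time

def retry(func, attempts=3, base_delay=0.01):
    """Call func, retrying with exponential backoff on exception."""
    for i in range(attempts):
        try:
            return func()
        except Exception:
            if i == attempts - 1:
                raise  # exhausted retries: surface the error
            time.sleep(base_delay * (2 ** i))  # 0.01s, 0.02s, ...

calls = {"n": 0}
def flaky():
    # Fails twice, then succeeds -- mimicking a transient cloud API error.
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient")
    return "ok"

print(retry(flaky))  # ok
```

Production versions usually add jitter to the delay and retry only on error classes known to be transient.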
Posted 3 weeks ago
6.0 - 11.0 years
20 - 35 Lacs
Gurugram
Work from Office
Kubernetes Admin Experience Level: Senior Level About Company: Nomiso is a product and services engineering company. We are a team of Software Engineers, Architects, Managers, and Cloud Experts with expertise in Technology and Delivery Management. Our mission is to empower and enhance the lives of our customers through efficient solutions for their complex business problems. At Nomiso we encourage entrepreneurial spirit - to learn, grow and improve. A great workplace thrives on ideas and opportunities; that is a part of our DNA. We're in pursuit of colleagues who share similar passions, are nimble and thrive when challenged. We offer a positive, stimulating and fun environment – with opportunities to grow, a fast-paced approach to innovation, and a place where your views are valued and encouraged. We invite you to push your boundaries and join us in fulfilling your career aspirations! What You Can Expect from Us: Here at Nomiso, we work hard to provide our team with the best opportunities to grow their careers. You can expect to be a pioneer of ideas, a student of innovation, and a leader of thought. Innovation and thought leadership are at the center of everything we do, at all levels of the company. Let’s make your career great! Position Overview: We are seeking an experienced Senior Kubernetes Administrator with expertise in Rancher-managed Kubernetes clusters in an on-premises setup. This L3/SME role involves leading cluster architecture, automation, security hardening, and performance optimization, while mentoring the junior team and supporting critical workloads. Roles and Responsibilities: Architect, configure, and administer Rancher-managed on-prem Kubernetes clusters. Lead lifecycle management: upgrades, scaling, high availability, and disaster recovery. Build automation for cluster provisioning and configuration using RKE, Helm, Ansible, etc. Implement advanced RBAC, pod security policies, network policies, and secrets management.
Manage persistent volumes, CSI drivers, and local/remote storage integrations. Lead root cause analysis for performance issues, failures, and outages. Integrate monitoring/logging tools (Prometheus, Grafana, Loki, ELK). Work closely with InfoSec teams for vulnerability management and compliance. Mentor junior admins and establish SOPs, architectural guidelines, and security baselines. Must Have Skills: 7–8 years of Linux and containerization experience, with 4+ years in Kubernetes (on-prem). Deep experience with Rancher UI/CLI, RKE, Helm, and K8s architecture - RKE2, Rancher-managed. Familiarity with RKE (Rancher Kubernetes Engine). Should have worked with Pod Disruption Budgets across K8s. Experience with Kafka – Apache/Confluent. Should have extensive experience with Prometheus, OpenTelemetry & Grafana. Proficient in scripting (Bash, Python) and automation tools like Ansible or Terraform. Expertise in cluster-level networking, ingress controllers, and service meshes (optional). Strong debugging and monitoring skills using Rancher-native and third-party tools. Hands-on with GitOps workflows and secure CI/CD pipelines. Good to Have Skills: Certified Kubernetes Administrator (CKA). Database experience would be a plus, with flavours such as PostgreSQL, Mongo, Oracle & Redis Cache. Experience with bare metal provisioning, VM infrastructure, or storage systems. Soft Skills: Leadership in operational excellence and incident management. Strong communication with cross-functional teams and stakeholders. Ability to manage critical incidents and mentor junior engineers. Qualification: BE/BTech/MCA/ME/MTech/MS in Computer Science or a related technical field, or equivalent practical experience. Location: Gurugram / Onsite
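The Pod Disruption Budget requirement above reduces to simple arithmetic: with a `minAvailable` budget, only pods above that floor may be voluntarily evicted. This sketch mirrors that rule; it is an illustration of the semantics, not the full Kubernetes eviction API behaviour.

```python
def allowed_disruptions(ready_replicas, min_available):
    """Pods that may be voluntarily evicted without violating the budget."""
    return max(0, ready_replicas - min_available)

# A 5-replica workload with minAvailable=3 tolerates 2 voluntary evictions.
print(allowed_disruptions(5, 3))  # 2
# Already below the floor: no voluntary evictions are allowed.
print(allowed_disruptions(2, 3))  # 0
```

This is why node drains stall when a PDB's `allowedDisruptions` hits zero: the eviction API refuses further voluntary evictions until more replicas become ready.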
Posted 3 weeks ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About Us IAMOPS is a DevOps-focused services company helping startups and enterprises build scalable, reliable, and secure infrastructure. Our team thrives on solving complex infrastructure challenges, implementing automation, and working directly with clients to deliver value through modern DevOps practices. Job Summary We are seeking a highly capable and experienced Senior DevOps Engineer with 4–5 years of hands-on experience. The ideal candidate must possess deep knowledge of Linux systems, networking, scripting (Bash, Python), and automation tools, with the ability to take ownership of projects, collaborate directly with clients, and lead internal team efforts when required. You’ll play a key role in delivering DevOps solutions across various client environments while mentoring junior team members and driving technical excellence. Key Responsibilities Client-Facing DevOps Delivery: Work directly with client stakeholders to gather requirements, understand their infrastructure pain points, and deliver robust DevOps solutions. Linux & Networking Mastery: Architect and troubleshoot systems with a strong foundation in Linux internals, process management, network stack, routing, firewalls, etc. Automation & Scripting: Automate repetitive tasks using Bash and Python scripts. Maintain and extend reusable automation assets. Infrastructure as Code (IaC): Develop and manage infrastructure using tools like Terraform, Ansible, or similar. CI/CD Ownership: Build, maintain, and optimize CI/CD pipelines using Jenkins, GitHub Actions, GitLab CI, etc. Containerization & Orchestration: Deploy and manage applications using Docker and Kubernetes in production environments. Cloud Management: Architect and manage infrastructure across AWS, Azure, or GCP. Implement cost-effective and scalable cloud strategies. Monitoring & Logging: Implement observability stacks like Prometheus, Grafana, ELK, or cloud-native solutions.
Mentorship: Guide and support junior engineers; contribute to knowledge-sharing, code reviews, and internal standards. Key Requirements 4–5 years of hands-on DevOps experience in production environments. Strong fundamentals in: Linux administration and troubleshooting. Computer Networking – firewalls, routing, DNS, load balancing, NAT, etc. Scripting – Bash (required), Python (preferred). Experience with: CI/CD tools (Jenkins, GitLab CI, GitHub Actions). Docker and Kubernetes. Cloud platforms – AWS (preferred), GCP or Azure. Infrastructure-as-Code – Terraform, Ansible, or similar. Ability to work independently with clients, understand business needs, and translate them into technical solutions. Proven experience in collaborating with or leading small teams in a fast-paced environment. Nice to Have Cloud or Kubernetes certifications (AWS Certified DevOps Engineer, CKA, etc.) Familiarity with GitOps, Helm, and service mesh architectures. Exposure to monitoring tools like Datadog, New Relic, or OpenTelemetry. Soft Skills Strong communication skills (written and verbal) to interact effectively with clients and team members. Mature problem-solver who can anticipate issues and resolve them proactively. Organized and self-motivated with a willingness to take ownership of projects. Leadership potential with a collaborative team mindset. Why Join Us? Work with cutting-edge DevOps stacks and innovative startups globally. Be part of a collaborative, learning-focused culture. Opportunity to grow into technical leadership roles. Flexible working environment with a focus on outcomes. Skills: github actions,gitlab ci,jenkins,terraform,elk,aws,python,basic networking,devops,linux,grafana,automation,bash,ansible,azure,gcp,kubernetes,docker,prometheus,networking,infrastructure
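The subnetting fundamental listed above is easy to demonstrate with Python's standard `ipaddress` module; the `10.0.0.0/24` block below is just an example address range.

```python
import ipaddress

# Split an example /24 block into four /26 subnets.
vpc = ipaddress.ip_network("10.0.0.0/24")
subnets = list(vpc.subnets(new_prefix=26))
for s in subnets:
    # Each /26 holds 64 addresses; network and broadcast are reserved.
    print(s, "usable hosts:", s.num_addresses - 2)
# First line: 10.0.0.0/26 usable hosts: 62  (4 subnets in total)
```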
Posted 3 weeks ago
5.0 years
0 Lacs
Surat, Gujarat, India
On-site
About Us IAMOPS is a DevOps-focused services company helping startups and enterprises build scalable, reliable, and secure infrastructure. Our team thrives on solving complex infrastructure challenges, implementing automation, and working directly with clients to deliver value through modern DevOps practices. Job Summary We are seeking a highly capable and experienced Senior DevOps Engineer with 4–5 years of hands-on experience. The ideal candidate must possess deep knowledge of Linux systems, networking, scripting (Bash, Python), and automation tools, with the ability to take ownership of projects, collaborate directly with clients, and lead internal team efforts when required. You’ll play a key role in delivering DevOps solutions across various client environments while mentoring junior team members and driving technical excellence. Key Responsibilities Client-Facing DevOps Delivery: Work directly with client stakeholders to gather requirements, understand their infrastructure pain points, and deliver robust DevOps solutions. Linux & Networking Mastery: Architect and troubleshoot systems with a strong foundation in Linux internals, process management, network stack, routing, firewalls, etc. Automation & Scripting: Automate repetitive tasks using Bash and Python scripts. Maintain and extend reusable automation assets. Infrastructure as Code (IaC): Develop and manage infrastructure using tools like Terraform, Ansible, or similar. CI/CD Ownership: Build, maintain, and optimize CI/CD pipelines using Jenkins, GitHub Actions, GitLab CI, etc. Containerization & Orchestration: Deploy and manage applications using Docker and Kubernetes in production environments. Cloud Management: Architect and manage infrastructure across AWS, Azure, or GCP. Implement cost-effective and scalable cloud strategies. Monitoring & Logging: Implement observability stacks like Prometheus, Grafana, ELK, or cloud-native solutions.
Mentorship: Guide and support junior engineers; contribute to knowledge-sharing, code reviews, and internal standards. Key Requirements 4–5 years of hands-on DevOps experience in production environments. Strong fundamentals in: Linux administration and troubleshooting. Computer Networking – firewalls, routing, DNS, load balancing, NAT, etc. Scripting – Bash (required), Python (preferred). Experience with: CI/CD tools (Jenkins, GitLab CI, GitHub Actions). Docker and Kubernetes. Cloud platforms – AWS (preferred), GCP or Azure. Infrastructure-as-Code – Terraform, Ansible, or similar. Ability to work independently with clients, understand business needs, and translate them into technical solutions. Proven experience in collaborating with or leading small teams in a fast-paced environment. Nice to Have Cloud or Kubernetes certifications (AWS Certified DevOps Engineer, CKA, etc.) Familiarity with GitOps, Helm, and service mesh architectures. Exposure to monitoring tools like Datadog, New Relic, or OpenTelemetry. Soft Skills Strong communication skills (written and verbal) to interact effectively with clients and team members. Mature problem-solver who can anticipate issues and resolve them proactively. Organized and self-motivated with a willingness to take ownership of projects. Leadership potential with a collaborative team mindset. Why Join Us? Work with cutting-edge DevOps stacks and innovative startups globally. Be part of a collaborative, learning-focused culture. Opportunity to grow into technical leadership roles. Flexible working environment with a focus on outcomes. Skills: github actions,gitlab ci,jenkins,terraform,elk,aws,python,basic networking,devops,linux,grafana,automation,bash,ansible,azure,gcp,kubernetes,docker,prometheus,networking,infrastructure
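The CI/CD ownership described above is, at its core, sequential stage execution with fail-fast semantics. This toy sketch shows the shape of that control flow; stage names and the `run_pipeline` helper are illustrative, not a Jenkins or GitLab CI API.

```python
def run_pipeline(stages):
    """Run (name, func) stages in order; stop at the first failure."""
    results = []
    for name, func in stages:
        ok = bool(func())
        results.append((name, ok))
        if not ok:
            break  # fail fast, like a pipeline aborting later stages
    return results

stages = [
    ("build", lambda: True),
    ("test", lambda: False),   # a failing test stage
    ("deploy", lambda: True),  # never reached
]
print(run_pipeline(stages))  # [('build', True), ('test', False)]
```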
Posted 3 weeks ago
10.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Saviynt is an identity authority platform built to power and protect the world at work. In a world of digital transformation, where organizations are faced with increasing cyber risk but cannot afford defensive measures to slow down progress, Saviynt’s Enterprise Identity Cloud gives customers unparalleled visibility, control and intelligence to better defend against threats while empowering users with right-time, right-level access to the digital technologies and tools they need to do their best work. We are seeking a highly skilled Senior Cloud Security Engineer with 8 – 10 years of experience to join our team. The ideal candidate will have extensive expertise in cloud security, particularly in AWS and Azure environments, along with strong knowledge of Infrastructure as Code (IAC) using Terraform and Helm Charts. The Cloud Security Engineer will be responsible for ensuring the security and compliance of our cloud infrastructure, designing, and implementing security solutions, and collaborating with cross-functional teams to maintain a secure cloud environment. WHAT YOU WILL BE DOING Conduct in-depth penetration testing of cloud infrastructure, deployment models, and cloud-native services on AWS and Azure Perform security assessments and penetration testing on Kubernetes clusters (EKS and AKS), including container images and associated components Identify and exploit misconfigurations or vulnerabilities in Kubernetes clusters, workload security, and related cloud environments Analyse and prioritize vulnerabilities across AWS, Azure, and containerized deployments based on risk, impact, and business context Prepare comprehensive reports detailing findings, potential impacts, and actionable remediation steps. 
Communicate these reports effectively to both technical and non-technical stakeholders Collaborate with Cloud Ops, DevOps, and Cloud Engineering teams to provide expert guidance and support for remediating vulnerabilities in cloud infrastructure and containerized environments Leverage and customize industry-standard security tools (e.g., Trivy, kube-hunter, Aqua, Falco) and develop custom scripts or tools to enhance testing capabilities. Automate repetitive tasks to streamline penetration testing workflows Participate in threat modelling exercises to identify risks specific to AWS, Azure, EKS, and AKS environments Ensure all penetration testing activities adhere to industry standards and compliance frameworks, such as NIST, ISO 27001, CSA, and Kubernetes Security Best Practices Develop and communicate targeted remediation strategies for cloud and container security risks, ensuring alignment with organizational goals and business priorities Mentor and guide junior penetration testers, fostering continuous learning and professional growth in cloud and container security practices WHAT YOU BRING Bachelor’s degree in computer science, Information Technology, or related field. 8 to 10 years of experience in cloud security, with a focus on AWS and Azure platforms. Strong understanding of cloud security architecture, including network security, identity and access management, encryption, and data protection. 
In-depth knowledge of AWS and Azure Cloud Platform with strong understanding of security principles, standards, and frameworks (e.g., NIST, CIS, ISO 27001) as applied to cloud environments Hands-on experience with cloud-native security services and solutions in AWS and Azure Strong understanding of Kubernetes (EKS, AKS) and container security best practices Hands-on experience with cloud security tools and technologies, such as cloud security posture management (CSPM) and cloud workload protection platforms (CWPP) Proficiency in scripting and automation using Python, PowerShell, Bash Shell, or similar languages. Experience with Infrastructure as Code (IAC) tools like Terraform, CloudFormation, and Helm charts for provisioning and managing cloud resources. Relevant certifications such as AWS Certified Security - Specialty, Azure Security Engineer Associate, Certified Cloud Security Professional (CCSP), or equivalent are a plus. Excellent communication skills with the ability to collaborate effectively with cross-functional teams and stakeholders. Strong problem-solving skills and a proactive approach to identifying and resolving security issues Saviynt is an amazing place to work. We are a high-growth, Platform as a Service company focused on Identity Authority to power and protect the world at work. You will experience tremendous growth and learning opportunities through challenging yet rewarding work that directly impacts our customers, all within a welcoming and positive work environment. If you're resilient and enjoy working in a dynamic environment, you belong with us! Saviynt is an equal opportunity employer, and we welcome everyone to our team. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or veteran status.
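The CSPM experience called for above boils down to scanning cloud configuration for risky patterns. This simplified sketch flags security group rules open to the world on sensitive ports; the rule dicts merely model, and do not reproduce, a real AWS API response shape.

```python
# Ports a CSPM policy would typically treat as sensitive (SSH, RDP).
SENSITIVE_PORTS = {22, 3389}

def open_to_world(rules):
    """Return rules that expose a sensitive port to 0.0.0.0/0."""
    findings = []
    for rule in rules:
        if rule["cidr"] == "0.0.0.0/0" and rule["port"] in SENSITIVE_PORTS:
            findings.append(rule)
    return findings

rules = [
    {"port": 22, "cidr": "0.0.0.0/0"},    # SSH open to the internet: flagged
    {"port": 443, "cidr": "0.0.0.0/0"},   # public HTTPS: fine
    {"port": 3389, "cidr": "10.0.0.0/8"}, # RDP restricted to internal range: fine
]
print(open_to_world(rules))  # [{'port': 22, 'cidr': '0.0.0.0/0'}]
```

Real CSPM tooling evaluates hundreds of such policies against live cloud inventories and maps findings to frameworks like CIS and NIST.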
Posted 3 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Reference # 319011BR Job Type Full Time Your role As a Senior Azure DevOps Engineer, you will work on one of the largest and most complex CRM application builds. You will work closely with Product Management, Software Development, Data Engineering and other teams to develop scalable and innovative CRM solutions. You will be accountable for the design and implementation of technical solutions within WMA and the timely delivery of projects following an agile/scrum SDLC. Your team Are you an enthusiastic technology professional? Are you excited about an enriching career working for one of the finest financial institutions in the world? If so, you are the right person for this role. We are seeking technology and domain experts to join our Dynamics D365 CRM development team. We are responsible for WMA (Wealth Management Americas) client-facing technology applications. You’ll be working in the WMA CRM crew, focusing on building applications used by financial advisors. Our team is dedicated to creating innovative solutions that drive our organization's success. We foster a collaborative and supportive environment, where you can grow and excel in your role. Your expertise Configure and implement Azure DevOps workflows using Azure Pipelines, Azure Repos and Azure Artifacts; establish build, release and configuration management guidelines using industry best practices. Strong practical delivery knowledge of public cloud services and resources (Azure, AKS preferred, Helm, Terraform). Strong practical delivery and administration knowledge of the Kubernetes platform and its ecosystem - service mesh (Istio preferred), network policy tooling (Calico preferred), cross-cluster management tools, Kubernetes policies, etc.
Manage source code version control, maintain the code repository, perform and administer database baselines, improve branching and code-merge best practices, establish process control points, and configure the TFS reporting suite to the custom needs of the development team. Automate the release/deployment process and enforce the established change management process. Implement CI/CD pipelines by managing quality and security policies/gates, designing a release strategy, setting up a release management workflow, and implementing an appropriate deployment pattern using Azure DevOps. Should have strong experience working with build and CI tools like Maven/Gradle/Git; any experience building CI and CD pipelines with GitLab would be an advantage. Should be experienced in developing cloud infrastructure using standard IaC tools like Terraform/Ansible. About Us UBS is the world’s largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. We have a presence in all major financial centers in more than 50 countries. How We Hire We may request you to complete one or more assessments during the application process. Learn more Join us At UBS, we know that it's our people, with their diverse skills, experiences and backgrounds, who drive our ongoing success. We’re dedicated to our craft and passionate about putting our people first, with new challenges, a supportive team, opportunities to grow and flexible working options when possible. Our inclusive culture brings out the best in our employees, wherever they are on their career journey. We also recognize that great work is never done alone. That’s why collaboration is at the heart of everything we do. Because together, we’re more than ourselves.
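The quality and security gates mentioned above can be sketched as a simple decision function: promote a build only when its metrics clear the configured thresholds. The metric names and threshold values here are illustrative, not an Azure DevOps API.

```python
def release_gate(metrics, min_coverage=80.0, max_critical_vulns=0):
    """Return (approved, reasons) for a candidate build."""
    reasons = []
    if metrics.get("coverage", 0.0) < min_coverage:
        reasons.append("coverage below threshold")
    if metrics.get("critical_vulns", 0) > max_critical_vulns:
        reasons.append("critical vulnerabilities present")
    return (not reasons, reasons)

print(release_gate({"coverage": 92.5, "critical_vulns": 0}))  # (True, [])
print(release_gate({"coverage": 70.0, "critical_vulns": 2}))
```

In a real pipeline these checks run as gate tasks between stages, with the reasons surfaced in the release summary rather than returned to a caller.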
We’re committed to disability inclusion and if you need reasonable accommodation/adjustments throughout our recruitment process, you can always contact us. Disclaimer / Policy Statements UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce.
Posted 3 weeks ago
3.0 - 5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
TCS Hiring! Virtual Drive: 31-May-25, 10:00 AM-1:00 PM. Role: DevOps Associate. Experience: 3 to 5 years. Location for the post: Chennai. Candidates who applied in the last 3 months, please refrain from applying. Please read the job description before applying. NOTE: If the skills/profile match and you are interested, please reply to this email attaching your latest updated CV along with the details below: Name: Contact Number: Email ID: Highest Qualification: (e.g. B.Tech/B.E./M.Tech/MCA/M.Sc./MS/BCA/B.Sc./etc.) Current Organization Name: Total IT Experience: Current CTC: Expected CTC: Notice Period: Whether you have worked with TCS - Y/N: Location: Chennai. DevOps tools expertise: GitLab, GitHub, Jenkins, ArgoCD, etc.; artifact management using JFrog; application security testing; public cloud: Google Cloud and AWS; developing CI/CD pipelines. Good to have: infrastructure automation using Terraform, Ansible, Helm, etc.; public cloud Kubernetes; any certification on DevOps tools and platforms in the last 2 years.
Posted 3 weeks ago
4.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
We are looking for a highly skilled, solution-focused Senior DevOps engineer with 4+ years of experience who is passionate about increasing system development and business agility. This may involve software configuration management and deployments, automation, and Infrastructure as Code (IaC), along with experience on diverse platforms like Windows desktop, Linux, web, mobile, etc., and supporting various delivery teams. We put strong emphasis on individual ownership and value people who take pride in working over the full lifecycle of a project. About the Role You will be part of the DevOps team responsible for installing, configuring, automating, and maintaining development and testing environments based on the needs of the Design & Development teams and the accredited Test Facilities. Responsibilities Strong experience in DevOps and CI/CD implementation. At least 3+ years of working experience with Docker, Kubernetes and Helm charts, with an understanding of microservice design and architectural patterns. Proficient in Linux. At least 3+ years of working experience with the infrastructure configuration management tool Ansible. Must have experience with Jenkins, its management, and extensions with other CI platforms and tools. Responsible for enabling teams through automation using Jenkins pipelines. At least 3+ years of working experience in deployment of PostgreSQL, Redis, ELK. Hands-on with Google Cloud Platform & AWS. Plan, configure, deploy, and operate a cloud solution. At least 3+ years of experience with automation of infrastructure and application deployment on GCP and AWS. Ensure performance standards by configuring an auto-scaling solution to meet varying load requirements. Configure access and security, with experience on networking principles and protocols such as IP subnetting, routing, firewall rules, Virtual Private Cloud, Load Balancer, Cloud DNS, Cloud CDN, etc. Good understanding of cloud design considerations and limitations and their impact on pricing. 
Deliver Proofs of Concept for new solutions on the cloud. At least 3+ years of experience with Terraform & Packer. Prior experience working with version control systems (GitHub). Interact with Development, Test, and Customer Success teams to understand, develop, and support the product deployment strategy. Troubleshoot and isolate build/deployment failures caused by code defects. Experience with deployment of .NET Core applications. Qualifications 4+ years of experience in a related field. Required Skills Ansible Jenkins Bash Helm Kubernetes ELK RKE PostgreSQL HAProxy CI/CD Implementation DevOps Preferred Skills Experience with cloud platforms. Familiarity with microservices architecture. (Looking only for immediate joiners or candidates serving notice period)
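The IP subnetting knowledge the posting above asks for can be exercised directly with Python's standard `ipaddress` module; the VPC range below is an arbitrary example, not from any real environment:

```python
import ipaddress

# Carve an example 10.0.0.0/16 VPC into /24 subnets.
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=24))

print(len(subnets))  # 256 subnets
print(subnets[0])    # 10.0.0.0/24
# Membership tests like this are how firewall-rule and routing checks are often scripted:
print(ipaddress.ip_address("10.0.3.25") in subnets[3])  # True
```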
Posted 3 weeks ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About The Role Grade Level (for internal use): 10 S&P Global The Role: Analyst, Platform Engineer The Team You will be joining the Lending Solutions Platform Management team, which is responsible for the build-out, configuration, deployment, scaling, orchestration, and monitoring of supporting infrastructure and our release pipeline for our cloud hosting services for our Loan Franchise products. The Impact Our hosted offerings are designed to provide a reduction in total cost of ownership to our software clients. Hosting WSO products and other software offerings on the AWS platform eliminates the need for our software clients to install, manage, upgrade and monitor our products on their own servers. What’s In It For You You will have the opportunity to work with new technologies as well as a leading cloud platform (AWS). Be exposed to different cultures and ways of thinking, as this is a global team. Collaboration with team members from around the world. Responsibilities: Responsibilities for the Platform Management team include but are not limited to: Partner with Development and Architecture teams to lead and define standards in IaC for software solutions. Automate and streamline our infrastructure deployment. Must have excellent knowledge of “infrastructure as code”. Tools- and automation-based monitoring of production environments to provide proactive alerts on system health and reliability. Good background in Linux and Windows administration. Troubleshooting and resolving root-cause issues in our dev, test and production internal and cloud environments. Key Qualifications 5+ years of experience building enterprise DevOps pipelines in a commercial environment (still hands-on). Experience using Git. Experienced in at least one scripting language such as Python, Bash, JavaScript, PowerShell, Node.js, etc. Experienced in Infrastructure as Code tools such as Terraform, CloudFormation or similar. 
Experienced in Configuration Management tools such as Ansible, Chef, Puppet or similar. Deep knowledge of AWS services, including: EKS/Helm/Flux/Fargate, S3, Lambda, VPC, Route 53, RDS, EC2, Load Balancing, CloudWatch, CloudFront. Soft Skills Displays energy, drive, and stamina. Open-minded, flexible, and willing to adapt to changing situations. Comfortable working with global teams operating across different time zones. Must be an excellent communicator, both written and verbal. Ability to collaborate effectively with overseas teams. About S&P Global Market Intelligence At S&P Global Market Intelligence, a division of S&P Global, we understand the importance of accurate, deep and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction. For more information, visit www.spglobal.com/marketintelligence. What’s In It For You? Our Purpose Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. 
We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our Benefits Include Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. 
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Inclusive Hiring And Opportunity At S&P Global At S&P Global, we are committed to fostering an inclusive workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and equal opportunity, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf IFTECH202.1 - Middle Professional Tier I (EEO Job Group) Job ID: 314274 Posted On: 2025-04-13 Location: Gurgaon, Haryana, India
Posted 3 weeks ago
7.0 - 12.0 years
9 - 14 Lacs
Pune
Work from Office
Systems, a niche software development company, collaborates with some of the best-funded startups and established businesses worldwide. We are seeking an experienced Lead Software Engineer with a minimum of 7 years of experience in designing systems and backend services. In this role, you will evaluate new requirements, develop architecture models, and integrate critical business requirements like high availability, redundancy, and disaster recovery into infrastructure designs. The role requires working with a globally distributed team across the US, Europe, and India. Please visit our website at to know more about us. The email address to apply is Key Responsibilities Design, develop, and implement new products and interfaces. Build a scalable backend in Java and modern technologies in a microservices architecture. Leverage modern platform technologies like Docker, Kubernetes, and related tools. Create technical product documentation and propose product improvements. Collaborate with Product Owners to break down Epics and Stories into actionable tasks. Guide teams in adopting best practices and ensuring high-quality code standards. Perform maintenance tasks, including resolving software deficiencies on existing products. Report key activities and progress to Engineering Managers and Product Owners. Skills Required BE/BTech/MTech (CS/IT or MCA), with an emphasis in Software Engineering, is highly preferable. Strong experience with Java and its frameworks (Java 8 or above). Deep understanding of Spring, Spring Boot, and Hibernate (JPA). Hands-on experience with highly scalable microservices in clustered/multi-node environments. Proficiency with orchestration platforms and tools like Kubernetes and Docker. Experience with NoSQL/SQL databases (e.g., PostgreSQL, MariaDB, MongoDB). Knowledge of networking protocols such as TCP/IP, HTTP, and SSL. Expertise in Linux environments and security best practices. Desired Skills: Experience with Kafka for event-driven architectures. 
Familiarity with Linux shell scripting and Python. Hands-on experience with deployment tooling like Ansible or Helm. Strong ability to structure and present technical information to stakeholders. Fluency in English (both verbal and written). Additional language skills are a plus. Position Benefits Top-notch remuneration and excellent growth opportunities. An excellent, no-nonsense work environment with the very best people to work with. Highly challenging software implementation problems. Hybrid mode: we offered complete work-from-home even before the pandemic.
Posted 3 weeks ago
0.0 - 3.0 years
2 - 5 Lacs
Bengaluru
Work from Office
Key Responsibilities: Deliver engaging and interactive training sessions (24 hours total) based on structured modules. Teach integration of monitoring, logging, and observability tools with machine learning. Guide learners in real-time anomaly detection, incident management, root cause analysis, and predictive scaling. Support learners in deploying tools like Prometheus, Grafana, OpenTelemetry, Neo4j, Falco, and KEDA. Conduct hands-on labs using LangChain, Ollama, Prophet, and other AI/ML frameworks. Help participants set up smart workflows for alert classification and routing using open-source stacks. Prepare learners to handle security, threat detection, and runtime anomaly classification using LLMs. Provide post-training support and mentorship when necessary.
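A minimal sketch of the anomaly-detection idea taught in these sessions, using a plain z-score over a metric series. The latency numbers are made up for illustration; production systems would use streaming estimators or the ML tools named above rather than a batch mean:

```python
from statistics import mean, stdev

def zscore_anomalies(series, threshold=2.0):
    """Flag values whose distance from the mean exceeds `threshold` standard deviations."""
    mu, sigma = mean(series), stdev(series)
    return [x for x in series if abs(x - mu) / sigma > threshold]

latencies_ms = [12, 11, 13, 12, 11, 12, 95]  # the 95 ms spike is the anomaly
print(zscore_anomalies(latencies_ms))  # [95]
```

Note that the spike inflates the standard deviation it is measured against, which is why robust variants (median/MAD) are often preferred in practice.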
Posted 3 weeks ago
5.0 years
0 Lacs
India
On-site
Description Raft, an intelligent logistics platform, is revolutionizing the freight and customs industry through automation and advanced technologies. As a fast-scaling, UK-based tech company with global reach, we're pioneering solutions that empower freight forwarders and customs brokers to operate at new levels of efficiency and precision. Fueled by our Series B funding from renowned investors, we're poised for major growth and innovation. As a Senior Engineer with a focus on all things AI at Raft, you'll be instrumental in shaping the architecture and capabilities of our platform to support cutting-edge features powered by AI. This is not a traditional engineering role - it's a high-impact opportunity to work at the cutting edge of AI and agents in a real product setting. You will be responsible for designing scalable and innovative AI solutions and making them work at enterprise scale. This role is also unique in that you will get exposure to our current platform and customers alongside being involved in an exciting greenfield project, where you will be able to build an AI-native product from scratch. In addition to building advanced software, you'll play a strategic role in driving technical decision-making and mentoring our growing engineering team. This role is for someone who thrives in a fast-paced, ambitious environment and is ready to make an outsized impact on a product used across the globe. What You'll Do: Design and implement AI-powered features using LLMs, MCP and other advanced technologies Create robust, scalable, and maintainable code that adheres to engineering best practices Develop agentic AI systems that can autonomously perform complex tasks and bring humans into the loop at the right time. This will involve thinking about and building systems that balance automation with control. Integrate LLM and AI models into the Raft platform to power new, innovative features at the cutting edge of enterprise-grade AI. 
Work with our existing tech stack and make improvements across our existing models, code and architecture. Drive the evolution of platform features that require complex engineering solutions powered by AI/ML. Be an evangelist for modern AI and the art of the possible within our teams. Implement rigorous testing methodologies for AI systems, including modern evals. Collaborate with product managers, UX designers, and customers to understand pain points and translate them into effective technical solutions Mentor junior engineers and foster a culture of innovation and continuous learning Stay current with the rapidly evolving AI landscape and recommend strategic technology adoption Requirements Brings 5+ years of hands-on experience in software development with a strong focus on Python, supplemented by experience in other programming languages. Proven experience designing and implementing solutions with LLMs like GPT-4, Claude, or open-source models Experience with prompt engineering and LLM fine-tuning techniques Experience building production-ready AI systems that scale reliably in enterprise environments Has deep expertise in designing and maintaining databases, vector stores, etc. and understands the latest trends in database technology, particularly relevant to LLM and AI applications. Is proficient with FastAPI/Starlette and can demonstrate experience in building scalable APIs with Python for AI/ML applications. Has a solid track record in cloud native environments and understands how to architect and implement software libraries that thrive in distributed, multi-cloud settings. Can design and implement a sophisticated logging, monitoring, and alerting infrastructure to ensure high availability and quick troubleshooting of AI/ML systems. Understands and implements best practices in security and data privacy, with a proven ability to secure complex data flows, particularly for LLM/AI applications. 
Has extensive experience with containerisation tools like Docker, Docker Compose, Kubernetes, Helm, and understands the intricacies of deploying these in production, specifically for LLM/AI workloads. Some experience with agentic AI architectures and multi-agent systems is beneficial. Demonstrated ability to balance technical excellence with business requirements and time constraints. Apply Because You Want to... Join a company on the leading edge of logistics technology, competing with industry giants while leveraging cutting-edge AI/ML and backend engineering. Work in a product-driven environment where your contributions shape real-world solutions for a global customer base. Collaborate with stakeholders across industries and continents, gaining unparalleled exposure to the logistics and automation sectors. Thrive in a high-energy, growth-focused environment that pushes you to expand your technical and strategic skill sets. Be part of a diverse, inclusive, and multi-cultural team where innovation and continuous improvement are celebrated.
Posted 3 weeks ago
4.0 - 6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Company: Founded in 2017, CoffeeBeans specializes in offering high-end consulting services in technology, product, and processes. We help our clients attain significant improvement in quality of delivery through impactful product launches and process simplification, and help build competencies that drive business outcomes across industries. The company uses new-age technologies to help its clients build superior products and realize better customer value. We also offer data-driven solutions and AI-based products for businesses operating in a wide range of product categories and service domains. Experience: 4-6 years. Work location: Hyderabad (Hybrid). Responsibilities: You will work with the team to build distributed systems and payment applications at population scale. Responsible for building automations and handling end-to-end deployments. Responsible for adopting best practices across the team for CI/CD. Responsible for standardizing and optimizing the processes. Responsible for building monitoring and observability. Responsible for managing the underlying infrastructure and software stacks. Responsible for adopting a cloud-native application strategy. Skills: Experience in handling production systems. Experience in managing on-prem setups and bare-metal servers. Experience in managing Kubernetes clusters (storage orchestrators, CNIs, Ingress, and other resources). Experience in managing storage orchestrators such as Ceph, Rook Ceph, Charmed Ceph, Longhorn, etc. Experience in managing and optimising resources in Kubernetes clusters. Experience in deploying and managing an observability stack, which includes Elasticsearch, Kibana, Logstash, Fluent Bit, Prometheus, Grafana, etc. Experience in managing software load balancers such as Nginx or HAProxy. Experience in writing shell-based automations. Experience in writing Helm-based deployments. Familiarity with writing Terraform scripts, Ansible, ArgoCD, etc. 
Familiarity with writing CI/CD pipelines and creating deployment automations.
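Deployment automations like those listed above typically need retry logic for transient failures. A minimal, generic sketch follows; the function names are illustrative only, not from any actual codebase:

```python
import time

def retry(fn, attempts=3, base_delay=0.1):
    """Call fn, retrying with exponential backoff; re-raise after the last attempt."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** i)  # 0.1s, 0.2s, 0.4s, ...

# Simulate a deploy step that fails twice before succeeding.
calls = {"n": 0}
def flaky_deploy():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "deployed"

print(retry(flaky_deploy))  # deployed
```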
Posted 3 weeks ago
8.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Role Description Roles & Responsibilities GitHub Actions & CI/CD Workflows (Primary Focus) Design, develop, and maintain scalable CI/CD pipelines using GitHub Actions. Create reusable and modular workflow templates using composite actions and reusable workflows. Manage and optimize GitHub self-hosted runners, including autoscaling and hardening. Monitor and enhance CI/CD performance with caching, parallelism, and proper dependency management. Review and analyze existing Azure DevOps pipeline templates. Migrate Azure DevOps YAML pipelines to GitHub Actions, adapting tasks to equivalent GitHub workflows. Azure Kubernetes Service (AKS) Deploy and manage containerized workloads on AKS. Implement cluster and pod-level autoscaling, ensuring performance and cost-efficiency. Ensure high availability, security, and networking configurations for AKS clusters. Automate infrastructure provisioning using Terraform or other IaC tools. Azure DevOps Design and build scalable YAML-based Azure DevOps pipelines. Maintain and support Azure Pipelines for legacy or hybrid CI/CD environments. ArgoCD & GitOps Implement and manage GitOps workflows using ArgoCD. Configure and manage ArgoCD applications to sync AKS deployments from Git repositories. Enforce secure, auditable, and automated deployment strategies via GitOps. Collaboration & Best Practices Collaborate with developers and platform engineers to integrate DevOps best practices across teams. Document workflow standards, pipeline configurations, infrastructure setup, and runbooks. Promote observability, automation, and DevSecOps principles throughout the lifecycle. Must-Have Skills 8+ years of overall IT experience, with at least 5+ years in DevOps roles. 3+ years hands-on experience with GitHub Actions (including reusable workflows, composite actions, and self-hosted runners). 2+ years of experience with AKS, including autoscaling, networking, and security. Strong proficiency in CI/CD pipeline design and automation. 
Experience with ArgoCD and GitOps workflows. Hands-on with Terraform, ARM, or Bicep for IaC. Working knowledge of Azure DevOps pipelines and YAML configurations. Proficient in Docker, Bash, and at least one scripting language (Python preferred). Experience in managing secure and auditable deployments in enterprise environments. Good-to-Have Skills Exposure to monitoring and observability tools (e.g., Prometheus, Grafana, ELK stack). Familiarity with service meshes like Istio or Linkerd. Experience with secrets management (e.g., HashiCorp Vault, Azure Key Vault). Understanding of RBAC, OIDC, and SSO integrations in Kubernetes environments. Knowledge of Helm and custom chart development. Certifications in Azure, Kubernetes, or DevOps practices. Skills GitHub Actions & CI/CD, Azure Kubernetes Service, ArgoCD & GitOps, DevOps
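The Azure-DevOps-to-GitHub-Actions migration described in this posting amounts to mapping pipeline keys between the two YAML dialects. Below is a deliberately tiny, hypothetical sketch of that mapping for branch triggers only; real migrations must also handle stages, jobs, variables, service connections, and many more keys:

```python
# Hypothetical, highly simplified: translate an Azure DevOps branch trigger
# into the equivalent GitHub Actions "on: push" block.
def migrate_trigger(azdo_pipeline: dict) -> dict:
    branches = azdo_pipeline.get("trigger", [])
    return {"on": {"push": {"branches": branches}}}

print(migrate_trigger({"trigger": ["main", "release/*"]}))
# {'on': {'push': {'branches': ['main', 'release/*']}}}
```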
Posted 3 weeks ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About TwoSD (2SD Technologies Limited) TwoSD is the innovation engine of 2SD Technologies Limited, a global leader in product engineering, platform development, and advanced IT solutions. Backed by two decades of leadership in technology, our team brings together strategy, design, and data to craft transformative solutions for global clients. Our culture is built around cultivating talent, curiosity, and collaboration. Whether you're a career technologist, a self-taught coder, or a domain expert with a passion for real-world impact, TwoSD is where your journey accelerates. Join us and thrive. At 2SD Technologies, we push past the expected—with insight, integrity, and a passion for making things better. Role Overview We are seeking a DevOps / Cloud Engineer with strong experience in AWS (preferred; Azure/GCP also considered) to build, deploy, and optimize cloud-native applications and infrastructure. This is a full-time position based in Gurugram, India, focused on accelerating deployment pipelines, improving reliability, and implementing security and cost-efficiency best practices. 
Key Responsibilities Provision, monitor, and maintain cloud infrastructure (primarily AWS) using IaC (Terraform, CloudFormation) Design and manage scalable CI/CD pipelines using GitHub Actions, Jenkins, or similar tools Automate deployment, scaling, and monitoring of containerized applications (ECS, EKS) Implement logging, observability, and alerting tools for all environments Collaborate with developers, architects, and security teams to streamline DevSecOps workflows Perform regular security reviews, patching, and hardening of cloud and container infrastructure Required Qualifications Bachelor’s degree in Computer Science, Engineering, or equivalent experience 3+ years of DevOps or Cloud Engineering experience Hands-on with Docker, Kubernetes, Helm, and Infrastructure as Code Proficient in AWS services like EC2, S3, Lambda, RDS, VPC, IAM, CloudWatch Experience with CI/CD tools (GitHub Actions, Jenkins, GitLab CI) Strong scripting skills (Python, Bash, or Shell) Preferred Qualifications AWS Certifications (e.g., Solutions Architect Associate, DevOps Engineer) Experience with multi-cloud or hybrid cloud setups Familiarity with GitOps workflows using ArgoCD or Flux Experience with security tools like HashiCorp Vault, AWS Secrets Manager Exposure to cost optimization tools and FinOps best practices Core Competencies Cloud Infrastructure Design & Monitoring Automation & Infrastructure as Code Continuous Integration / Delivery (CI/CD) DevSecOps & Compliance Practices Problem Solving & Debugging Under Pressure Tools & Platforms Cloud: AWS (ECS, EKS, Lambda, CloudFormation) IaC: Terraform, AWS CDK Containers: Docker, Kubernetes, Helm CI/CD: GitHub Actions, Jenkins, GitLab CI Monitoring: Prometheus, Grafana, ELK Stack, CloudWatch Security: AWS IAM, Secrets Manager, Vault Scripting: Bash, Python, Shell Version Control & PM: Git, Jira, Notion, Slack Why Join TwoSD? At TwoSD , innovation isn’t a department—it’s a mindset. 
Here, your voice matters, your expertise is valued, and your growth is supported by a collaborative culture that blends mentorship with autonomy. With access to cutting-edge tools, meaningful projects, and a global knowledge network, you’ll do work that counts—and evolve with every challenge. DevOps / Cloud Engineer Location: Gurugram, India (Onsite/Hybrid) Company: TwoSD (2SD Technologies Limited) Industry: Cloud Engineering / DevOps Employment Type: Permanent Date Posted: 26 May 2025 How to Apply To apply, send your updated resume and relevant links (portfolio/GitHub) to hr@2sdtechnologies.com or visit our LinkedIn careers page.
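The autoscaling responsibility in the posting above (ECS, EKS) rests on a simple rule. Kubernetes' Horizontal Pod Autoscaler, for instance, computes desired replicas as ceil(currentReplicas × currentMetric / targetMetric), which is easy to sanity-check in Python (the pod counts and CPU figures below are arbitrary examples):

```python
import math

def desired_replicas(current_replicas: int, current_metric: float, target_metric: float) -> int:
    """Kubernetes HPA scaling rule: desired = ceil(current * currentMetric / targetMetric)."""
    return math.ceil(current_replicas * current_metric / target_metric)

# 4 pods averaging 90% CPU against a 60% target scale out to 6 pods.
print(desired_replicas(4, 90, 60))  # 6
```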
Posted 3 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Backend Developer (Python). Desktop: Python + Qt Framework. Git, Kubernetes, Docker, Helm. Clean code principles. Agile development. Testing frameworks and creation of test reports.
Posted 3 weeks ago
13.0 - 18.0 years
45 - 50 Lacs
Karnataka, Tamil Nadu
Work from Office
Expertise in Identity and MFA concepts, with hands-on experience in any MFA product. Expertise in leading and mentoring a team to deliver on time. Expertise in Java, Spring, Spring Boot, microservices, and cloud. Experienced in Helm / Kubernetes / Docker / containerisation. Experience working in an Agile/Scrum environment. Experience in issue analysis, troubleshooting, and providing solutions. Experience in the OpenShift platform. Good communicator and team player. Self-organised, with the ability to figure out integrations of components and to innovate for best practice. Providing regular updates to leads. Oracle DB experience (SQL, stored procedures, capacity planning, etc.). Experience in any MFA product.
Posted 3 weeks ago
7.0 - 12.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description Experience: 7 to 12 years. As part of your responsibilities, you will work on the following: - Evaluate sizing and infrastructure requirements for new use cases. - Set up self-service deployment pipelines for AI applications. - Ensure reproducibility of deployments in both Non-Prod and Production environments. - Make sure that all applications are properly monitored and alerting is in place. - Evolve in an environment where innovation and lean processes are praised, straightforward communication is encouraged, and peers understand the meaning of teaming up. - Work with a team of colleagues who are ready to collaborate and to share their experience. Mandatory: · Knowledge of the Python ecosystem. · Experience with HTTP REST APIs, with a focus on Django. · Experience with Git (version control), e.g. GitLab, GitLab CI. · Experience in DevOps/Ops. · Linux operating system experience. · Experience in containerization (Docker, Podman). · LLM operations. · Cloud experience (e.g. IBM Cloud / Azure). Preferable: · Kubernetes/Helm · Familiarity with code quality gating (Sonar, Nexus, Fortify) · Ansible · Domino Data Lab, Jupyter · Artifactory · Kafka · SQL/Postgres · Terraform · Dynatrace
Posted 3 weeks ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Azure DevOps Engineer with Parnasoft Technologies Pvt Ltd (deployed at AVEVA, a product-based company). Please read the JD. Job Title: Azure DevOps Engineer Location: Hyderabad, India (Hybrid – 3 days onsite at WaveRock SEZ) Employment Type: Full-time, Permanent Company: Parnasoft Technologies Pvt. Ltd. (Deployed at AVEVA) Client: AVEVA (www.aveva.com) – Global leader in industrial software About the Role: We are hiring a DevOps Engineer to work as a full-time, permanent employee of Parnasoft Technologies, assigned to our client AVEVA, at their Hyderabad office in WaveRock SEZ. In this role, you will be part of AVEVA’s global DevOps initiatives, helping build and manage scalable, secure, and reliable infrastructure on Microsoft Azure. You will play a key role in implementing CI/CD pipelines, automating infrastructure, managing Kubernetes clusters, and contributing to system reliability and cost optimization strategies in a hybrid work environment. Key Responsibilities Design, deploy, and manage scalable infrastructure on Microsoft Azure Build and maintain Kubernetes clusters using AKS Automate provisioning, configuration, and deployments Develop and manage CI/CD pipelines Monitor system health and optimize performance Collaborate with DevOps, development, and security teams Maintain documentation and enforce configuration standards Support Azure-based deployments and assist with ongoing projects Stay current with industry trends, tools, and evolving DevOps practices Required Skills and Experience Bachelor’s Degree in Computer Science, Engineering, or a related field. 
- Minimum 6 years of experience in DevOps, cloud operations, infrastructure, or a similar role
- Experience with:
  - CI/CD tools (e.g., Azure DevOps, Jenkins)
  - Scripting (PowerShell, Bash, Python)
  - Azure CLI, ARM templates, Bicep
  - Docker, Kubernetes, kubectl, Helm
  - Azure services such as Front Door, Key Vault, Storage, Databases, DFC, Policies
- Strong understanding of:
  - Networking and security best practices in the cloud
  - Cloud cost optimization
- Excellent communication and problem-solving skills
- Ability to work independently and in teams

Preferred Qualifications:
- Microsoft Azure certifications (Administrator, DevOps Engineer, etc.)
- Familiarity with Agile or SAFe methodologies
- Experience with ISO 27001 or other security frameworks

✅ Must-Have Technical Skills:
- Microsoft Azure – strong hands-on experience with cloud infrastructure and services
- CI/CD pipelines – using Azure DevOps, Jenkins, or similar tools
- Scripting – PowerShell, Bash, or Python
- Infrastructure as Code – ARM, Bicep, Azure CLI
- Containers & Kubernetes – Docker, AKS (Azure Kubernetes Service), kubectl, Helm
- Azure services – Azure Front Door, Key Vault, Storage Accounts, Databases, DFC (Data Factory), Policies
- Cloud basics – cloud networking, security, and cost optimization in Azure; understanding of best practices

✅ Must-Have Soft Skills:
- Problem-solving
- Teamwork
- Clear communication

✅ Nice-to-Have Skills:
- Azure certifications (e.g., Azure DevOps Engineer Expert, Azure Administrator)
- Agile/Scrum or SAFe – experience working in Agile development environments
- ISO 27001 or security compliance knowledge – familiarity with IT security frameworks

Why Join Parnasoft?
- Full-time permanent role with Parnasoft
- Long-term stability with no bench time or project continuity concerns
- Exposure to global projects with AVEVA
- Clear path for growth in cloud and DevOps domains
- Work in a modern tech environment with Azure, Kubernetes, and DevOps tools
- Collaborative and innovation-driven work environment
- Hybrid work with on-site engagement at a world-class campus (WaveRock SEZ, Hyderabad)

About Parnasoft:
Founded in 2021, Parnasoft is headquartered in Visakhapatnam, Andhra Pradesh, and led by an IIT graduate. As a trusted technology partner, we specialize in software development, GIS, and automation solutions, emphasizing quality, innovation, and client success to empower businesses in the digital era. Visit: www.parnasoft.com. Parnasoft is proud to be an R&D partner to AVEVA.

About AVEVA:
AVEVA is a global leader in industrial software, driving digital transformation across industries. Joining this role at AVEVA's Hyderabad office (WaveRock SEZ) lets you contribute to their global mission. Explore more at www.aveva.com. As a product-based company, AVEVA ensures project continuity with no bench concerns. This long-term role with Parnasoft, working at AVEVA, offers stability and a structured career path in cloud and DevOps.

Application Process:
To apply, please send your CV along with the following details to shivkumarsb@parnasoft.tech:
- Full Name:
- Current Location:
- Total IT Experience (Years):
- Azure Cloud Experience (specify):
- CI/CD Tools Experience (Yes/No, specify):
- Scripting Languages Known (PowerShell/Bash/Python) (Years):
- Experience with Kubernetes/AKS (Yes/No) (Years):
- Experience with Azure Services (Front Door, Key Vault, Storage, DFC, etc.):
- Current Salary:
- Expected Salary:
- Notice Period:
- Willingness to work at AVEVA's Hyderabad office (WaveRock SEZ) (Yes/No):
- Availability for Hybrid work (3 days/week):
- Reason for Job Change:

📩 Apply now!
Posted 3 weeks ago
3.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Ready to revolutionize the future of software development? At IgniteTech, we're not just keeping pace with the AI revolution - we're leading it. In an industry where 30% of projects still miss their deadlines due to traditional development bottlenecks, we're crafting a new paradigm that merges cutting-edge GenAI with software engineering excellence.

This isn't your typical software engineering role. As our AI Software Engineer, you'll be pioneering the integration of artificial intelligence into the very fabric of software development. Imagine being at the helm of innovations that don't just incrementally improve but fundamentally transform how software is conceived, architected, and delivered. If you're someone who gets excited about emerging technologies, thrives on pushing boundaries, and wants to be part of a team that's redefining industry standards, we want to talk to you. This role goes beyond conventional coding - it's about shaping the future of software development through the lens of artificial intelligence.

What You Will Be Doing
Your mission will be twofold:
- Architect the Future: Create groundbreaking AI-powered systems that revolutionize software development automation, from intelligent architecture generation to sophisticated predictive coding frameworks.
- Stay at the Cutting Edge: Immerse yourself in the AI technology landscape through intensive R&D, knowledge sharing at prestigious conferences, and active participation in both academic and professional technology communities.
What You Won't Be Doing
To be clear, this role transcends traditional development work:
- You won't be bogged down by routine maintenance tasks or basic debugging assignments
- This isn't about conventional software development or implementing standard features - we're focused on AI-driven innovation and transformation

AI Software Engineer Key Responsibilities
Your core mission is to leverage your engineering prowess to:
- Drive transformative improvements in development velocity
- Minimize human error through AI-powered solutions
- Elevate code quality standards
- Accelerate product time-to-market
- Enhance overall customer satisfaction through superior software delivery

Basic Requirements
To succeed in this role, you'll need:
- A proven track record of 3+ years driving impactful engineering initiatives
- Hands-on experience with modern AI coding assistants (GitHub Copilot, Cursor.sh, v0.dev)
- Demonstrated success in implementing Generative AI solutions
- Practical experience working with various LLM platforms (GPT, Claude, Mistral) to address real-world business challenges

About IgniteTech
If you want to work hard at a company where you can grow and be a part of a dynamic team, join IgniteTech! Through our portfolio of leading enterprise software solutions, we ignite business performance for thousands of customers globally. We're doing it in an entirely remote workplace that is focused on building teams of top talent and operating in a model that provides challenging opportunities and personal flexibility.

A career with IgniteTech is challenging and fast-paced. We are always looking for energetic and enthusiastic employees to join our world-class team. We offer opportunities for personal contribution and promote career development. IgniteTech is an Affirmative Action, Equal Opportunity Employer that values the strength that diversity brings to the workplace.

There is so much to cover for this exciting role, and space here is limited.
Hit the Apply button if you found this interesting and want to learn more. We look forward to meeting you!

Working with us
This is a full-time (40 hours per week), long-term position. The position is immediately available and requires entering into an independent contractor agreement with Crossover as a Contractor of Record. The compensation level for this role is $50 USD/hour, which equates to $100,000 USD/year assuming 40 hours per week and 50 weeks per year. The payment period is weekly. Consult www.crossover.com/help-and-faqs for more details on this topic.

Crossover Job Code: LJ-5269-IN-Pune-AISoftwareEngi.004
Posted 3 weeks ago
Helm is a popular package manager for Kubernetes that simplifies the deployment and management of applications. In India, the demand for professionals with expertise in Helm is on the rise as more companies adopt Kubernetes for their container orchestration needs.
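At its core, a Helm chart is a set of Kubernetes manifest templates rendered against a values file, so one chart can produce per-environment deployments. As a rough sketch of that values-substitution idea (using Python's `string.Template` purely for illustration; Helm actually uses Go templates with `{{ .Values.* }}` syntax, and the manifest and values below are made-up examples):

```python
from string import Template

# Stand-in for a chart template (Helm would write e.g. {{ .Values.image }}).
manifest_template = Template("""\
apiVersion: apps/v1
kind: Deployment
metadata:
  name: $release-web
spec:
  replicas: $replicas
  template:
    spec:
      containers:
        - name: web
          image: $image
""")

# Stand-in for values.yaml; a real chart would keep one of these per environment.
values = {"release": "demo", "replicas": 3, "image": "nginx:1.27"}

# Rendering the template against the values yields a concrete manifest,
# analogous to what `helm template` or `helm install` produces.
manifest = manifest_template.substitute(values)
print(manifest)
```

Swapping in a different values mapping (say, more replicas for production) regenerates the manifest without touching the template, which is the core convenience Helm brings to Kubernetes deployments.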
The average salary for Helm professionals in India varies by experience level. Entry-level positions can expect around INR 6-8 lakhs per annum, while experienced professionals can command salaries upwards of INR 15 lakhs per annum.
Typically, a career in Helm progresses as follows:
- Junior Helm Engineer
- Helm Engineer
- Senior Helm Engineer
- Helm Architect
- Helm Specialist
- Helm Consultant
In addition to proficiency in Helm, professionals in this field are often expected to have knowledge of:
- Kubernetes
- Docker
- Containerization
- DevOps practices
- Infrastructure as Code (IaC)
As the demand for Helm professionals continues to grow in India, it is important for job seekers to stay updated on the latest trends and technologies in the field. By honing your skills and preparing thoroughly for interviews, you can position yourself as a valuable asset to organizations looking to leverage Helm for their Kubernetes deployments. Good luck on your job search!