10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About the role:
Want to be on a team that is full of results-driven individuals who are constantly seeking to innovate? Want to make an impact? At SailPoint, our Data Platform team does just that. SailPoint is seeking a Senior Staff Data/Software Engineer to help build robust data ingestion and processing systems to power our data platform. This role is a critical bridge between teams; it requires excellent organization and communication, as you will coordinate work across multiple engineers and projects. We are looking for well-rounded engineers who are passionate about building and delivering reliable, scalable data pipelines. This is a unique opportunity to build something from scratch while having the backing of an organization with the muscle to take it to market quickly, with a very satisfied customer base.

Responsibilities :
Spearhead the design and implementation of ELT processes, especially focused on extracting data from and loading data into various endpoints, including RDBMS, NoSQL databases, and data warehouses.
Develop and maintain scalable data pipelines for both stream and batch processing, leveraging JVM-based languages and frameworks.
Collaborate with cross-functional teams to understand diverse data sources and environment contexts, ensuring seamless integration into our data ecosystem.
Utilize the AWS service stack wherever possible to implement lean design solutions for data storage, data integration, and data streaming problems.
Develop and maintain workflow orchestration using tools like Apache Airflow.
Stay abreast of emerging technologies in the data engineering space, proactively incorporating them into our ETL processes.
Organize work from multiple Data Platform teams and customers with other Data Engineers.
Communicate status, progress, and blockers of active projects to Data Platform leaders.
Thrive in an environment with ambiguity, demonstrating adaptability and problem-solving skills.

Qualifications :
BS in computer science or a related field.
10+ years of experience in data engineering or a related field.
Demonstrated system-design experience orchestrating ELT processes targeting data.
Excellent communication skills.
Demonstrated ability to internalize business needs and drive execution from a small team.
Excellent organization of work tasks and of the status of new and in-flight tasks, including impact analysis of new work.
Strong understanding of Python.
Good understanding of Java.
Strong understanding of SQL and data modeling.
Familiarity with Airflow.
Hands-on experience with at least one streaming or batch processing framework, such as Flink or Spark.
Hands-on experience with containerization platforms such as Docker and container orchestration tools like Kubernetes.
Proficiency in the AWS service stack.
Experience with DBT, Kafka, Jenkins, and Snowflake.
Experience leveraging tools such as Kustomize, Helm, and Terraform for implementing infrastructure as code.
Strong interest in staying ahead of new technologies in the data engineering space.
Comfortable working in ambiguous team situations, showcasing adaptability and drive in solving novel problems in the data engineering space.

Preferred :
Experience with AWS.
Experience with Continuous Delivery.
Experience instrumenting code for gathering production performance metrics.
Experience working with a data catalog tool (e.g., Atlan).

What success looks like in the role
Within the first 30 days you will:
Onboard into your new role, get familiar with our product offering and technology, proactively meet peers and stakeholders, and set up your test and development environment.
Seek to deeply understand business problems and common engineering challenges.
Learn the skills and abilities of your teammates and align expertise with available work.

By 90 days:
Proactively collaborate on, discuss, debate, and refine ideas, problem statements, and software designs with different (sometimes many) stakeholders, architects, and members of your team.
Increase team velocity and contribute to maturing and delivering the Data Platform vision.

By 6 months:
Collaborate with Product Management and the Engineering Lead to estimate and deliver small- to medium-complexity features more independently.
Occasionally serve as a debugging and implementation expert during escalations of system issues that have evaded the ability of less experienced engineers to solve in a timely manner.
Share support of critical team systems by participating in calls with customers, learning the characteristics of currently running systems, and participating in improvements.
Engage with team members, providing them with challenging work and building cross-skill expertise.
Plan project support and execution with peers and Data Platform leaders.

SailPoint is an equal opportunity employer, and we welcome all qualified candidates to apply to join our team. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, protected veteran status, or any other category protected by applicable law. Alternative methods of applying for employment are available to individuals unable to submit an application through this site because of a disability. Contact hr@sailpoint.com or mail to 11120 Four Points Dr, Suite 100, Austin, TX 78726, to discuss reasonable accommodations.
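As context for the ELT responsibilities in the posting above: the defining feature of ELT (as opposed to ETL) is that raw data is loaded into the warehouse first and transformed there. A minimal sketch of that ordering, using sqlite3 as a stand-in for both the source RDBMS and the warehouse; all table and column names are illustrative, not taken from the posting.

```python
# Minimal ELT sketch: extract raw rows from a source system, land them
# unmodified in a warehouse staging table, then transform inside the
# warehouse itself. sqlite3 stands in for both endpoints; the orders/
# stg_orders tables are illustrative only.
import sqlite3

def extract(source: sqlite3.Connection) -> list:
    """Pull raw rows from the source system."""
    return source.execute("SELECT id, amount FROM orders").fetchall()

def load(warehouse: sqlite3.Connection, rows: list) -> None:
    """Land raw rows in a staging table before any transformation."""
    warehouse.execute("CREATE TABLE IF NOT EXISTS stg_orders (id INTEGER, amount REAL)")
    warehouse.executemany("INSERT INTO stg_orders VALUES (?, ?)", rows)

def transform(warehouse: sqlite3.Connection) -> float:
    """Transform in the warehouse (here: a trivial aggregate)."""
    return warehouse.execute("SELECT SUM(amount) FROM stg_orders").fetchone()[0]

# Illustrative source data.
source = sqlite3.connect(":memory:")
source.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
source.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 10.0), (2, 32.0)])

warehouse = sqlite3.connect(":memory:")
load(warehouse, extract(source))
total = transform(warehouse)
print(total)  # 42.0
```

In a production pipeline each step would be an Airflow task and the endpoints real RDBMS/warehouse connections, but the load-before-transform shape is the same.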
Posted 1 day ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
SailPoint is the leader in identity security for the cloud enterprise. Our identity security solutions secure and enable thousands of companies worldwide, giving our customers unmatched visibility into the entirety of their digital workforce, ensuring workers have the right access to do their job – no more, no less. Built on a foundation of AI and ML, our Identity Security Cloud Platform delivers the right level of access to the right identities and resources at the right time – matching the scale, velocity, and changing needs of today's cloud-oriented, modern enterprise.

About the role:
Want to be on a team that is full of results-driven individuals who are constantly seeking to innovate? Want to make an impact? At SailPoint, our Data Platform team does just that. SailPoint is seeking a Senior Data/Software Engineer to help build robust data ingestion and processing systems to power our data platform. We are looking for well-rounded engineers who are passionate about building and delivering reliable, scalable data pipelines. This is a unique opportunity to build something from scratch while having the backing of an organization with the muscle to take it to market quickly, with a very satisfied customer base.

Responsibilities :
Spearhead the design and implementation of ELT processes, especially focused on extracting data from and loading data into various endpoints, including RDBMS, NoSQL databases, and data warehouses.
Develop and maintain scalable data pipelines for both stream and batch processing, leveraging JVM-based languages and frameworks.
Collaborate with cross-functional teams to understand diverse data sources and environment contexts, ensuring seamless integration into our data ecosystem.
Utilize the AWS service stack wherever possible to implement lean design solutions for data storage, data integration, and data streaming problems.
Develop and maintain workflow orchestration using tools like Apache Airflow.
Stay abreast of emerging technologies in the data engineering space, proactively incorporating them into our ETL processes.
Thrive in an environment with ambiguity, demonstrating adaptability and problem-solving skills.

Qualifications :
BS in computer science or a related field.
5+ years of experience in data engineering or a related field.
Demonstrated system-design experience orchestrating ELT processes targeting data.
Must be willing to work 4 overlapping hours with US time zones; you will work closely with US-based managers and engineers.
Hands-on experience with at least one streaming or batch processing framework, such as Flink or Spark.
Hands-on experience with containerization platforms such as Docker and container orchestration tools like Kubernetes.
Proficiency in the AWS service stack.
Experience with DBT, Kafka, Jenkins, and Snowflake.
Experience leveraging tools such as Kustomize, Helm, and Terraform for implementing infrastructure as code.
Strong interest in staying ahead of new technologies in the data engineering space.
Comfortable working in ambiguous team situations, showcasing adaptability and drive in solving novel problems in the data engineering space.

Preferred :
Experience with AWS.
Experience with Continuous Delivery.
Experience instrumenting code for gathering production performance metrics.
Experience working with a data catalog tool (e.g., Atlan or Alation).

What success looks like in the role
Within the first 30 days you will:
Onboard into your new role, get familiar with our product offering and technology, proactively meet peers and stakeholders, and set up your test and development environment.
Seek to deeply understand business problems and common engineering challenges, and propose software architecture designs that solve them elegantly by abstracting useful common patterns.
By 90 days:
Proactively collaborate on, discuss, debate, and refine ideas, problem statements, and software designs with different (sometimes many) stakeholders, architects, and members of your team.
Take a committed approach to prototyping and co-implementing systems alongside less experienced engineers on your team – there's no room for ivory towers here.

By 6 months:
Collaborate with Product Management and the Engineering Lead to estimate and deliver small- to medium-complexity features more independently.
Occasionally serve as a debugging and implementation expert during escalations of system issues that have evaded the ability of less experienced engineers to solve in a timely manner.
Share support of critical team systems by participating in calls with customers, learning the characteristics of currently running systems, and participating in improvements.

SailPoint is an equal opportunity employer and we welcome everyone to our team. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or veteran status.
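The qualifications above name Flink and Spark; at their core, such frameworks bucket event streams into time windows and aggregate per window. A framework-free sketch of a tumbling-window count (the event names and the 10-second window size are illustrative):

```python
# Tumbling-window count: the core abstraction behind stream processors
# such as Flink or Spark Structured Streaming, shown in plain Python.
# Events are (timestamp_seconds, key) pairs; windows are fixed-size and
# non-overlapping.
from collections import defaultdict

def tumbling_window_counts(events, window_secs=10):
    """Group (timestamp, key) events into fixed windows and count per key."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        window_start = (ts // window_secs) * window_secs  # floor to window
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}

events = [(1, "login"), (4, "login"), (9, "click"), (12, "login"), (19, "click")]
result = tumbling_window_counts(events)
print(result)
# {0: {'login': 2, 'click': 1}, 10: {'login': 1, 'click': 1}}
```

A real framework adds what this sketch omits: out-of-order event handling (watermarks), state checkpointing, and distribution across workers.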
Posted 2 days ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
The Opportunity
We are seeking a senior software engineer to undertake a range of feature development tasks that continue the evolution of our DMP Streaming product. You will demonstrate the required potential and technical curiosity to work on software that utilizes a range of leading-edge technologies and integration frameworks. Given your depth of experience, we also want you to technically guide more junior members of the team, instilling good engineering practices and inspiring them to grow.

What You'll Contribute
Implement product changes, undertaking detailed design, programming, unit testing, and deployment as required by our SDLC process.
Investigate and resolve reported software defects across supported platforms.
Work in conjunction with product management to understand business requirements and convert them into effective software designs that will enhance the current product offering.
Produce component specifications and prototypes as necessary.
Provide realistic and achievable project estimates for the creation and development of solutions; this information will form part of a larger release delivery plan.
Develop and test software components of varying size and complexity.
Design and execute unit, link, and integration test plans, and document test results. Create test data and environments as necessary to support the required level of validation.
Work closely with the quality assurance team and assist with integration testing, system testing, acceptance testing, and implementation.
Produce relevant system documentation.
Participate in peer review sessions to ensure ongoing quality of deliverables.
Validate other team members' software changes, test plans, and results.
Maintain and develop industry knowledge, skills, and competencies in software development.

What We're Seeking
A Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
10+ years of Java software development experience within an industry setting.
Ability to work in both Windows and UNIX/Linux operating systems.
Detailed understanding of software and testing methods.
Strong foundation and grasp of design models and database structures.
Proficient in Kubernetes, Docker, and Kustomize.
Exposure to the following technologies: Apache Storm, MySQL or Oracle, Kafka, Cassandra, OpenSearch, and API (REST) development.
Familiarity with Eclipse, Subversion, and Maven.
Ability to lead and manage others independently on major feature changes.
Excellent communication skills, with the ability to articulate information clearly with architects and discuss strategy and requirements with team members and the product manager.
Quality-driven work ethic with meticulous attention to detail.
Ability to function effectively in a geographically diverse team.
Ability to work within a hybrid Agile methodology.
Understanding of the design and development approaches required to build a scalable infrastructure/platform for large amounts of data ingestion, aggregation, integration, and advanced analytics.
Experience developing and deploying applications into AWS or a private cloud.
Exposure to any of the following: Hadoop, JMS, Zookeeper, Spring, JavaScript, Angular, UI development.

Our Offer to You
An inclusive culture strongly reflecting our core values: Act Like an Owner, Delight Our Customers and Earn the Respect of Others.
The opportunity to make an impact and develop professionally by leveraging your unique strengths and participating in valuable learning experiences.
Highly competitive compensation, benefits and rewards programs that encourage you to bring your best every day and be recognized for doing so.
An engaging, people-first work environment offering work/life balance, employee resource groups, and social events to promote interaction and camaraderie.
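The posting above repeatedly emphasizes unit test plans and test data. As a minimal sketch of that practice with Python's standard unittest module; the deduplicate function under test is hypothetical, not part of the DMP Streaming product:

```python
# Unit-test sketch in the spirit of "design and execute unit test plans,
# and document test results". The function under test (deduplicate) is
# a hypothetical example, chosen because duplicate suppression is a
# common concern in streaming pipelines.
import unittest

def deduplicate(records):
    """Drop duplicate records while preserving first-seen order."""
    seen, out = set(), []
    for r in records:
        if r not in seen:
            seen.add(r)
            out.append(r)
    return out

class DeduplicateTest(unittest.TestCase):
    def test_removes_duplicates_preserving_order(self):
        self.assertEqual(deduplicate([3, 1, 3, 2, 1]), [3, 1, 2])

    def test_empty_input(self):
        self.assertEqual(deduplicate([]), [])

# Create test data, run the plan, and capture the result object so the
# outcome can be inspected programmatically.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DeduplicateTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The same structure scales to the link- and integration-test plans the posting mentions, with test fixtures standing in for external systems.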
Posted 2 days ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Banyan Software provides the best permanent home for successful enterprise software companies, their employees, and customers. We are on a mission to acquire, build and grow great enterprise software businesses all over the world that have dominant positions in niche vertical markets. In recent years, Banyan was named the #1 fastest-growing private software company in the US on the Inc. 5000 and amongst the top 10 fastest-growing companies by the Deloitte Technology Fast 500. Founded in 2016 with a permanent capital base set up to preserve the legacy of founders, Banyan focuses on a buy-and-hold-for-life strategy for growing software companies that serve specialized vertical markets.

About SmartDocuments
Are you ready for the next step in your career as a Senior Software Developer? At SmartDocuments, you will work in a multidisciplinary team on innovative software solutions. With room for initiative, the latest technologies, and an Agile work environment, you will actively contribute to the development of our products.

Your Role as a Lead DevOps Engineer
We're looking for an experienced DevOps Engineer to help build, automate, and maintain both our SaaS cloud infrastructure and on-premise client installations. You'll work closely with development teams to implement robust CI/CD pipelines, manage Kubernetes deployments, and ensure security across our microservices architecture in multiple environments, with a focus on search, AI, and vector database technologies.

What Will You Do?
Design and implement automated CI/CD pipelines for both cloud and on-premise environments.
Develop and maintain automated installation and update processes for on-premise client deployments.
Manage SaaS infrastructure and maintain Kubernetes clusters across multiple environments.
Deploy and support Elasticsearch clusters and vector database solutions.
Set up and manage Large Language Model (LLM) deployments using Azure AI services.
Implement and maintain authentication systems across microservices and platforms.
Provide support for PostgreSQL database infrastructure, including performance tuning and backup management.
Monitor system health and performance across distributed systems using observability tools.
Apply security best practices across all deployment models: cloud, hybrid, and on-premise.
Collaborate with development teams to streamline and optimize deployment workflows and automation strategies.

Must-haves
Version Control & CI/CD: Proficient in using Bitbucket for source control and managing CI/CD pipelines.
Identity & Access Management: Hands-on experience with Keycloak and Azure SSO for secure authentication and user management.
Kubernetes: In-depth knowledge of Kubernetes, including implementing SSO in containerized environments.
Database Management: Skilled in PostgreSQL administration, including performance tuning and optimization.
Search & Analytics: Experienced in configuring, optimizing, and scaling Elasticsearch for high-performance search and analytics workloads.
Vector Databases: Practical exposure to vector database technologies, supporting AI/ML-driven applications.
Azure AI & LLMs: Familiar with deploying and managing Large Language Models (LLMs) using Azure AI, particularly through the Azure OpenAI Service.
System Architecture: Expertise in implementing microservices and MACH architecture (Microservices, API-first, Cloud-native, Headless).
Code Quality & Governance: Proficient in integrating SonarQube for continuous code quality analysis and enforcement across the development lifecycle.

Nice To Have
Deployment Automation: Experience in creating reproducible, automated installation processes for on-premise environments.
Infrastructure as Code (IaC): Proficient with Terraform, Ansible, or similar tools for automating deployments in both cloud and on-premise setups.
Cloud Platforms: Strong hands-on experience with AWS, Azure, and Google Cloud Platform (GCP).
CI/CD Pipelines: Skilled in using Jenkins, GitHub Actions, and Azure DevOps for continuous integration and delivery.
Monitoring & Observability: Experience with Prometheus, Grafana, and the ELK stack in managing distributed systems.
Security & DevSecOps: Knowledge of integrating vulnerability scanning, secret management (e.g., HashiCorp Vault), and DevSecOps best practices into the delivery pipeline.
Containerization: Proficient with Docker and managing container registries.
Scripting & Automation: Strong scripting skills in Bash, Python, and PowerShell for automating tasks and processes.
Network Management: Experience working with ingress controllers and service mesh technologies such as Istio or Linkerd.
Configuration Management: Hands-on with Helm charts and Kustomize for Kubernetes resource configuration.

Qualifications
5+ years of DevOps/SRE experience in both cloud and on-premise environments.
Demonstrated experience with microservices architecture.
Experience with Elasticsearch and modern AI infrastructure components.
Familiarity with vector databases (such as Pinecone, Milvus, or Weaviate).
Experience deploying LLMs on Azure AI or similar platforms.
Experience automating complex installation processes.
Strong problem-solving abilities and communication skills.
Relevant certifications (e.g., CKA, AWS/Azure certifications) a plus.

Diversity, Equity, Inclusion & Equal Employment Opportunity at Banyan: Banyan affirms that inequality is detrimental to our Global Teams, associates, our Operating Companies, and the communities we serve. As a collective, our goal is to impact lasting change through our actions. Together, we unite for equality and equity.
Banyan is committed to equal employment opportunities regardless of any protected characteristic, including race, color, genetic information, creed, national origin, religion, sex, affectional or sexual orientation, gender identity or expression, lawful alien status, ancestry, age, marital status, or protected veteran status and will not discriminate against anyone on the basis of a disability. We support an inclusive workplace where associates excel based on personal merit, qualifications, experience, ability, and job performance.
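The vector-database qualification in the posting above (Pinecone, Milvus, Weaviate) centers on nearest-neighbour search over embedding vectors. A brute-force sketch of that core operation in plain Python; the vectors and document ids are illustrative, and real vector databases add approximate indexing (e.g., HNSW) to make this scale:

```python
# Brute-force nearest-neighbour search: what a vector database does at
# its core, minus the approximate indexing that makes it fast at scale.
# The 2-d vectors and document ids below are purely illustrative.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def nearest(query, index):
    """Return the id of the stored vector most similar to the query."""
    return max(index, key=lambda doc_id: cosine_similarity(query, index[doc_id]))

index = {
    "doc_a": (1.0, 0.0),
    "doc_b": (0.0, 1.0),
    "doc_c": (0.7, 0.7),
}
best = nearest((0.9, 0.1), index)
print(best)  # doc_a
```

In an AI/ML application the vectors would be model embeddings with hundreds of dimensions, but the similarity ranking works exactly the same way.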
Posted 2 days ago
9.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Dear Candidate,

Greetings from Peoplefy Info Solutions! We are recruiting for a Senior DevOps Engineer role for one of our clients in Pune.

Role - Senior DevOps Engineer
Experience - 9 to 14 years
Location - Pune (5 days in office)

We prefer candidates who are currently working in a product-based company.

About the Client (product company) - Our client is a rapidly growing, mission-driven technology company focused on transforming the public sector through cutting-edge cloud solutions. They specialize in modernizing critical government operations, including budgeting and planning, procurement, asset and financial management, permitting, and citizen engagement. With a strong emphasis on transparency, efficiency, and data-driven decision-making, the company supports thousands of public agencies in delivering smarter, more responsive services to their communities. Backed by top-tier investors and recognized for their innovative approach, they are a leader in the government technology space. This is a unique opportunity to join a high-impact organization that is reshaping how government works in the digital age.

Job Description :
As a Sr. DevOps Engineer at the company, you'll build best-in-class multi-tenant SaaS solutions that enable efficiency, transparency, and accountability. You'll be a key member of our engineering team, writing software, delivering new cloud infrastructure, and automating CI/CD processes in a fast-paced, agile environment using modern technologies, including GitHub Actions, Terraform, Kubernetes, AWS, Cloudflare, and Grafana. In this role, you'll have the opportunity to collaborate closely with engineering leadership and application engineers. Your strong collaboration skills and ability to execute quickly will be key to your success.
A typical day would involve optimizing deployment processes; building, upgrading, and re-architecting infrastructure; fine-tuning resource utilization; and ensuring monitoring and alerting are configured for all aspects of the application. At the company, we value natural self-starters who can effectively communicate ideas and contribute to our respect, dedication, and fun culture. If you have a passion for good deployment design and solid cloud architecture and love clean code, principles over dogma, and making the world a little better every day, you'll find a perfect alignment with our values.

Responsibilities:
Architect, deploy, and maintain a highly available and scalable multi-tenant SaaS environment in AWS.
Implement and manage infrastructure as code using tools such as Terraform.
Optimize services for cost-efficiency, performance, and reliability.
Ensure high reliability and disaster recovery readiness across all services.
Enforce security policies, procedures, and standards to protect sensitive data.
Implement best practices for securing resources, including VPC configurations, IAM roles, security groups, and network ACLs.
Work with compliance teams to ensure adherence to regulations and industry standards such as SOC 2 and StateRAMP.
Design, build, and maintain robust CI/CD pipelines using tools like GitHub Actions and CircleCI.
Facilitate seamless integration and continuous deployment processes to ensure rapid and reliable delivery of new features and patches.
Set up and manage comprehensive monitoring and alerting systems using tools like CloudWatch, Prometheus, or Grafana.
Develop and implement incident response protocols; lead incident investigations and post-mortems.
Provide mentorship and training to junior engineers, fostering a culture of continuous learning and improvement.
Collaborate with cross-functional teams to define technical requirements and deliver high-quality solutions.
Requirements and Preferred Experience:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field from a premier institution.
10+ years of experience in software engineering with a deep focus on cloud operations, security, and DevOps.
4+ years of running Kubernetes at scale and in production on public clouds.
Proficiency in at least one modern programming language (e.g., Python, Java, Go, or Ruby).
Proven track record in designing and managing AWS infrastructure for SaaS applications.
Extensive experience with infrastructure-as-code tools such as Terraform.
Strong knowledge of AWS services, including EC2, RDS, S3, VPC, IAM, and Lambda, among others.
Experience working with Redis, AWS (S3, CloudFront), Cloudflare, Kubernetes, Kustomize, and Docker.
Experience with PostgreSQL and/or MS SQL Server.
Expertise in CI/CD tools like GitHub Actions, CircleCI, Jenkins, or equivalent.
Advanced skills in monitoring and logging tools such as CloudWatch, Prometheus, or Grafana.
In-depth understanding of cloud security practices, including encryption, identity and access management (IAM), and network security.
Strong problem-solving skills with a proactive approach to identifying issues and delivering solutions.
Excellent communication skills with the ability to collaborate effectively across various teams.

If you have any queries, please write to bhuvaneshwaran.se@peoplefy.com.
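Kustomize, named in this posting and several others above, layers an environment-specific overlay over a base Kubernetes manifest. Its core operation can be sketched as a recursive dictionary merge; this mirrors the idea of a strategic-merge patch, not Kustomize's actual implementation, and both manifests below are illustrative:

```python
# Kustomize-style overlay sketch: a base manifest patched by a
# production overlay via recursive merge. Nested dicts merge key by
# key; scalar values in the overlay win. This illustrates the concept
# only -- real Kustomize also handles lists, name prefixes, etc.
def merge(base: dict, overlay: dict) -> dict:
    """Recursively merge overlay into base, returning a new dict."""
    out = dict(base)
    for key, value in overlay.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = merge(out[key], value)  # descend into nested maps
        else:
            out[key] = value  # overlay value replaces base value
    return out

base = {
    "kind": "Deployment",
    "spec": {"replicas": 1, "template": {"image": "app:latest"}},
}
prod_overlay = {"spec": {"replicas": 3}}

prod = merge(base, prod_overlay)
print(prod)
# {'kind': 'Deployment', 'spec': {'replicas': 3, 'template': {'image': 'app:latest'}}}
```

Keeping the base manifest untouched and expressing per-environment differences as small overlays is exactly the workflow Kustomize encourages.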
Posted 4 days ago
2.5 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Comcast brings together the best in media and technology. We drive innovation to create the world's best entertainment and online experiences. As a Fortune 50 leader, we set the pace in a variety of innovative and fascinating businesses and create career opportunities across a wide range of locations and disciplines. We are at the forefront of change and move at an amazing pace, thanks to our remarkable people, who bring cutting-edge products and services to life for millions of customers every day. If you share in our passion for teamwork, our vision to revolutionize industries and our goal to lead the future in media and technology, we want you to fast-forward your career at Comcast.

Job Summary
Responsible for planning and designing new software and web applications. Edits new and existing applications. Implements, tests, and debugs defined software components. Documents all development activity. Works with moderate guidance in own area of knowledge.

Job Description
Position: DevOps Engineer 2
Experience: 2.5 to 4.5 years
Job Location: Chennai, Tamil Nadu

Technical Skills
Must have: Terraform, Docker and Kubernetes, CI/CD, AWS, Bash, Python, Linux/Unix, Git, DBMS (e.g. MySQL), NoSQL (e.g. MongoDB)
Good to have: Ansible, Helm, Prometheus, ELK stack, R, GCP/Azure

Key Responsibilities
Design, build, and maintain efficient, reusable, and reliable code.
Work with analysis, operations, and test teams to achieve the best possible outcome within time and budget.
Troubleshoot infrastructure issues.
Attend cloud engineering meetings.
Participate in code reviews and quality assurance activities.
Participate in estimation discussions with the product team.
Continuously improve knowledge and coding skills.

Qualifications & Requirements
Bachelor's degree in computer science, engineering, or a related field.
Experience in a scripting language (e.g. Bash, Python).
3+ years of hands-on experience with Docker and Kubernetes.
3+ years of hands-on experience with CI tools (e.g. Jenkins, GitLab CI, GitHub Actions, Concourse CI, ...).
2+ years of hands-on experience with CD tools (e.g. ArgoCD, Helm, Kustomize).
2+ years of hands-on experience with Linux/UNIX systems.
2+ years of hands-on experience with cloud providers (e.g. AWS, GCP, Azure).
2+ years of hands-on experience with one IaC framework (e.g. Terraform, Pulumi, Ansible).
Basic knowledge of virtualization technologies (e.g. VMware) is a plus.
Basic knowledge of one database (MySQL, SQL Server, Couchbase, MongoDB, Redis, ...) is a plus.
Basic knowledge of Git and one Git provider (e.g. GitLab, GitHub).
Basic knowledge of networking.
Experience writing technical documentation.
Good communication and time management skills.
Able to work independently and as part of a team.
Analytical thinking and a problem-solving attitude.

Comcast is proud to be an equal opportunity workplace. We will consider all qualified applicants for employment without regard to race, color, religion, age, sex, sexual orientation, gender identity, national origin, disability, veteran status, genetic information, or any other basis protected by applicable law.

Base pay is one part of the Total Rewards that Comcast provides to compensate and recognize employees for their work. Most sales positions are eligible for a Commission under the terms of an applicable plan, while most non-sales positions are eligible for a Bonus. Additionally, Comcast provides best-in-class Benefits to eligible employees. We believe that benefits should connect you to the support you need when it matters most, and should help you care for those who matter most. That's why we provide an array of options, expert guidance and always-on tools, that are personalized to meet the needs of your reality – to help support you physically, financially and emotionally through the big milestones and in your everyday life. Please visit the compensation and benefits summary on our careers site for more details.
Education Bachelor's Degree While possessing the stated degree is preferred, Comcast also may consider applicants who hold some combination of coursework and experience, or who have extensive related professional experience. Relevant Work Experience 2-5 Years Show more Show less
Posted 4 days ago
2.0 - 4.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Position: DevOps Engineer
Years of Experience: 2-4 years
Location: Onsite, Indore

Core Responsibilities
- Infrastructure Management: Design and manage scalable AWS infrastructure for microservices.
- CI/CD & Deployment: Build and maintain CI/CD pipelines using GitHub Actions and ArgoCD.
- Container Orchestration: Manage Docker containers with Kubernetes, EKS, ECS, and Fargate.
- Automation: Use Terraform and Ansible to automate infrastructure setup and configuration.
- Security & Configuration: Manage secrets and app settings securely using AWS Secrets Manager.
- Release Management: Handle deployments, rollbacks, and releases using Helm and Kustomize.
- Troubleshooting: Fix infrastructure, deployment, and performance issues across all environments.
- Documentation: Maintain clear documentation for systems, processes, and operations.

Required Technical Expertise
- Cloud: AWS (EC2, S3, VPC, ELB, Route 53, Lambda, IAM, CloudFront, etc.)
- Containers: Docker, Kubernetes, ECS, EKS, Fargate
- Microservices: Experience with scaling, routing, monitoring, and service discovery
- IaC & Config Management: Terraform, CloudFormation, Ansible
- CI/CD Tools: GitHub Actions, Jenkins, GitLab CI, ArgoCD
- Deployment Tools: Helm, Kustomize
- Monitoring: Prometheus, Grafana, CloudWatch
- Scripting: Bash, Python
- Version Control: Git, GitLab
- Operating Systems: Linux (Ubuntu/CentOS), Windows Server
- Databases & Messaging: MongoDB, PostgreSQL, MySQL, RabbitMQ, Redis
- Code Quality & Security: SonarQube
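The Kustomize-based release management mentioned above works by overlaying small, environment-specific patches onto a shared base manifest. The idea behind a strategic-merge patch can be sketched in plain Python; this is an illustrative toy (hypothetical manifest names, and the real kustomize tool implements far richer merge semantics, including list merging by key):

```python
# Toy sketch of a strategic-merge patch: overlay environment-specific fields
# onto a base Kubernetes manifest, dict by dict. Not the real kustomize.
def merge_patch(base: dict, patch: dict) -> dict:
    """Recursively overlay `patch` onto `base`, returning a new dict."""
    merged = dict(base)
    for key, value in patch.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_patch(merged[key], value)
        else:
            merged[key] = value
    return merged

# Hypothetical base deployment and a production overlay that only bumps replicas.
base = {
    "kind": "Deployment",
    "metadata": {"name": "api"},
    "spec": {
        "replicas": 1,
        "template": {"spec": {"containers": [{"name": "api", "image": "api:dev"}]}},
    },
}
prod_patch = {"spec": {"replicas": 3}}

prod = merge_patch(base, prod_patch)  # replicas overridden, everything else kept
```

The overlay only states what differs per environment, which is why the same base can serve dev, staging, and production.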
Posted 5 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we're helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title and Summary
Software Engineer II

Overview
The Cloud Platforms team is responsible for the day-to-day maintenance and operation of various cloud platforms and their supporting services. Cloud Platform Engineers are engaged in the project lifecycle as infrastructure is designed and ultimately inherit responsibility for the new environment. The primary responsibility of the team is to react to incidents in Cloud, PaaS, Cloud Foundry, Kubernetes and pipeline infrastructure platforms. Secondary objectives are to improve the resiliency of systems through active contribution back to design, and to automate repetitive tasks. The role requires a moderate understanding of the various platforms used to deliver traditional and cloud services, the ability to write automation code to remediate recurring issues, and a working understanding of Continuous Delivery principles, software development methodologies and infrastructure automation. This is a hands-on role that deals with reacting to, and proactively avoiding, issues with the infrastructure platforms used to deliver cloud services. Do you enjoy creating code-based solutions to replace manual processes? Are you the type of person who will drive the solution and team in the best direction instead of the easiest?

Role
- Interact with and create effective monitoring systems that reduce the need for human intervention in daily web operations
- Respond to incidents in various platforms
- Create high-quality, rugged code solutions to automate detection and recovery of common operational problems
- Identify and create automation that can be used by initial responders for easily resolvable issues

All About You
- Expert understanding of operational duties; compensated on-call shifts
- Working knowledge of target platforms (Cloud Foundry, Concourse, Kubernetes, Helm, Kustomize, kubectl, RKE, kubeadm)
- Experience with Python and Bash scripting
- Familiarity with F5 load balancers, API technology and XML gateways preferred
- Highly attuned to security needs and best practices
- Ability to write quality automation code in more than one language
- Familiarity with multiple cloud vendors and past experience a plus

Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization; therefore, every person working for, or on behalf of, Mastercard is responsible for information security and must:
- Abide by Mastercard's security policies and practices;
- Ensure the confidentiality and integrity of the information being accessed;
- Report any suspected information security violation or breach; and
- Complete all periodic mandatory security trainings in accordance with Mastercard's guidelines.

R-245067
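Automated detection and recovery of the kind described above usually reduces to a check/remediate loop with exponential backoff between attempts. A minimal, generic sketch follows; the flaky "service" is simulated, and none of the names refer to real Mastercard tooling:

```python
import time

def remediate_with_backoff(check, remediate, attempts=4, base_delay=0.0):
    """Run `check`; on failure call `remediate` and retry, backing off
    exponentially. Returns the number of remediation attempts used."""
    for attempt in range(attempts):
        if check():
            return attempt
        remediate()
        time.sleep(base_delay * (2 ** attempt))  # 0s here; seconds in production
    raise RuntimeError(f"remediation failed after {attempts} attempts")

# Simulated flaky service: becomes healthy after two restarts.
state = {"restarts": 0}

def check():
    return state["restarts"] >= 2

def remediate():
    state["restarts"] += 1  # stand-in for e.g. restarting a pod

used = remediate_with_backoff(check, remediate)
```

The same loop shape underlies most self-healing automation: the interesting work is in making `check` and `remediate` safe to run unattended.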
Posted 1 week ago
5.0 years
0 Lacs
Hyderābād
On-site
Who We Are
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.

The Role
In Systems Management at Kyndryl, you will be critical in ensuring the smooth operation of our customers' IT infrastructure. You'll be the mastermind behind maintaining and optimizing their systems, ensuring they're always running at peak performance.

Key Responsibilities:
- Develop and customise OpenTelemetry Collectors to support platform-specific instrumentation (Linux, Windows, Docker, Kubernetes).
- Build processors, receivers, and exporters in OTel to align with Elastic APM data schemas.
- Create robust and scalable pipelines for telemetry data collection and delivery to the Elastic Stack.
- Work closely with platform and application teams to enable auto-instrumentation and custom telemetry.
- Automate deployment of collectors via Ansible, Terraform, Helm, or Kubernetes operators.
- Collaborate with the Elastic Observability team to validate ingestion formats, indices, and dashboard readiness.
- Benchmark performance and recommend cost-effective designs.

Your Future at Kyndryl
Kyndryl's focus on providing innovative IT solutions to its customers means that in Systems Management, you will be working with the latest technology and will have the opportunity to learn and grow your skills. You may also have the opportunity to work on large-scale projects and collaborate with other IT professionals from around the world.

Who You Are
You're good at what you do and possess the required experience to prove it. However, equally as important, you have a growth mindset: keen to drive your own personal and professional development. You are customer-focused, someone who prioritizes customer success in their work. And finally, you're open and borderless, naturally inclusive in how you work with others.

Required Technical and Professional Expertise
- 5+ years of experience with Golang or similar languages in a systems development context.
- Deep understanding of OpenTelemetry Collector architecture, pipelines, and customization.
- Experience with Elastic APM ingestion endpoints and schema alignment.
- Familiarity with Docker, Kubernetes, and system observability (eBPF optional but preferred).
- Hands-on with deployment automation tools: Ansible, Terraform, Helm, Kustomize.
- Strong grasp of telemetry protocols: OTLP, gRPC, HTTP, and metrics formats like Prometheus and StatsD.
- Strong knowledge of Elastic APM, Fleet, and integrations with OpenTelemetry and metric sources.
- Experience with data ingest and transformation using Logstash, Filebeat, Metricbeat, or custom agents.
- Proficiency in designing dashboards, custom visualizations, and alerting in Kibana.
- Understanding of ILM, hot-warm-cold tiering, and Elastic security controls.

Preferred Technical and Professional Experience
- Contributions to the OpenTelemetry Collector or related CNCF projects.
- Elastic Observability certifications or demonstrable production experience.
- Experience in cost modeling and telemetry data optimization.
- Exposure to Elastic Cloud, ECE, or ECK.
- Familiarity with alternatives like Dynatrace, Datadog, AppDynamics or SigNoz for benchmarking.

Being You
Diversity is a whole lot more than what we look like or where we come from; it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice.
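The receivers, processors, and exporters named above compose into the OpenTelemetry Collector's pipeline shape: a receiver takes telemetry in, each processor enriches or drops records, and an exporter sends what survives. A toy sketch of that flow in plain Python; these are stand-in callables for illustration, not the real Collector or OTel SDK APIs:

```python
# Toy model of a Collector-style pipeline: receive -> process* -> export.
class Pipeline:
    def __init__(self, processors, exporter):
        self.processors = processors
        self.exporter = exporter

    def receive(self, record: dict) -> None:
        for process in self.processors:
            record = process(record)
            if record is None:          # a processor dropped the record
                return
        self.exporter(record)

exported = []                           # stand-in exporter target

def drop_debug(record):                 # filter processor: drop noisy telemetry
    return None if record.get("level") == "debug" else record

def add_env_attribute(record):          # attribute processor: enrich records
    return {**record, "env": "prod"}

pipeline = Pipeline([drop_debug, add_env_attribute], exported.append)
pipeline.receive({"msg": "ok", "level": "info"})
pipeline.receive({"msg": "noisy", "level": "debug"})   # filtered out
```

Custom Collector components follow this same contract, which is what makes it practical to align arbitrary sources with Elastic APM's expected schemas in the processor stage.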
This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That’s the Kyndryl Way. What You Can Expect With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you, we want you to succeed so that together, we will all succeed. Get Referred! If you know someone that works at Kyndryl, when asked ‘How Did You Hear About Us’ during the application process, select ‘Employee Referral’ and enter your contact's Kyndryl email address.
Posted 1 week ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Summary:
We are seeking a highly skilled and experienced Lead Infrastructure Engineer to join our dynamic team. The ideal candidate will be passionate about building and maintaining complex systems, with a holistic approach to architecting infrastructure that survives and thrives in production. You will play a key role in designing, implementing, and managing cloud infrastructure, ensuring scalability, availability, security, and optimal performance versus spend. You will also provide technical leadership and mentorship to other engineers, and engage with clients to understand their needs and deliver effective solutions.

Responsibilities:
- Design, architect, and implement scalable, highly available, and secure infrastructure solutions, primarily on Amazon Web Services (AWS).
- Develop and maintain Infrastructure as Code (IaC) using Terraform or AWS CDK for enterprise-scale maintainability and repeatability.
- Implement robust access control via IAM roles and policy orchestration, ensuring least privilege and auditability across multi-environment deployments.
- Contribute to secure, scalable identity and access patterns, including OAuth2-based authorization flows and dynamic IAM role mapping across environments.
- Support deployment of infrastructure Lambda functions.
- Troubleshoot issues and collaborate with cloud vendors on managed service reliability and roadmap alignment.
- Utilize Kubernetes deployment tools such as Helm/Kustomize in combination with GitOps tools such as ArgoCD for container orchestration and management.
- Design and implement CI/CD pipelines using platforms like GitHub, GitLab, Bitbucket, Cloud Build, Harness, etc., with a focus on rolling deployments, canaries, and blue/green deployments. Ensure auditability and observability of pipeline states.
- Implement security best practices, audit, and compliance requirements within the infrastructure.
- Provide technical leadership, mentorship, and training to engineering staff.
- Engage with clients to understand their technical and business requirements, and provide tailored solutions.
- If needed, lead agile ceremonies and project planning, including developing agile boards and backlogs with support from our Service Delivery Leads.
- Troubleshoot and resolve complex infrastructure issues.
- Potentially participate in pre-sales activities and provide technical expertise to sales teams.

Qualifications:
- 10+ years of experience in an Infrastructure Engineer or similar role.
- Extensive experience with Amazon Web Services (AWS).
- Proven ability to architect for scale, availability, and high-performance workloads.
- Ability to plan and execute zero-disruption migrations.
- Experience with enterprise IAM and familiarity with authentication technology such as OAuth2 and OIDC.
- Deep knowledge of Infrastructure as Code (IaC) with Terraform and/or AWS CDK.
- Strong experience with Kubernetes and related tools (Helm, Kustomize, ArgoCD).
- Solid understanding of Git, branching models, CI/CD pipelines and deployment strategies.
- Experience with security, audit, and compliance best practices.
- Excellent problem-solving and analytical skills.
- Strong communication and interpersonal skills, with the ability to engage with both technical and non-technical stakeholders.
- Experience in technical leadership, mentoring, team-forming and fostering self-organization and ownership.
- Experience with client relationship management and project planning.

Certifications: Relevant certifications (for example Certified Kubernetes Administrator, AWS Certified Solutions Architect - Professional, AWS Certified DevOps Engineer - Professional, etc.). Software development experience (for example Terraform, Python). Experience with machine learning infrastructure.

Education: B.Tech/B.E. in computer science, a related field, or equivalent experience.

Sandeep Kumar
sandeep.vinaganti@quesscorp.com
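A canary rollout of the kind referenced above ramps traffic to the new version in increments, pausing at verification gates between steps. Computing such a ramp is trivial but worth seeing; this is an illustrative sketch only, and in practice the weights would live in the service mesh or load balancer configuration, not in application code:

```python
def canary_schedule(steps: int, final_pct: int = 100) -> list[int]:
    """Evenly ramp canary traffic weight over `steps` increments,
    e.g. to be applied between pipeline approval gates."""
    if steps < 1:
        raise ValueError("need at least one step")
    return [round(final_pct * (i + 1) / steps) for i in range(steps)]

ramp = canary_schedule(4)       # four gates on the way to 100% traffic
single = canary_schedule(1)     # degenerate case: straight cutover
```

Blue/green is the `steps=1` degenerate case: all traffic flips at once, with the old environment kept warm for instant rollback.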
Posted 1 week ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description
The Cloud Engineer will be part of the Engineering team and will need strong knowledge of application monitoring, infrastructure monitoring, automation, maintenance, and service reliability improvements. Specifically, we are searching for someone who brings fresh ideas, demonstrates a unique and informed viewpoint, and enjoys collaborating with a cross-functional team to develop real-world solutions and positive user experiences at every interaction.

Role & Responsibilities
- Design, automate and manage a highly available and scalable cloud deployment that allows development teams to deploy and run their services.
- Collaborate with engineering and architecture teams to evaluate and identify optimal cloud solutions, leveraging scalability, high performance and security.
- Design and implement sustainable cloud and platform services.
- Build a robust, scalable and stable infrastructure.
- Manage hosting of external containers in a private cloud.
- Extensively automate deployments and manage applications in GCP.
- Develop and maintain cloud solutions in accordance with best practices.
- Ensure efficient functioning of data storage and processing functions in accordance with company security policies and best practices in cloud security.
- Collaborate with engineering teams to identify optimization strategies and help develop self-healing capabilities.
- Develop strong observability capabilities.
- Identify, analyse, and resolve infrastructure vulnerabilities and application deployment issues.
- Regularly review existing systems and make recommendations for improvements.

Required Skills and Selection Criteria:
- Proven work experience in designing, deploying and operating mid- to large-scale public cloud environments.
- Proven work experience with Docker/Kubernetes (image building, k8s scheduling).
- Experience in package, config and deployment management via Helm, Kustomize, ArgoCD.
- Proven working experience in onboarding and troubleshooting cloud services.
- Proven work experience in provisioning Infrastructure as Code (IaC) using Terraform Enterprise or Community edition.
- Proven work experience in writing custom Terraform providers/plug-ins with Sentinel policy as code.
- Professional certification is an advantage; public cloud experience with GCP is good to have.
- Strong knowledge of GitHub and DevOps (Cloud Build is an advantage).
- Proficient in scripting and coding, including traditional languages like Python, PowerShell, Golang, Java, JS and Node.js.
- Proven working experience in messaging middleware: Apache Kafka, RabbitMQ, Apache ActiveMQ.
- Proven working experience with API gateways; Apigee is an advantage.
- Proven working experience in API development, REST.
- Proven working experience in security and IAM, SSL/TLS, OAuth and JWT.
- Extensive knowledge and hands-on experience with Grafana and Prometheus metrics libraries.
- Exposure to cloud monitoring and logging.
- Experience with distributed storage technologies like NFS, HDFS, Ceph and S3, as well as dynamic resource management frameworks (Mesos, Kubernetes, Yarn).
- Experience with automation tools is a priority.

Preferred Qualifications
- Previous success in technical engineering
- Must have 5+ years of overall experience
- Must have 3+ years of experience in public cloud
- Must have 3+ years of experience in cloud infrastructure provisioning
- Must have 3+ years of experience in cloud engineering
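Hands-on Prometheus work eventually means dealing with its text exposition format, in which each sample is a metric name, optional labels, and a value. A simplified parser for a single sample line is sketched below; it is illustrative only and ignores the escaping rules, timestamps, and `# HELP`/`# TYPE` comment lines that the full format allows:

```python
import re

# Matches e.g.: http_requests_total{method="post",code="200"} 1027
SAMPLE = re.compile(
    r'^(?P<name>[a-zA-Z_:][a-zA-Z0-9_:]*)'   # metric name
    r'(?:\{(?P<labels>[^}]*)\})?'             # optional {label="value",...}
    r'\s+(?P<value>\S+)'                      # sample value
)

def parse_sample(line: str):
    """Parse one Prometheus exposition-format sample line (simplified:
    no escaped quotes or commas inside label values, no timestamps)."""
    m = SAMPLE.match(line.strip())
    if not m:
        raise ValueError(f"not a sample line: {line!r}")
    labels = {}
    if m.group("labels"):
        for pair in m.group("labels").split(","):
            key, value = pair.split("=", 1)
            labels[key.strip()] = value.strip().strip('"')
    return m.group("name"), labels, float(m.group("value"))

name, labels, value = parse_sample('http_requests_total{method="post",code="200"} 1027')
```

Production code should use a maintained client library rather than hand-rolled parsing, but the format's simplicity is why so many exporters can emit it from a few lines of code.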
Posted 1 week ago
0 years
0 Lacs
Hyderābād
On-site
Do you love understanding every detail of how new technologies work? Join the team that serves as Apple's nerve center, our Information Systems and Technology group. There are countless ways you'll contribute here, whether you're coordinating technology needs for product launches, designing music solutions for retail locations, or ensuring the strength of in-store Wi-Fi connections. From Apple Pay to the Apple website to our data centers around the globe, you'll help design and manage the massive systems that countless employees and customers rely on every day. You'll also build custom tools for employees, empowering them to solve complex problems on their own. Join our team, and together we'll explore all the ways to improve how Apple operates, freeing our employees to do what they do best: craft magical experiences for our customers.

Are you a passionate operations engineer who wants to work on solving large-scale problems? Join us in building best-in-class solutions and implementing sophisticated software applications across IS&T. At Apple, we support both open-source and home-grown technologies to provide internal Apple developers with the best possible CI/CD solutions. In this role you will have the unique opportunity to own and improve tooling for best-in-class, large-scale platform solutions that help build modern software systems. This role is primarily responsible for building and managing tools that enable software releases in a fast-paced enterprise environment. We operate with on-prem, private, and public cloud platforms. A DevOps Engineer would partner closely with global software development teams and infrastructure teams.

Description
As part of this team you will be exposed to a variety of challenges supporting and building highly available systems, working closely with U.S.- and India-based teams, and have the opportunity to expand the capabilities the team has to offer to the wider organization. This may include:
- Designing and implementing new solutions to streamline manual operations.
- Triaging security and production issues along with other operational team members, and conducting root cause analysis of critical issues.
- Expanding the capacity and performance of current operational systems.

The ideal candidate will be a self-motivated, hands-on, dynamic and detail-oriented individual with a strong technical background.

Minimum Qualifications
- 1-2 years of experience in software engineering
- Bachelor's or Master's degree in Computer Science or a related field, or equivalent practical experience

Key Qualifications
- Knowledge of software engineering standard processes.
- Understanding of software architecture; able to deploy and optimize infrastructure across on-prem and third-party clouds.
- Good foundation in at least one programming or scripting language.
- Ability to support, troubleshoot and maintain aspects of infrastructure, including compute, systems, network, storage and datastores.
- Experience implementing applications in private/public cloud infrastructure and container technologies, like Kubernetes and Docker.
- Experience developing software tooling to deliver programmable infrastructure and environments, and building CI/CD pipelines with tools like Terraform, CloudFormation, Ansible, and the Kubernetes toolset (e.g., kubectl, kustomize).

Preferred Qualifications
- Familiarity with build and deployment systems using Maven and Git
- Familiarity with observability tools (e.g., Grafana, Splunk) is a plus
- Experience or interest in automation is a huge plus
- Self-motivated, independent, and dedicated with great organizational skills
- Excellent written and verbal communication skills
Posted 2 weeks ago
7.0 - 10.0 years
0 Lacs
Karnataka, India
On-site
Who You'll Work With
You'll be joining a dynamic, fast-paced Global EADP (Enterprise Architecture & Developer Platforms) team within Nike. Our team is responsible for building innovative cloud-native platforms that scale with the growing demands of the business. Collaboration and creativity are at the core of our culture, and we're passionate about pushing boundaries and setting new standards in platform development.

Who We Are Looking For
We are looking for an ambitious Lead Software Engineer – Platforms with a passion for cloud-native development and platform ownership. You are someone who thrives in a collaborative environment, is excited by cutting-edge technology, and excels at problem-solving. You have a strong understanding of AWS Cloud Services, Kubernetes, DevOps, Databricks, Python and other cloud-native platforms. You should be an excellent communicator, able to explain technical details to both technical and non-technical stakeholders, and operate with urgency and integrity.

Key Skills & Traits
- Deep expertise in Kubernetes, AWS services, and full-stack development.
- Working experience in designing and building production-grade microservices in any programming language, preferably Python.
- Experience building end-to-end CI/CD pipelines to build, test and deploy to different AWS environments such as Lambda, EC2, ECS, EKS, etc.
- Experience in AI/ML, with proven knowledge of building chatbots using LLMs.
- Familiarity with software engineering best practices, including unit tests, code reviews, version control, and production monitoring.
- Strong experience with React and Node.js; proficient in managing cloud-native platforms, with a strong PaaS (Platform as a Service) focus.
- A proactive approach with the ability to work independently in a fast-paced, agile environment.
- Strong collaboration and problem-solving skills.
- Mentoring the team through complex technical problems.

What You'll Work On
You will play a key role in shaping and delivering Nike's next-generation platforms. As a Lead Software Engineer, you'll leverage your technical expertise to build resilient, scalable solutions, manage platform performance, and ensure high standards of code quality. You'll also be responsible for leading the adoption of open-source and agile methodologies within the organization.

Day-to-Day Activities:
- Work deeply with Kubernetes, AWS services, Databricks, AI/ML, etc.
- Use infrastructure-as-code tools such as Helm, Kustomize, or Terraform.
- Implement open-source projects in K8s.
- Set up monitoring, logging, and alerting for Kubernetes clusters.
- Implement Kubernetes security best practices, like RBAC and network and pod security policies.
- Work with container runtimes like Docker.
- Automate infrastructure provisioning and configuration using Infrastructure as Code (IaC) tools such as Terraform or CloudFormation.
- Design, implement, and maintain robust CI/CD pipelines using Jenkins for efficient software delivery.
- Manage and optimize Artifactory repositories for efficient artifact storage and distribution.
- Architect, deploy, and manage AWS EC2 instances, Lambda functions, Auto Scaling Groups (ASG), and Elastic Block Store (EBS) volumes.
- Collaborate with cross-functional teams to ensure seamless integration of DevOps practices into the software development lifecycle.
- Monitor, troubleshoot, and optimize AWS resources to ensure high availability, scalability, and performance.
- Implement security best practices and compliance standards in the AWS environment.
- Develop and maintain scripts in Python, Groovy, and Shell for automation and core engineering tasks.
- Collaborate with product managers to scope new features and capabilities.

Qualifications
- 7-10 years of experience in designing and building production-grade platforms.
- Deep expertise in at least one of Python, React, or Node.js.
- Good knowledge of CI/CD pipelines and DevOps skills: Jenkins, Docker, Kubernetes, etc.
- Technical expertise in Kubernetes, AWS Cloud Services and cloud-native architectures.
- Proficiency in Python, Node.js, React, SQL, and AWS.
- Strong understanding of PaaS architecture and DevOps tools like Kubernetes, Jenkins, Terraform, Docker.
- Familiarity with governance, security features, and performance optimization.
- Keen attention to detail with a growth mindset and the desire to explore new technologies.
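Kubernetes RBAC, listed among the security practices above, grants access when any rule in a bound role covers both the requested verb and the resource. A toy model of that matching logic follows; it is a deliberate simplification (real RBAC also scopes rules by API group, namespace, and named resources, and supports aggregated roles):

```python
# Simplified model of Kubernetes RBAC rule matching: a request is allowed
# if any rule lists (or wildcards) both the verb and the resource.
def allows(rules: list[dict], verb: str, resource: str) -> bool:
    for rule in rules:
        verbs, resources = rule["verbs"], rule["resources"]
        if ("*" in verbs or verb in verbs) and ("*" in resources or resource in resources):
            return True
    return False

# A least-privilege, read-only role for pods and services.
read_only = [{"verbs": ["get", "list", "watch"], "resources": ["pods", "services"]}]

can_list_pods = allows(read_only, "list", "pods")
can_delete_pods = allows(read_only, "delete", "pods")
```

The least-privilege practice amounts to keeping `verbs` and `resources` lists as narrow as the workload's actual requests, and avoiding `"*"` entirely.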
Posted 2 weeks ago
6.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Role: Digital: Kubernetes
Experience range: 6-10 years
Location: Bangalore/Pune/Hyderabad/Kochi

Job description:
Technical Skills: Kubernetes (advanced), Docker, Helm, Terraform, CI/CD tools (Jenkins, GitLab CI, GitHub Actions), cloud platforms, monitoring tools (Prometheus, Grafana), networking and security, scripting languages.

Required Qualifications
- 5+ years of DevOps experience with a strong focus on Kubernetes
- Expert-level understanding of Kubernetes architecture and ecosystem
- Proficiency in containerization technologies (Docker, containerd)
- Advanced scripting skills (Bash, Python, Go)
- Experience with cloud platforms (AWS, Azure, or GCP)
- Strong understanding of networking, security, and system administration
- Expertise in CI/CD pipeline design and implementation
- Proven experience with Infrastructure as Code

Responsibilities
- Design and implement scalable Kubernetes architectures across multiple environments (development, staging, production)
- Develop and maintain CI/CD pipelines using tools like Jenkins, GitLab CI, or GitHub Actions
- Create and manage Infrastructure as Code (IaC) using Terraform, Ansible, or CloudFormation
- Implement and optimize container orchestration strategies
- Develop and maintain Helm charts and Kustomize configurations
- Configure and manage cloud infrastructure on AWS, Azure, or GCP
- Implement robust monitoring and observability solutions using Prometheus, Grafana, and the ELK stack
- Ensure high availability, disaster recovery, and security of Kubernetes clusters
- Conduct performance tuning and optimize resource utilization
- Implement and manage service mesh technologies (e.g., Istio)
- Provide technical leadership and mentorship to development and operations teams
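CI/CD pipeline design, mentioned throughout this posting, rests on one core behavior: stages run in order and a failure gates everything downstream. A minimal sketch of that fail-fast semantics (the stage names and lambdas are stand-ins for real build/test/deploy commands, not any specific CI system's API):

```python
def run_pipeline(stages):
    """Run named stages in order; stop at the first failure (fail-fast),
    mirroring how CI systems such as Jenkins or GitLab CI gate later
    stages on earlier ones."""
    results = {}
    for name, step in stages:
        ok = step()
        results[name] = "passed" if ok else "failed"
        if not ok:
            break  # later stages are skipped, as in a gated pipeline
    return results

# Toy stages standing in for real build/test/deploy commands.
stages = [
    ("build", lambda: True),
    ("test", lambda: False),   # simulated test failure
    ("deploy", lambda: True),  # never reached
]
print(run_pipeline(stages))  # {'build': 'passed', 'test': 'failed'}
```

Note that `deploy` never appears in the results: a gated pipeline records only the stages that actually ran, which is what makes the failure point easy to spot.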
Posted 2 weeks ago
4.0 years
0 Lacs
India
On-site
In the minute it takes you to read this job description, Bluecore has launched over 100,000 individually personalized marketing campaigns for our retail ecommerce customers!

Senior Software Engineer - Platform Engineering
We are looking for a Senior Software Engineer - Platform Engineering to help our engineering teams build scalable, extensible, reliable, and performant systems. The role will be hands-on: optimizing our Kubernetes clusters, managing our GCP infrastructure, and improving our DevOps and SRE practices. Learn more about our automation with this blog post on Argo, Kustomize and Config Connector.

Bluecore ingests hundreds of millions of events per day, sends millions of personalized emails, and manages hundreds of terabytes of data. We use Google Cloud hosted infrastructure services including Google App Engine, Kubernetes/GKE, BigQuery, PubSub and Cloud SQL. Our stack consists primarily of Python and Golang on the backend with gRPC services, and JavaScript (React) on the frontend. We emphasize a culture of making good tradeoffs, working as a team, and leaving your ego at the door. Bluecore’s Engineering team is made up of exemplary engineers who believe in working collaboratively to solve complex technical problems and build creative solutions that are as simple as possible, but as powerful as necessary.

You will be hands-on managing hundreds of servers with Infrastructure-as-Code tools (Terraform, Config Connector) and optimizing our multi-zone and multi-region network topology. You will be responsible for designing, building and supporting automation tools to help developers safely manage release operations and highly available systems. You will be maintaining Bluecore’s shared libraries, providing best practices for service development. You will be responsible for our system- and application-level security practices, from container scanning, to RBAC, to service- and user-level AuthN and AuthZ systems.
Responsibilities
- Manage hundreds of servers and thousands of containers on our Google Kubernetes Engine clusters using various automation tools.
- Manage zero-trust networks using Kyverno, Cilium, and Istio.
- Develop and scale our observability strategy with OpenTelemetry, Chronosphere, and other SaaS products.
- Build performant, reliable, high-quality systems at scale within one or more domains (observability, networking, Kubernetes, etc.).
- Provide significant contributions comfortably, even within domains with less expertise.
- Collaborate on large and complex projects with difficult, intertwined dependencies that push for smart automation and innovation.
- Promote coding standards, styles, and design principles across the organization.
- Provide ad hoc and incident availability as part of a team rotation. Ad hoc requests are triaged to support the greater engineering team’s needs; incident coverage involves troubleshooting and assisting during an incident, bringing it to resolution. Ad hoc/incident rotations are one week long on dayside hours.
- Proactively identify technology opportunities for the company, and push technical ideas, proposals, and plans to the entire organization and beyond.
- Provide perspective on, and advocate for, Platform Engineering’s technical strategy and decisions.
- Advise on and advocate for the best tools, methods, and approaches for the entire engineering organization.
- Evangelize Bluecore Engineering internally and externally, including leading external initiatives to promote Bluecore Engineering in the wider community.

Requirements
- 4-7 years of software engineering experience, primarily in systems and infrastructure management.
- Hands-on experience maintaining Kubernetes at scale and running various workloads on Kubernetes.
- Preferred: experience designing and implementing progressive improvements within API gateways (Gloo), Istio, and continuity scopes.
- Designing and maintaining network infrastructure, including cloud-based load balancing and ingress/egress design.
- Experience with metric and alerting tools, and with distributed tracing systems.
- Capable of owning engineering- and company-level issues or gaps, and successfully planning and executing toward resolution, while continuously identifying additional opportunities and proactively preventing upcoming risks.
- Experience developing technical roadmaps and estimates that have driven product and business growth over several quarters.
- A proven track record of successfully completing work that spans multiple teams and quarters, creating a large impact on business success.
- Experience excelling in a high-growth startup environment, or building out a new team or function within a larger company, preferred.
- Experience with technical team mentorship, including guiding other engineers to become more effective technical leaders and providing feedback on best practices in code and design.
- Able to identify your team's dependencies on other areas and across functions, communicate effectively to remove immediate blockers, and propose and implement process changes that strengthen and maintain efficiency.
- Adept at communicating with external customers about critical production issues, and at working with internal stakeholders from Bluecore’s customer success and support teams to propose and implement immediate fixes and resolution plans.

More About Us
Bluecore is a multi-channel personalization platform that gives retailers a competitive advantage in a digital-first world. Unlike systems built for mass marketing and a physical-first world, Bluecore unifies shopper and product data in a single platform, and using easy-to-deploy predictive models, activates welcomed one-to-one experiences at the speed and scale of digital.
Through Bluecore’s dynamic shopper and product matching, brands can personalize 100% of communications delivered to consumers through their shopping experiences, anywhere.

This comes to life in three core product lines:
- Bluecore Communicate™: a modern email service provider (ESP) + SMS
- Bluecore Site™: an onsite capture and personalization product
- Bluecore Advertise™: a paid media product

At Bluecore we believe in encouraging an inclusive environment in which employees feel encouraged to share their unique perspectives, demonstrate their strengths, and act authentically. We know that diverse teams are strong teams, and welcome those from all backgrounds and varying experiences. Bluecore is a proud equal opportunity employer. We are committed to fair hiring practices and to building a welcoming environment for all team members. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, disability, age, familial status or veteran status. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
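The observability and alerting work described in this posting usually involves suppressing alerts on brief metric spikes. A common pattern, used by Prometheus alert rules via their `for:` clause, is to fire only when a threshold is breached for several consecutive samples. A minimal sketch of that idea (the latency numbers are made up for illustration):

```python
def firing(samples, threshold, for_points):
    """Return True when the last `for_points` samples all exceed
    `threshold`, mimicking a Prometheus alert's `for:` clause that
    suppresses alerts on brief spikes."""
    if len(samples) < for_points:
        return False
    return all(v > threshold for v in samples[-for_points:])

latency_ms = [120, 480, 530, 510, 560]  # hypothetical p99 latencies
print(firing(latency_ms, threshold=500, for_points=3))  # True: last 3 all > 500
print(firing(latency_ms, threshold=500, for_points=4))  # False: 480 breaks the run
```

Requiring a sustained breach trades a little detection latency for far fewer pages, which is why the pattern shows up in nearly every alerting stack.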
Posted 2 weeks ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Comcast brings together the best in media and technology. We drive innovation to create the world's best entertainment and online experiences. As a Fortune 50 leader, we set the pace in a variety of innovative and fascinating businesses and create career opportunities across a wide range of locations and disciplines. We are at the forefront of change and move at an amazing pace, thanks to our remarkable people, who bring cutting-edge products and services to life for millions of customers every day. If you share in our passion for teamwork, our vision to revolutionize industries and our goal to lead the future in media and technology, we want you to fast-forward your career at Comcast.

Job Summary
Responsible for planning and designing new software and web applications. Analyzes, tests and assists with the integration of new applications. Documents all development activity. Assists with training non-technical personnel. Has in-depth experience, knowledge and skills in own discipline. Usually determines own work priorities. Acts as a resource for colleagues with less experience.

Job Description
Position: Cloud DevOps Engineer 3
Experience: 5 to 7 years
Job Location: Chennai, Tamil Nadu
HR Contact: Ramesh_M2@comcast.com

Technical Skills
Must have: Python, Terraform, Docker and Kubernetes, CI/CD, AWS, Bash, Linux/Unix, Git, DBMS (e.g. MySQL), NoSQL (e.g. MongoDB)
Good to have: Ansible, Helm, Prometheus, ELK stack, R, GCP/Azure

Key Responsibilities
- Design, build, and maintain efficient, reusable, and reliable code
- Work with analysis, operations, and test teams to achieve the best possible outcome within time and budget
- Troubleshoot infrastructure issues
- Attend cloud engineering meetings
- Participate in code reviews and quality assurance activities
- Participate in estimation discussions with the product team
- Continuously improve knowledge and coding skills

Qualifications & Requirements
- Bachelor’s degree in computer science, engineering, or a related field
- Experience in a scripting language (e.g. Bash, Python)
- 3+ years of hands-on experience with Docker and Kubernetes
- 3+ years of hands-on experience with CI tools (e.g. Jenkins, GitLab CI, GitHub Actions, Concourse CI, ...)
- 2+ years of hands-on experience with CD tools (e.g. ArgoCD, Helm, Kustomize)
- 2+ years of hands-on experience with Linux/Unix systems
- 2+ years of hands-on experience with cloud providers (e.g. AWS, GCP, Azure)
- 2+ years of hands-on experience with one IaC framework (e.g. Terraform, Pulumi, Ansible)
- Basic knowledge of virtualization technologies (e.g. VMware) is a plus
- Basic knowledge of one database (MySQL, SQL Server, Couchbase, MongoDB, Redis, ...) is a plus
- Basic knowledge of Git and one Git provider (e.g. GitLab, GitHub)
- Basic knowledge of networking
- Experience writing technical documentation
- Good communication and time management skills
- Able to work independently and as part of a team
- Analytical thinking and a problem-solving attitude

Disclaimer
This information has been designed to indicate the general nature and level of work performed by employees in this role. It is not designed to contain or be interpreted as a comprehensive inventory of all duties, responsibilities and qualifications. Comcast is proud to be an equal opportunity workplace.
We will consider all qualified applicants for employment without regard to race, color, religion, age, sex, sexual orientation, gender identity, national origin, disability, veteran status, genetic information, or any other basis protected by applicable law.

Base pay is one part of the Total Rewards that Comcast provides to compensate and recognize employees for their work. Most sales positions are eligible for a Commission under the terms of an applicable plan, while most non-sales positions are eligible for a Bonus. Additionally, Comcast provides best-in-class Benefits to eligible employees. We believe that benefits should connect you to the support you need when it matters most, and should help you care for those who matter most. That’s why we provide an array of options, expert guidance and always-on tools, that are personalized to meet the needs of your reality – to help support you physically, financially and emotionally through the big milestones and in your everyday life. Please visit the compensation and benefits summary on our careers site for more details.

Education: Bachelor's Degree. While possessing the stated degree is preferred, Comcast also may consider applicants who hold some combination of coursework and experience, or who have extensive related professional experience.

Relevant Work Experience: 5-7 years
Posted 2 weeks ago
7.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: Senior Kubernetes Developer
Location: On-Site (Gurugram)
Employment Type: Full-Time
Experience Level: 7+ Years

Job Summary:
We are seeking a highly skilled Senior Kubernetes Developer with 7+ years of experience in container orchestration, cloud-native application development, and infrastructure automation. The ideal candidate will have deep expertise in Kubernetes architecture and development, and a solid understanding of cloud platforms, DevOps practices, and distributed systems. You will be instrumental in designing, building, and optimizing scalable Kubernetes-based infrastructure and applications.

Key Responsibilities:
- Design, develop, and maintain Kubernetes-based infrastructure and microservices architecture.
- Build and manage containerized applications using Docker and deploy them via Kubernetes.
- Develop custom Kubernetes controllers/operators using Go or Python as needed.
- Implement CI/CD pipelines integrated with Kubernetes for automated testing and deployment.
- Collaborate with DevOps and cloud infrastructure teams to ensure secure and scalable solutions.
- Troubleshoot and optimize performance, availability, and reliability of Kubernetes clusters.
- Monitor cluster health and implement observability tools (Prometheus, Grafana, etc.).
- Ensure best practices around Kubernetes security, networking, and configuration management.
- Contribute to internal documentation, design reviews, and knowledge-sharing sessions.

Required Qualifications:
- 7+ years of experience in software development or infrastructure engineering.
- Minimum 4 years of hands-on experience with Kubernetes in production environments.
- Proficiency with container technologies (Docker) and Kubernetes orchestration.
- Experience with Helm, Kustomize, and Kubernetes Operators.
- Strong knowledge of Linux systems and shell scripting.
- Experience in at least one programming language (Go, Python, or Java preferred).
- Familiarity with infrastructure-as-code tools like Terraform, Pulumi, or Ansible.
- Working knowledge of cloud platforms such as AWS, GCP, or Azure.
- Understanding of service mesh (Istio, Linkerd) and ingress controllers (NGINX, Traefik).

Preferred Qualifications:
- Certified Kubernetes Administrator (CKA) or Certified Kubernetes Application Developer (CKAD).
- Experience with GitOps tools (ArgoCD, Flux).
- Familiarity with DevSecOps practices and security hardening.
- Exposure to event-driven architectures and message brokers (Kafka, NATS).
- Strong understanding of networking, DNS, load balancing, and security in Kubernetes.
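The "custom Kubernetes controllers/operators" this posting asks for all revolve around one pattern: a reconcile loop that diffs desired state against observed state and emits the actions needed to converge. A minimal, framework-free sketch of that pattern (replica counts as plain dicts; real operators would use client-go or kopf, and the workload names here are made up):

```python
def reconcile(desired, observed):
    """One pass of a controller's reconcile loop: diff desired vs
    observed replica counts and return the actions needed to converge,
    the core pattern behind Kubernetes operators."""
    actions = []
    for name, want in desired.items():
        have = observed.get(name, 0)
        if have < want:
            actions.append(("scale_up", name, want - have))
        elif have > want:
            actions.append(("scale_down", name, have - want))
    for name in observed:
        if name not in desired:
            actions.append(("delete", name, observed[name]))
    return actions

desired = {"web": 3, "worker": 2}
observed = {"web": 1, "cache": 1}
print(reconcile(desired, observed))
# [('scale_up', 'web', 2), ('scale_up', 'worker', 2), ('delete', 'cache', 1)]
```

A key property worth noting: the loop is level-based, not edge-based. Running it again after the actions apply produces an empty action list, so missed events are self-healing.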
Posted 2 weeks ago
0 years
0 Lacs
India
Remote
Step into the world of AI innovation with the Experts Community of Soul AI (by Deccan AI). We are looking for India’s top 1% Platform Engineers for a unique job opportunity to work with industry leaders.

Who can be a part of the community?
We are looking for Platform Engineers focused on building scalable, high-performance AI/ML platforms. A strong background in cloud architecture, distributed systems, Kubernetes, and infrastructure automation is expected. If you have experience in this field, this is your chance to collaborate with industry leaders.

What’s in it for you?
- Pay above market standards
- A contract-based role with project timelines from 2-12 months, or freelancing
- Membership in an elite community of professionals who can solve complex AI challenges
- Work location: remote (highly likely), onsite at a client location, or Deccan AI’s office in Hyderabad or Bangalore

Responsibilities:
- Architect and maintain scalable cloud infrastructure on AWS, GCP, or Azure using tools like Terraform and CloudFormation.
- Design and implement Kubernetes clusters with Helm, Kustomize, and a service mesh (Istio, Linkerd).
- Develop CI/CD pipelines using GitHub Actions, GitLab CI/CD, Jenkins, and Argo CD for automated deployments.
- Implement observability solutions (Prometheus, Grafana, ELK stack) for logging, monitoring, and tracing; automate infrastructure provisioning with tools like Ansible, Chef, and Puppet; and optimize cloud costs and security.

Required Skills:
- Expertise in cloud platforms (AWS, GCP, Azure) and infrastructure as code (Terraform, Pulumi), with strong knowledge of Kubernetes, Docker, CI/CD pipelines, and scripting (Bash, Python).
- Experience with observability tools (Prometheus, Grafana, ELK stack) and security practices (RBAC, IAM).
- Familiarity with networking (VPC, load balancers, DNS) and performance optimization.

Nice to have: experience with chaos engineering (Gremlin, LitmusChaos) and canary or blue-green deployments.
Knowledge of multi-cloud environments, FinOps, and cost optimization strategies is also a plus.

What are the next steps?
1. Register on our Soul AI website.
2. Our team will review your profile.
3. Clear all the screening rounds: complete the assessments once you are shortlisted. As soon as you pass all the screening rounds (assessments and interviews), you will be added to our Expert Community!
4. Profile matching and project allocation: be patient while we align your skills and preferences with the available projects.

Skip the noise. Focus on opportunities built for you!
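The canary and blue-green deployments listed as nice-to-haves above share a simple control loop: shift a growing slice of traffic to the new release, gated by a health check, and roll back on the first failure. A minimal sketch under assumed numbers (the weight steps and error rates are invented for illustration):

```python
def canary_rollout(steps, healthy):
    """Walk canary weight steps (percent of traffic), promoting only
    while the health check passes; roll back to 0% on the first
    failure. Returns the final canary weight."""
    weight = 0
    for step in steps:
        if not healthy(step):
            return 0  # rollback: stable release takes 100% again
        weight = step
    return weight

# Hypothetical error-rate lookup per canary weight (percent).
errors = {5: 0.1, 25: 0.2, 50: 2.5, 100: 0.0}
print(canary_rollout([5, 25, 50, 100], lambda w: errors[w] < 1.0))  # 0 (rolls back at 50%)
```

Tools like Argo Rollouts automate exactly this gate-and-promote loop; the sketch only shows the decision logic, not the traffic shifting itself.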
Posted 2 weeks ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description
Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.

HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Kubernetes Platform Engineer/Consultant Specialist.

In this role, you will:
- Build and manage the HSBC GKE Kubernetes platform to let application teams deploy to Kubernetes easily.
- Mentor and guide support engineers, and represent the platform technically through talks, blog posts and discussions.
- Engineer solutions on the HSBC GKE Kubernetes platform using coding, automation and infrastructure-as-code methods (e.g. Python, Tekton, Flux, Helm, Terraform, …).
- Manage a fleet of GKE clusters from a centrally provided solution.
- Ensure compliance with centrally defined security controls and with operational risk standards (e.g. network, firewall, OS, logging, monitoring, availability, resiliency and containers).
- Ensure good change management practice is implemented as specified by central standards.
- Provide impact assessments where requested for changes proposed on the HSBC GCP core platform.
- Build and support continuous integration (CI), continuous delivery (CD) and continuous testing activities.
- Carry out engineering activities to implement patches for VMs and containers provided centrally.
- Support non-functional testing.
- Update support and operational documentation as required.
- Fault-find and support application teams.
- On a rotational on-call basis, provide out-of-business-hours support as part of our 24x7 coverage.

Requirements
To be successful in this role, you should meet the following requirements:
- Demonstrable Kubernetes and cloud-native experience: building, configuring and extending Kubernetes platforms.
- Automation scripting (using languages such as Terraform, Python, etc.).
- Experience working with continuous integration (CI), continuous delivery (CD) and continuous testing tools.
- Experience working with Kubernetes resource configuration tooling (Helm, Kustomize, kpt).
- Experience working within an Agile environment.
- Programming experience in one or more of the following languages: Python or Go.
- Ability to quickly acquire new skills and tools.

You’ll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.

Issued by HSBC Software Development India
Posted 3 weeks ago
4.0 - 6.0 years
6 - 10 Lacs
Bengaluru
Work from Office
Responsibilities:
- Execute shell scripts for seamless automation and system management.
- Implement infrastructure as code using Terraform for AWS, with Kubernetes, Helm, Kustomize, and kubectl.
- Oversee AWS security groups and VPC configurations, and utilize Aviatrix for efficient network orchestration.
- Contribute to the OpenTelemetry Collector for enhanced observability.
- Implement microsegmentation using AWS-native resources and Aviatrix for commercial routes.
- Enforce policies through Open Policy Agent (OPA) integration.
- Develop and maintain comprehensive runbooks for standard operating procedures.
- Utilize packet tracing for network analysis and security optimization.
- Apply OWASP tools and practices for robust web application security.
- Integrate container vulnerability scanning tools seamlessly within CI/CD pipelines.
- Define security requirements for source code repositories, binary repositories, and secrets managers in CI/CD pipelines.
- Collaborate with software and platform engineers to infuse security principles into DevOps teams.
- Regularly monitor and report project status to the management team.

Qualifications:
- Proficient in shell scripting and automation.
- Strong command of Terraform, AWS, Kubernetes, Helm, Kustomize, and kubectl.
- Deep understanding of AWS security practices, VPC configurations, and Aviatrix.
- Familiarity with OpenTelemetry for observability and OPA for policy enforcement.
- Experience in packet tracing for network analysis.
- Practical application of OWASP tools and web application security.
- Integration of container vulnerability scanning tools within CI/CD pipelines.
- Proven ability to define security requirements for source code repositories, binary repositories, and secrets managers in CI/CD pipelines.
- Collaboration expertise with DevOps teams for security integration.
- Regular monitoring and reporting capabilities.
- Site Reliability Engineering experience.
- Hands-on proficiency with source code management tools, especially Git.
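The OPA policy enforcement this posting mentions is, at its core, deny-by-default admission control: a request is admitted only if no policy returns a violation. A hedged sketch of that evaluation model in plain Python (real OPA policies are written in Rego; the registry and tag rules here are invented examples, not HSBC or AWS policy):

```python
def evaluate(policies, request):
    """Deny-by-default admission check in the spirit of OPA: the
    request is allowed only if no policy returns a violation message."""
    violations = [msg for check in policies if (msg := check(request))]
    return {"allowed": not violations, "violations": violations}

def require_registry(req):
    # Violation message if the image is not from the internal registry.
    if not req["image"].startswith("registry.internal/"):
        return "image must come from the internal registry"

def forbid_latest(req):
    # Violation message if the image uses the mutable :latest tag.
    if req["image"].endswith(":latest"):
        return "mutable :latest tag is not allowed"

policies = [require_registry, forbid_latest]
print(evaluate(policies, {"image": "docker.io/nginx:latest"}))
# {'allowed': False, 'violations': ['image must come from the internal registry',
#  'mutable :latest tag is not allowed']}
```

Collecting every violation, rather than stopping at the first, mirrors how admission controllers report back so users can fix all problems in one pass.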
Posted 3 weeks ago
30.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Today’s world is crime-riddled. Criminals are everywhere, invisible, virtual and sophisticated. Traditional ways to prevent and investigate crime and terror are no longer enough. Technology is changing incredibly fast. The criminals know it, and they are taking advantage. We know it too. For nearly 30 years, the incredible minds at Cognyte around the world have worked closely together and put their expertise to work, to keep up with constantly evolving technological and criminal trends, and help make the world a safer place with leading investigative analytics software solutions. We are defined by our dedication to doing good, and this translates to business success, meaningful work friendships, a can-do attitude, and deep curiosity. So, if you rock at DevSecOps and being a technical expert, and want in on the action, let’s talk!

Role Overview:
This role focuses on integrating security best practices into CI/CD pipelines and production system deployments, ensuring security is embedded throughout the software development lifecycle. As a DevSecOps Engineer, you will work closely with architecture, development, and operations teams to make security a shared responsibility across all stages of software development and deployment. Your primary responsibility will be implementing security best practices, testing, and automation tools in CI/CD pipelines and production environments using industry-standard tools such as Static Application Security Testing (SAST), Dynamic Application Security Testing (DAST), and other security mechanisms.

Key Responsibilities:
- Security integration into DevOps: collaborate with development and operations teams to integrate security practices into every stage of the software development lifecycle, from code creation to deployment.
- CI/CD pipeline security: configure, implement, and manage security tools and automation in CI/CD pipelines to detect vulnerabilities early in the development process.
- Security testing: use SAST and DAST tools to automate security testing for code and applications. Continuously monitor security scans, report findings, and recommend remediation strategies.
- Automation and process improvement: continuously enhance and automate security processes to deliver secure software efficiently while minimizing manual intervention.

Experience Required:
- 3+ years of experience in DevOps or a similar role focused on integrating security into CI/CD processes.
- Proven experience implementing and configuring security tools such as SAST, DAST, and other automation tools.
- Strong hands-on experience with CI/CD tools and languages (e.g., Jenkins, Groovy, Git, Python, Bash) for pipeline automation.
- Proficiency in cloud-native deployments and management (e.g., Helm, Kustomize), Kubernetes objects, and cluster debugging.
- Familiarity with Infrastructure as Code (IaC) tools like Terraform and Ansible.
- Knowledge of CIS benchmark recommendations and system hardening practices.

Technical Skills:
- Proficiency in programming/scripting languages (e.g., Python, Bash, Groovy, Ansible, Helm) for automation.
- In-depth knowledge of security vulnerabilities (e.g., OWASP Top 10) and mitigation best practices.
- Experience with vulnerability scanning and static and dynamic application security testing tools (e.g., SonarQube, Checkmarx, OWASP ZAP, Coverity, Lint).
- Familiarity with on-premises cloud platforms (e.g., OpenShift, Tanzu), public cloud platforms (AWS, Azure, GCP), and their security configurations.

Soft Skills:
- Strong communication skills to collaborate effectively with cross-functional teams.
- A problem-solving mindset with the ability to quickly troubleshoot and resolve security issues.
- A proactive and collaborative approach to fostering a security-first mindset across the organization.

We believe that diverse teams drive the greatness of ideas, products, and companies.
Whatever your race, gender, age, creed, or taste in music – if you’ve got the drive, commitment, and dedication to be the best, do your best, and work with the best, then come join us. We’re waiting for you. Curious? Apply now.
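The SAST tooling this role centers on boils down to matching known-bad code patterns against source and reporting each finding with its location. A deliberately tiny sketch of that pass (the two rules and the sample snippet are invented; production tools such as SonarQube or Checkmarx use far richer analyses than regexes):

```python
import re

# Hypothetical, deliberately tiny rule set for illustration only.
RULES = [
    ("hardcoded-secret",
     re.compile(r"(password|secret)\s*=\s*['\"]\w+['\"]", re.I)),
    ("eval-call", re.compile(r"\beval\s*\(")),
]

def scan(source):
    """Return (rule_id, line_number) findings for each rule match:
    the shape of a minimal SAST pass over a single file."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for rule_id, pattern in RULES:
            if pattern.search(line):
                findings.append((rule_id, lineno))
    return findings

sample = 'password = "hunter2"\nresult = eval(user_input)\n'
print(scan(sample))  # [('hardcoded-secret', 1), ('eval-call', 2)]
```

Reporting line numbers rather than just rule hits is what lets CI pipelines annotate the exact diff lines that introduced a finding.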
Posted 3 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Harness is a high-growth company that is disrupting the software delivery market. Our mission is to enable the 30 million software developers in the world to deliver code to their users reliably, efficiently, securely and quickly, increasing customers’ pace of innovation while improving the developer experience. We offer solutions for every step of the software delivery lifecycle to build, test, secure, deploy and manage reliability, feature flags and cloud costs. The Harness Software Delivery Platform includes modules for CI, CD, Cloud Cost Management, Feature Flags, Service Reliability Management, Security Testing Orchestration, Chaos Engineering, Software Engineering Insights and continues to expand at an incredibly fast pace. Harness is led by technologist and entrepreneur Jyoti Bansal, who founded AppDynamics and sold it to Cisco for $3.7B. We’re backed with $425M in venture financing from top-tier VC and strategic firms, including J.P. Morgan, Capital One Ventures, Citi Ventures, ServiceNow, Splunk Ventures, Norwest Venture Partners, Adage Capital Partners, Balyasny Asset Management, Gaingels, Harmonic Growth Partners, Menlo Ventures, IVP, Unusual Ventures, GV (formerly Google Ventures), Alkeon Capital, Battery Ventures, Sorenson Capital, Thomvest Ventures and Silicon Valley Bank. Position Summary In this role, you will be working with internal and external stakeholders to architect, design and implement DevSecOps, FinOps and Engineering Excellence solutions for enterprise customers. You will have an opportunity to work with Harness Engineering and various customer functions, such as DevOps, SRE, Cloud, Finance and Engineering Analytics teams. You will develop best practices and automations to streamline Harness platform deployments in the most efficient, scalable, repeatable and reliable manner possible. We're a high-growth company on a once-in-a-lifetime journey to revolutionize engineering deployment tools & continuous delivery. 
Key Responsibilities
- Engage with our customers' technical teams to analyze and understand current DevSecOps/CI/CD/policy and template governance tools and processes.
- Architect and implement an optimized Harness setup for integration, scale, and repeatability.
- Interface with the customer's executive and leadership teams to understand the technical goals and business objectives related to their CI/CD process, design their Harness implementation to best fit those requirements, and correlate the technical success criteria to the business requirements.
- Provide positive anecdotes from each engagement, craft best practices around customer implementations, convert them into automation, and create reference patterns.
- Document and implement processes and solutions employed for onboarding success, for the purpose of internal enablement.
- Contribute to product design, assist in the Harness Community, and help build out an advanced technical knowledge base.
- Consult on DevSecOps/CI/CD best practices, processes, solutions, etc.
- Interact with customers on a professional, meaningful and technically deep level.
- Work closely with pre-sales and post-sales teams to ensure that Harness customers are successful and experience a high level of customer satisfaction with the Harness solution.

About You
- BA/BS degree in CS or a Computer Engineering-related field with 3+ years of relevant experience.
- 5+ years of experience with DevOps, including several of the following solutions (preferred): Kubernetes, Jenkins, GitHub, GitLab, Bamboo, TeamCity, TravisCI, Bitbucket, Jira, ServiceNow, Helm, Kustomize, PCF, OpenShift, AWS, GCP, Azure, Terraform, CloudFormation, Linux, Python, Bash, PowerShell, AppDynamics, New Relic, Dynatrace, Instana, Prometheus, ELK, Splunk, Sumo Logic, etc.
- Experience delivering custom solutions to customers of all sizes, whether internal or external (external customer-facing experience a plus).
You are a perpetual learner who thrives in a team setting, enjoys sharing your experience and solutions, consistently pursues excellence in all your tasks, and is detail-oriented and analytical, with excellent written and verbal communication skills
A results-driven individual with a hunger for achievement in fast-paced environments and a knack for optimizing processes
Willingness to travel up to 25%

What You Will Have at Harness
Experience building a transformative product
End-to-end ownership of your projects
Competitive salary
Comprehensive healthcare benefits
Flexible work schedule
Paid Time Off and Parental Leave
Monthly, quarterly, and annual social and team-building events
Monthly internet reimbursement

Harness in the News
Harness Grabs a $150M Line of Credit
Welcome Split!
SF Business Times - 2024 - 100 Fastest-Growing Private Companies in the Bay Area
Forbes - 2024 America's Best Startup Employers
SF Business Times - 2024 Fastest Growing Private Companies Awards
Fast Co - 2024 100 Best Workplaces for Innovators

All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, or national origin.

Note on Fraudulent Recruiting/Offers
We have become aware that there may be fraudulent recruiting attempts being made by people posing as representatives of Harness. These scams may involve fake job postings, unsolicited emails, or messages claiming to be from our recruiters or hiring managers. Please note, we do not ask for sensitive or financial information via chat, text, or social media, and any email communications will come from the domain @harness.io. Additionally, Harness will never ask for any payment, fee, or purchase to be made by a job applicant. All applicants are encouraged to apply directly to our open jobs via our website. Interviews are generally conducted via Zoom video conference unless the candidate requests other accommodations.
If you believe that you have been the target of an interview/offer scam by someone posing as a representative of Harness, please do not provide any personal or financial information and contact us immediately at security@harness.io. You can also find additional information about this type of scam and report any fraudulent employment offers via the Federal Trade Commission's website (https://consumer.ftc.gov/articles/job-scams), or you can contact your local law enforcement agency.
Posted 3 weeks ago
5 years
0 Lacs
Kochi, Kerala, India
On-site
Job Position: Full Stack Development
Location: Kochi
Experience: 5+ Years
Notice Period: Immediate Joiner

We are seeking a dynamic and experienced Technical Team Lead to lead our local IT Services Management Team and actively contribute as a Full Stack Developer. This hybrid role blends technical leadership with hands-on development responsibilities, ensuring the stability, scalability, and performance of a critical B2C application. You will be responsible for maintaining application SLAs, managing incidents and changes, driving continuous improvements, and coordinating closely with our Senior Project Manager in Germany. The ideal candidate brings a proactive mindset, strong leadership, and full-stack development experience, especially in React, Spring Boot, and DevOps practices.

Key Responsibilities
Team Leadership & IT Service Management
Oversee daily operations of the IT Services team
Ensure applications meet SLA requirements and customer expectations
Coordinate Support & Enhancement activities, including on-call rotations
Collaborate with international stakeholders, including the German project team
Application Lifecycle & Incident Management
Lead incident resolution and root cause analysis
Coordinate software releases, updates, and change implementations
Apply ITIL best practices to optimize service quality
Software Development
Contribute to full-stack development tasks (Frontend, Backend, and DevOps)
Deliver clean, scalable, and tested code using modern technologies
Support CI/CD pipelines and deployment automation

Required Qualifications
Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field
Experience: Minimum 5 years of professional experience in software development and IT service management

Core Skills & Competencies
Strong understanding of ITIL frameworks
Experience with Agile and Kanban methodologies
Excellent communication and stakeholder management skills in English

Technical Expertise
Frontend (React Ecosystem)
React JS (v17+), React Router, Redux, TanStack, Context
Storybook
Testing: Jest, React Testing Library, Vitest, Cypress
Build tools: Webpack, Vite
Backend (Java & Spring Boot)
Java 21, Spring Boot 3.x, Spring Security, JPA
RESTful APIs, OpenAPI, SOAP, Keycloak, Redis
Azure Services (DevOps, Service Bus, Cache)
DevOps & Infrastructure
Kubernetes (kubeAPI, Kustomize), Docker/Containers
Helm, ArgoCD, Azure DevOps Pipelines
Debugging Tools: Curl, Telnet, Traceroute, OpenSSL
Security: SonarQube, Trivy, Kubernetes & Container Hardening
Azure Cloud Stack (Front Door, WAF, AKS, ACR, Key Vault, etc.)
SSL/TLS & Certificate Management, Let's Encrypt
Linux Shell Scripting
Posted 1 month ago