Jobs
Interviews

64 Infrastructure-as-Code Jobs - Page 2

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

Join our Cyber Tech Assurance team and you will have the opportunity to work in a collaborative and dynamic environment whilst driving transformation initiatives within the organization. At Macquarie, our advantage is bringing together diverse people and empowering them to shape all kinds of possibilities. We are a global financial services group operating in 31 markets with 56 years of unbroken profitability. You'll be part of a friendly and supportive team where everyone, no matter what role, contributes ideas and drives outcomes.

In this role, you will identify potential threats, validate security controls against Macquarie's standards and industry frameworks, and document security assessment results. Additionally, you will collaborate with stakeholders to provide security advisory, assess risk severity, and recommend remediation strategies.

What You Offer:
- 3-5 years in cybersecurity consulting, architecture, or IT auditing, with a preference for strong security engineering expertise
- Proficiency in security architecture, infrastructure-as-code, CI/CD, vulnerability management, and secure application development
- Familiarity with public cloud platforms, containers, Kubernetes, and related technologies
- Knowledge of industry standards (e.g., NIST, COBIT, ISO) and evolving threat landscapes
- Industry-recognized credentials (e.g., CISSP, CISM, SABSA, OSCP, or cloud certifications) are highly valued

We love hearing from anyone inspired to build a better future with us. If you're excited about the role or about working at Macquarie, we encourage you to apply.
Macquarie offers a wide range of benefits, including:
- 1 wellbeing leave day per year
- 26 weeks paid maternity leave or 20 weeks paid parental leave for primary caregivers, along with 12 days of paid transition leave upon return to work and 6 weeks paid leave for secondary caregivers
- Company-subsidised childcare services
- 2 days of paid volunteer leave and donation matching
- Benefits to support your physical, mental and financial wellbeing, including comprehensive medical and life insurance cover, the option to join a parental medical insurance plan, and virtual medical consultations extended to family members
- Access to our Employee Assistance Program, a robust behavioral health network with counseling and coaching services
- Access to a wide range of learning and development opportunities, including reimbursement for professional memberships or subscriptions
- Hybrid and flexible working arrangements, dependent on role
- Reimbursement for work-from-home equipment

Technology enables every aspect of Macquarie, for our people, our customers, and our communities. We're a global team that is passionate about accelerating the digital enterprise, connecting people and data, building platforms and applications, and designing tomorrow's technology solutions. Our commitment to diversity, equity, and inclusion is reflected in our aim to provide reasonable adjustments to individuals who may need support during the recruitment process and through working arrangements. If you require additional assistance, please let us know in the application process.

Posted 3 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Kochi, Kerala

On-site

As an experienced individual with 5+ years of relevant experience, you will be responsible for defining and championing the strategic vision and roadmap for CI/CD pipelines in an offshore role. Your primary focus will be on creating a scalable, resilient, and secure architecture for digital engineering pipelines while promoting DevOps best practices.

Your duties and responsibilities will include leading the architectural design of end-to-end CI/CD workflows, implementing advanced CI/CD pipelines using GitHub Actions, and configuring Azure resources such as Container Apps, Storage, Key Vault, and Networking. You will also integrate and optimize tools like SonarQube for code quality and Snyk for security scanning, develop specialized CI/CD processes for the SAP Hybris e-commerce platform, and automate change management processes with Jira integration. Furthermore, you will design full-stack caching/CDN solutions using Cloudflare, develop and maintain automation scripts and infrastructure-as-code, implement automated testing strategies, embed DevSecOps practices, ensure compliance with security policies and regulatory requirements, and monitor and optimize CI/CD pipeline performance. Collaborating with development, operations, QA, and security teams, you will mentor teams to foster a culture of automation and shared responsibility, and provide expert-level troubleshooting and root cause analysis.

In terms of skills and competencies, you should have expertise in GitHub Actions CI/CD orchestration, Azure proficiency, Snyk security scanning, SonarQube integration, Infrastructure-as-Code, test automation integration, DevSecOps implementation, Jira integration, Cloudflare management, CI/CD monitoring and optimization, troubleshooting, compliance understanding, and cross-functional collaboration. Experience with SAP Hybris CI/CD automation and Docker containerization would be advantageous.
Overall, you will play a critical role in shaping and implementing the CI/CD strategy and architecture, ensuring high-quality, secure, and efficient delivery of digital solutions while fostering a culture of automation, collaboration, and innovation. Please note that the expected onboarding date for this position is September 1, 2025, and the work location is either Trivandrum or Kochi. A degree qualification is also required for this role.
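Pipeline architectures like the one described above usually hinge on an automated quality gate between scanning and promotion. A minimal sketch of that promotion logic, assuming a hypothetical report schema (coverage percentage plus a count of new critical issues; this is not SonarQube's or Snyk's actual API):

```python
def quality_gate(metrics, min_coverage=80.0, max_new_critical=0):
    """Decide whether a build may be promoted past the quality gate.

    `metrics` mimics the shape of a scanner summary (hypothetical schema):
    a coverage percentage and a count of newly introduced critical issues.
    Returns (passed, list_of_failure_reasons).
    """
    failures = []
    coverage = metrics.get("coverage", 0.0)
    if coverage < min_coverage:
        failures.append(f"coverage {coverage:.1f}% < {min_coverage}%")
    new_critical = metrics.get("new_critical_issues", 0)
    if new_critical > max_new_critical:
        failures.append(f"{new_critical} new critical issues")
    return (len(failures) == 0, failures)

# A pipeline step would fail the job when the gate does not pass.
ok, reasons = quality_gate({"coverage": 91.2, "new_critical_issues": 0})
blocked, why = quality_gate({"coverage": 64.0, "new_critical_issues": 2})
```

In a GitHub Actions workflow this would typically run as a script step that exits non-zero when the gate fails, blocking the deploy job.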

Posted 3 weeks ago

Apply

10.0 - 14.0 years

0 Lacs

Thiruvananthapuram, Kerala

On-site

As a Python Solution Architect with over 10 years of experience, you will play a crucial role in designing and implementing scalable, high-performance software solutions that align with business requirements. Your expertise in Python frameworks (e.g., Django, Flask, FastAPI) will be instrumental in architecting efficient applications and microservices architectures. Your responsibilities will include collaborating with cross-functional teams to define architecture, best practices, and oversee the development process. You will be tasked with ensuring that Python solutions meet business goals, align with enterprise architecture, and adhere to security best practices (e.g., OWASP, cryptography). Additionally, your role will involve designing and managing RESTful APIs, optimizing database interactions, and integrating Python solutions seamlessly with third-party services and external systems. Your proficiency in cloud environments (AWS, GCP, Azure) will be essential for architecting solutions and implementing CI/CD pipelines for Python projects. You will provide guidance to Python developers on architectural decisions, design patterns, and code quality, while also mentoring teams on best practices for writing clean, maintainable, and efficient code. Preferred skills for this role include deep knowledge of Python frameworks, proficiency in asynchronous programming, experience with microservices-based architectures, and familiarity with containerization technologies like Docker and orchestration tools like Kubernetes. Your understanding of relational and NoSQL databases, RESTful APIs, cloud services, CI/CD pipelines, and Infrastructure-as-Code tools will be crucial for success in this position. In addition, your experience with security tools and practices, encryption, authentication, data protection standards, and working in Agile environments will be valuable assets. 
Your ability to communicate complex technical concepts to non-technical stakeholders and ensure solutions address both functional and non-functional requirements will be key to delivering successful projects.
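The role above calls out proficiency in asynchronous programming for Python services. A small self-contained sketch of the core idea, fanning out concurrent I/O with `asyncio` (the service names and delays are invented; a real implementation would await an HTTP client instead of `asyncio.sleep`):

```python
import asyncio

async def fetch(service: str, delay: float) -> str:
    # Simulated I/O call; stands in for an awaited HTTP request.
    await asyncio.sleep(delay)
    return f"{service}:ok"

async def gather_status(services):
    # Fan out concurrently instead of awaiting each call in sequence,
    # so total latency approaches the slowest call, not the sum.
    tasks = [fetch(name, 0.01) for name in services]
    return await asyncio.gather(*tasks)

results = asyncio.run(gather_status(["auth", "billing", "catalog"]))
```

Frameworks named in the posting, such as FastAPI, build directly on this model: each request handler is a coroutine scheduled on the same event loop.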

Posted 3 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

You will be responsible for owning the lifecycle of OpenShift clusters, both on-prem and in the public cloud. This includes architecture design, provisioning, upgrades, patching, and ensuring high availability. Additionally, you will administer and support OpenShift Container Platform as well as other Kubernetes-based environments such as EKS, AKS, and GKE.

As part of your role, you will serve as the primary escalation point for complex incidents, ensuring root cause resolution and driving continuous improvement. You will oversee cluster-wide networking, storage integration, ingress/egress configurations, and the secure exposure of workloads. In the event of incidents, you will lead incident response efforts, coordinate stakeholders, and conduct post-mortem reviews with precision. You will also assist in change management for container platform lifecycle events, including OpenShift version upgrades, SysAdmin responsibilities, hotfix deployments, and feature enhancements. Your contribution to Airtel's Enterprise Container Strategy will involve identifying opportunities for performance optimization, availability improvements, and enhanced resiliency.

In the realm of architecture, configuration, and automation, you will architect GitOps-driven CI/CD workflows using tools like Argo CD, Tekton Pipelines, Airflow, Helm, and S2I. You will lead the implementation and optimization of monitoring and alerting systems using technologies such as Prometheus, Grafana, Alertmanager, and the ELK stack. Automation of operational processes using Python, Bash, and Ansible will be crucial to reduce manual toil and enhance system resilience. It will also be your responsibility to ensure configuration consistency using Infrastructure-as-Code tools such as Terraform and Ansible.
Regarding security, compliance, and governance, you will define and enforce security policies including RBAC, Network Policies, Pod Security Policies, and image scanning tools. Leading security assessments, remediating vulnerabilities, and enforcing policies aligned with compliance mandates will be part of your duties. You will collaborate with Information Security (InfoSec) teams to implement audit logging, incident response controls, and container hardening measures.

In the realm of networking, storage, and system integration, you will lead advanced OpenShift networking operations such as ingress controller tuning, multi-tenant isolation, MetalLB, hybrid DNS, service meshes (Istio), and egress control. Integration of persistent storage solutions like Ceph, SAN/NAS, and Object Storage using CSI drivers, dynamic provisioning, and performance tuning will also fall under your purview.
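Policy enforcement of the kind described above is often codified by generating manifests programmatically rather than hand-editing YAML. An illustrative sketch that renders a default-deny-ingress NetworkPolicy as a plain dict (the field names follow the real Kubernetes `networking.k8s.io/v1` schema; the namespace is a placeholder):

```python
def default_deny_policy(namespace: str) -> dict:
    """Render a default-deny-ingress NetworkPolicy manifest as a dict.

    An empty podSelector matches every pod in the namespace; listing
    "Ingress" in policyTypes with no ingress rules denies all inbound
    traffic, the usual baseline before allow-list policies are layered on.
    """
    return {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": "default-deny-ingress", "namespace": namespace},
        "spec": {"podSelector": {}, "policyTypes": ["Ingress"]},
    }

manifest = default_deny_policy("payments")
```

A GitOps workflow (Argo CD, as named above) would commit the serialized manifest to the cluster's config repository rather than applying it imperatively.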

Posted 3 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Kochi, Kerala

On-site

The Max Maintenance team is currently in search of an experienced Principal Software Architect to lead the modernization and cloud transformation of a legacy .NET web application with a SQL Server backend. This role requires a deep understanding of AWS cloud services, including API Gateway, AWS Lambda, Step Functions, DynamoDB, and Neptune, in order to re-architect the system into a scalable, serverless, event-driven platform. The ideal candidate will possess a robust architectural vision, hands-on technical proficiency, and a dedication to mentoring and guiding development teams through digital transformation initiatives. Are you someone who thrives in a fast-paced and dynamic team environment? If so, we invite you to join our diverse and motivated team.

Key Responsibilities:
- Lead the comprehensive cloud transformation strategy for a legacy .NET/SQL Server web application.
- Develop and deploy scalable, secure, and serverless AWS-native architectures with services like API Gateway, AWS Lambda, Step Functions, DynamoDB, and Neptune.
- Establish and execute data migration plans, transitioning relational data models into NoSQL (DynamoDB) and graph-based (Neptune) storage paradigms.
- Set standards for infrastructure-as-code, CI/CD pipelines, and monitoring utilizing AWS CloudFormation, CDK, or Terraform.
- Offer hands-on technical guidance to development teams, ensuring high code quality and compliance with cloud-native principles.
- Assist teams in adopting cloud technologies, service decomposition, and event-driven design patterns.
- Mentor engineers in AWS technologies, microservices architecture, and best practices in DevOps and modern software engineering.
- Develop and evaluate code for critical services, APIs, and data access layers using appropriate languages (e.g., Python, Node.js).
- Create and implement APIs for both internal and external consumers, ensuring secure and dependable integrations.
- Conduct architecture reviews, threat modeling, and enforce strict testing practices, including automated unit, integration, and load testing.
- Collaborate closely with stakeholders, project managers, and cross-functional teams to define technical requirements and delivery milestones.
- Translate business objectives into technical roadmaps and prioritize technical debt reduction and performance enhancements.
- Engage stakeholders to manage expectations and provide clear communication on technical progress and risks.
- Stay informed about AWS ecosystem updates, architectural trends, and emerging technologies.
- Assess and prototype new tools, services, or architectural approaches that can expedite delivery and decrease operational complexity.
- Advocate for a DevOps culture emphasizing continuous delivery, observability, and security-first development.

Requirements:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Minimum of 8 years of software development experience, with at least 3 years focused on architecting cloud-native solutions on AWS.
- Proficiency in AWS services like API Gateway, Lambda, Step Functions, DynamoDB, Neptune, IAM, and CloudWatch.
- Experience in legacy application modernization and cloud migration.
- Strong familiarity with the .NET stack and the ability to map legacy components to cloud-native equivalents.
- Extensive knowledge of distributed systems, serverless design, data modeling (both relational and NoSQL/graph), and security best practices.
- Demonstrated leadership and mentoring skills within agile software teams.
- Exceptional problem-solving, analytical, and decision-making capabilities.

The oil and gas industry's top professionals leverage over 150 years of combined experience every day to assist customers in achieving enduring success.
We Power the Industry that Powers the World: Our family of companies has delivered technical expertise, cutting-edge equipment, and operational assistance across every region and aspect of drilling and production, ensuring current and future success.

Global Family: We operate as a unified global family, comprising thousands of individuals working together to make a lasting impact on ourselves, our customers, and the communities we serve.

Purposeful Innovation: Through intentional business innovation, product development, and service delivery, we are committed to enhancing the industry that powers the world.

Service Above All: Our commitment to anticipating and meeting customer needs drives us to deliver superior products and services promptly and within budget.
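The serverless, event-driven design this listing describes centers on small Lambda handlers behind API Gateway. A hedged sketch of such a handler, showing only the pure request/response logic (the event shape follows the API Gateway proxy integration; the DynamoDB lookup a real service would perform via boto3 is stubbed out as a comment):

```python
import json

def lambda_handler(event, context):
    """Minimal API-Gateway-style handler: echo back a path parameter.

    A production handler would fetch the item from DynamoDB, e.g.
    table.get_item(Key={"id": item_id}) via boto3; that call is
    deliberately omitted so the sketch stays self-contained.
    """
    item_id = (event.get("pathParameters") or {}).get("id")
    if item_id is None:
        return {"statusCode": 400,
                "body": json.dumps({"error": "missing id"})}
    return {"statusCode": 200, "body": json.dumps({"id": item_id})}

# Invoked locally with a sample proxy event; on AWS, the runtime
# supplies both the event and a real context object.
resp = lambda_handler({"pathParameters": {"id": "42"}}, None)
```

Keeping handlers this thin, with data access behind a separate layer, is what makes the unit and integration testing demanded above practical.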

Posted 3 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

Haryana

On-site

The Senior Architect - Cloud Cybersecurity role at United Airlines Business Services Pvt. Ltd involves supporting the maturing of cloud security capabilities for United Airlines. As part of a cross-disciplinary Cyber team, you will be responsible for full-stack security architecture, including the creation and maintenance of security documentation. Collaborating with various teams, you will ensure the adoption of security best practices throughout the application lifecycle. Additionally, you will conduct threat modeling, develop cloud security policies, and act as a subject matter expert in various technology domains.

Key Responsibilities:
- Develop security documentation and partner with teams to ensure the adoption of security best practices
- Conduct threat modeling and design cloud security policies for different types of clouds
- Act as a subject matter expert in multiple technology domains
- Partner with stakeholders to understand business requirements and develop security principles
- Develop security architecture strategies and collaborate with cybersecurity teams
- Define and implement security standards and frameworks
- Train and coach development and engineering teams on integrating security tools
- Champion secure infrastructure-as-code for the success of cloud environments
- Lead continuous improvement efforts and mentor junior team members

Minimum Qualifications:
- Bachelor's degree
- 6+ years of technical experience related to cloud
- Working knowledge of cloud service providers
- Proficiency in cloud security frameworks and best practices
- Knowledge of security protocols, cryptography, and network architectures
- Proficiency in scripting and automation tools
- Understanding of IAM, network security, and data encryption
- Knowledge of compliance standards
- Proficiency in security automation and orchestration
- Ability to work independently and excellent communication skills
- Legally authorized to work in India without sponsorship
- Fluent in English

Preferred Qualifications:
- Master's degree
- Relevant certifications (e.g., CCIE, CISSP)
- AWS Solution Architect Pro., Networking, and Security Specializations
- Experience in Kubernetes security
- 8+ years of technical experience, with 5 years in cloud technology

Successful completion of the interview process is necessary for this role. This position does not offer expatriate assignments or sponsorship for employment visas.
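Defining and enforcing security standards, as this role requires, often includes automated review of access policies. An illustrative sketch that flags overly broad statements in an IAM-style policy document (the JSON shape follows AWS's policy grammar; the single wildcard rule is a simplification of what real linters such as IAM Access Analyzer check):

```python
def overly_broad_statements(policy: dict) -> list:
    """Return Allow statements that grant wildcard actions or resources."""
    flagged = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        # Action/Resource may be a single string or a list in IAM JSON.
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = stmt.get("Resource", [])
        resources = [resources] if isinstance(resources, str) else resources
        if "*" in actions or "*" in resources:
            flagged.append(stmt)
    return flagged

policy = {"Version": "2012-10-17", "Statement": [
    {"Effect": "Allow", "Action": "s3:GetObject",
     "Resource": "arn:aws:s3:::logs/*"},
    {"Effect": "Allow", "Action": "*", "Resource": "*"},
]}
flagged = overly_broad_statements(policy)
```

A check like this would typically run in CI against infrastructure-as-code output before any policy reaches a cloud account.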

Posted 3 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Chandigarh

On-site

We are seeking a Cloud Transition Engineer to implement cloud infrastructure and services as per approved architecture designs, ensuring a smooth transition of these services into operational support. You will play a crucial role in bridging the gap between design and operations, ensuring the efficient, secure, and fully supported delivery of new or modified services. Collaborating closely with cross-functional teams, you will validate infrastructure builds, coordinate deployment activities, and ensure all technical and operational requirements are met. This position is vital in maintaining service continuity and enabling scalable, cloud-first solutions across the organization.

Your responsibilities in this role will include:
- Implementing Azure infrastructure and services based on architectural specifications
- Building, configuring, and validating cloud environments to meet project and operational needs
- Collaborating with various teams to ensure smooth service transitions
- Creating and maintaining user documentation
- Conducting service readiness assessments
- Facilitating knowledge transfer and training for support teams
- Identifying and mitigating risks related to service implementation and transition
- Ensuring compliance with internal standards, security policies, and governance frameworks
- Supporting automation and deployment using tools like ARM templates, Bicep, or Terraform
- Participating in post-transition reviews and continuous improvement efforts
To be successful in this role, you should possess:
- A Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field (or equivalent experience)
- Proven experience in IT infrastructure or cloud engineering roles with a focus on Microsoft Azure
- Demonstrated experience in implementing and transitioning cloud-based solutions in enterprise environments
- Proficiency in infrastructure-as-code tools such as ARM templates, Bicep, or Terraform
- Hands-on experience with CI/CD pipelines and deployment automation
- A proven track record of working independently on complex tasks while effectively collaborating with cross-functional teams

Preferred qualifications that set you apart include strong documentation, troubleshooting, and communication skills; Microsoft Azure certifications (e.g., Azure Administrator Associate, Azure Solutions Architect); and experience mentoring junior engineers or leading technical workstreams.

At Emerson, we prioritize a workplace where every employee is valued, respected, and empowered to grow. We foster an environment that encourages innovation, collaboration, and diverse perspectives because we believe that great ideas come from great teams. Our commitment to ongoing career development and cultivating an inclusive culture ensures you have the support to thrive. Whether through mentorship, training, or leadership opportunities, we invest in your success so you can make a lasting impact. We believe diverse teams working together are key to driving growth and delivering business results.
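The governance and readiness checks this role describes are commonly automated as simple rules over a resource inventory. A hedged sketch validating that deployed resources carry required tags (the inventory shape and tag keys here are invented for illustration, not the Azure SDK's resource model):

```python
REQUIRED_TAGS = {"owner", "environment", "cost-center"}

def untagged_resources(resources):
    """Return (name, missing_tag_keys) for resources failing tag policy."""
    failures = []
    for res in resources:
        missing = REQUIRED_TAGS - set((res.get("tags") or {}).keys())
        if missing:
            failures.append((res["name"], sorted(missing)))
    return failures

# Hypothetical inventory, as might be exported from a subscription scan.
inventory = [
    {"name": "vm-app-01",
     "tags": {"owner": "platform", "environment": "prod",
              "cost-center": "cc-1"}},
    {"name": "st-logs", "tags": {"owner": "platform"}},
]
violations = untagged_resources(inventory)
```

In practice the same rule would be expressed declaratively (for example as an Azure Policy definition), with a script like this serving as a pre-transition readiness report.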

Posted 3 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Haryana

On-site

As an Azure Cloud Developer with approximately 5 years of experience, you will be responsible for developing and maintaining cloud-based applications utilizing Microsoft Azure. Your primary focus will be on creating scalable solutions by leveraging Azure services such as Azure App Services, Functions, Storage, and Azure SQL. You will also collaborate closely with cross-functional teams to integrate CI/CD pipelines and streamline deployment processes.

Your key responsibilities will include:
- Utilizing your expertise in Azure services like App Services, Functions, Storage, and SQL to develop efficient and reliable cloud-based applications.
- Implementing CI/CD pipelines using tools such as Azure DevOps or GitHub Actions to automate the deployment process.
- Demonstrating proficiency in infrastructure-as-code concepts, including ARM, Bicep, and YAML, to manage and configure Azure resources effectively.
- Having a foundational understanding of containerization technologies like Docker, with AKS experience considered a plus.
- Showcasing strong coding and debugging skills, particularly in .NET Core, to ensure the robustness and performance of developed applications.

Overall, as an Azure Cloud Developer, you will play a crucial role in designing, building, and maintaining cutting-edge cloud solutions on the Microsoft Azure platform. Your ability to work collaboratively, adapt to evolving technologies, and deliver high-quality code will be essential to the success of our projects.
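Reliable cloud applications of the kind described above routinely retry transient failures (throttling, brief network errors) with exponential backoff. A small stdlib-only sketch of that pattern (delays are shortened for illustration; real Azure SDK clients ship their own retry policies, so this is the generic idea rather than any library's API):

```python
import time

def with_retries(fn, attempts=3, base_delay=0.01):
    """Call fn, retrying on exception with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the original error
            time.sleep(base_delay * (2 ** attempt))

# A deliberately flaky callable: fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

result = with_retries(flaky)
```

In production code the bare `except Exception` would be narrowed to the specific transient error types the called service can raise.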

Posted 3 weeks ago

Apply

10.0 - 14.0 years

0 Lacs

Haryana

On-site

As a Digital Product Engineering company, Nagarro is seeking a talented individual to join our dynamic and non-hierarchical work culture as a Data Engineer. With over 17,500 experts across 39 countries, we are scaling in a big way and are looking for someone with 10+ years of total experience to contribute to our team.

**Requirements:**
- Strong working experience in Data Engineering and Big Data platforms.
- Hands-on experience with Python and PySpark.
- Expertise with AWS Glue, including Crawlers and Data Catalog.
- Experience with Snowflake and a strong understanding of AWS services such as S3, Lambda, Athena, SNS, and Secrets Manager.
- Familiarity with Infrastructure-as-Code (IaC) tools like CloudFormation and Terraform is preferred.
- Strong experience with CI/CD pipelines, preferably using GitHub Actions, is a plus.
- Working knowledge of Agile methodologies, JIRA, and GitHub version control.
- Exposure to data quality frameworks, observability, and data governance tools and practices is advantageous.
- Excellent communication skills and the ability to collaborate effectively with cross-functional teams.

**Responsibilities:**
- Writing and reviewing high-quality code to meet technical requirements.
- Understanding clients' business use cases and converting them into technical designs.
- Identifying and evaluating different solutions to meet clients' requirements.
- Defining guidelines and benchmarks for Non-Functional Requirements (NFRs) during project implementation.
- Developing design documents explaining the architecture, framework, and high-level design of applications.
- Reviewing architecture and design aspects such as extensibility, scalability, security, design patterns, user experience, and NFRs.
- Designing overall solutions for defined functional and non-functional requirements and defining technologies, patterns, and frameworks.
- Relating technology integration scenarios and applying learnings in projects.
- Resolving issues raised during code review through systematic analysis of the root cause.
- Conducting Proofs of Concept (POCs) to ensure suggested designs and technologies meet requirements.

**Qualifications:**
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.

If you are passionate about Data Engineering, experienced in working with Big Data platforms, proficient in Python and PySpark, and have a strong understanding of AWS services and Infrastructure-as-Code tools, we invite you to join Nagarro and be part of our innovative team.
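The data quality frameworks mentioned in the requirements typically reduce to column-level rules such as null-rate thresholds. A hedged pure-Python sketch of one such rule (a real pipeline would express this in PySpark or a dedicated framework; the rows and threshold here are invented):

```python
def null_rate(rows, column):
    """Fraction of rows where `column` is None (0.0 for empty input)."""
    if not rows:
        return 0.0
    nulls = sum(1 for r in rows if r.get(column) is None)
    return nulls / len(rows)

def check_max_null_rate(rows, column, threshold=0.1):
    """Return (passed, observed_rate) for a max-null-rate rule."""
    rate = null_rate(rows, column)
    return rate <= threshold, rate

rows = [{"id": 1, "email": "a@x"}, {"id": 2, "email": None},
        {"id": 3, "email": "c@x"}, {"id": 4, "email": "d@x"}]
passed, rate = check_max_null_rate(rows, "email", threshold=0.3)
```

In a Glue or PySpark job the same rule would run on a DataFrame aggregate, with failures routed to the observability tooling the posting mentions.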

Posted 3 weeks ago

Apply

10.0 - 18.0 years

0 Lacs

Indore, Madhya Pradesh

On-site

You should possess a BTech degree in computer science, engineering, or a related field of study, or have 12+ years of related work experience. Additionally, you should have at least 7 years of design and implementation experience with large-scale data-centric distributed applications. It is essential to have professional experience in architecting and operating cloud-based solutions, with a good understanding of core disciplines such as compute, networking, storage, security, and databases. A strong grasp of data engineering concepts like storage, governance, cataloging, data quality, and data modeling is required, as is familiarity with architecture patterns such as data lake, data lakehouse, and data mesh. You should have a good understanding of Data Warehousing concepts and hands-on experience with tools like Hive, Redshift, Snowflake, and Teradata. Experience in migrating or transforming legacy customer solutions to the cloud is highly valued. Moreover, experience working with services like AWS EMR, Glue, DMS, Kinesis, RDS, Redshift, DynamoDB, DocumentDB, SNS, SQS, Lambda, EKS, and DataZone is necessary. A thorough understanding of Big Data ecosystem technologies such as Hadoop, Spark, Hive, and HBase, along with other relevant tools and technologies, is expected. Knowledge of designing analytical solutions using AWS cognitive services like Textract, Comprehend, Rekognition, and SageMaker is advantageous. You should also have experience with modern development workflows such as Git, continuous integration/continuous deployment pipelines, static code analysis tooling, and infrastructure-as-code. Proficiency in a programming or scripting language like Python, Java, or Scala is required. An AWS Professional/Specialty certification or equivalent cloud expertise is a plus.
In this role, you will be responsible for driving innovation within the Data Engineering domain by designing reusable and reliable accelerators, blueprints, and libraries. You should be capable of leading a technology team, fostering an innovative mindset, and enabling fast-paced deliveries. Adapting to new technologies, learning quickly, and managing high ambiguity are essential skills for this position. You will collaborate with business stakeholders, participate in architectural, design, and status calls, and demonstrate good presentation skills when interacting with executives, IT management, and developers. Furthermore, you will drive technology/software sales or pre-sales consulting discussions, ensure end-to-end ownership of tasks, and maintain high-quality software development with complete documentation and traceability. Fulfilling organizational responsibilities, sharing knowledge and experience with other teams and groups, conducting technical training sessions, and producing whitepapers, case studies, and blogs are also part of this role. The ideal candidate should have 10 to 18 years of experience and can reference this job with the number 12895.
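Data lake layouts like those this role covers commonly use Hive-style partitioning so that query engines can prune partitions. An illustrative key builder for such a layout (the bucket prefix and partition columns are hypothetical):

```python
from datetime import date

def partition_key(prefix: str, dt: date, source: str) -> str:
    """Build a Hive-style partitioned object key, e.g. for S3.

    Engines such as Spark, Hive, or Athena can skip whole partitions
    when paths encode column=value pairs like this.
    """
    return (f"{prefix}/source={source}/"
            f"year={dt.year}/month={dt.month:02d}/day={dt.day:02d}/")

key = partition_key("datalake/raw/orders", date(2025, 9, 1), "web")
```

Choosing partition columns that match the dominant query filters (here, ingestion date and source) is what makes the pruning effective; over-partitioning on high-cardinality columns has the opposite effect.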

Posted 4 weeks ago

Apply

1.0 - 5.0 years

0 Lacs

Punjab

On-site

ABOUT XENONSTACK

XenonStack is the fastest-growing data and AI foundry for agentic systems, enabling people and organizations to gain real-time and intelligent business insights. We are dedicated to building Agentic Systems for AI Agents with Akira.ai, developing the Vision AI Platform with XenonStack.ai, and providing Inference AI Infrastructure for Agentic Systems through Nexastack.ai.

THE OPPORTUNITY

We are seeking an experienced Associate DevOps Engineer with 1-3 years of experience in implementing and reviewing CI/CD pipelines, cloud deployments, and automation tasks. If you have a strong foundation in cloud technologies, containerization, and DevOps best practices, we would love to have you on our team.

JOB ROLES AND RESPONSIBILITIES
- Develop and maintain CI/CD pipelines to automate the deployment and testing of applications across multiple cloud platforms (AWS, Azure, GCP).
- Assist in deploying applications and services to cloud environments while ensuring optimal configuration and security practices.
- Implement monitoring solutions to ensure infrastructure health and performance; troubleshoot issues as they arise in production environments.
- Automate repetitive tasks and manage cloud infrastructure using tools like Terraform, CloudFormation, and scripting languages (Python, Bash).
- Work closely with software engineers to integrate deployment pipelines with application codebases and streamline workflows.
- Ensure efficient resource management in the cloud, monitor costs, and optimize usage to reduce waste.
- Create detailed documentation for DevOps processes, deployment procedures, and troubleshooting steps to ensure clarity and consistency across the team.

SKILLS REQUIREMENTS
- 1-3 years of experience in DevOps or cloud infrastructure engineering.
- Proficiency in cloud platforms like AWS, Azure, or GCP and hands-on experience with their core services (EC2, S3, RDS, Lambda, etc.).
- Advanced knowledge of CI/CD tools such as Jenkins, GitLab CI, or CircleCI, and hands-on experience implementing and managing CI/CD pipelines.
- Experience with containerization technologies like Docker and Kubernetes for deploying applications at scale.
- Strong knowledge of Infrastructure-as-Code (IaC) using tools like Terraform or CloudFormation.
- Proficiency in scripting languages such as Python and Bash for automating infrastructure tasks and deployments.
- Understanding of monitoring and logging tools like Prometheus, Grafana, the ELK Stack, or CloudWatch to ensure system performance and uptime.
- Strong understanding of Linux-based operating systems and cloud-based infrastructure management.
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 1-3 years of hands-on experience working in a DevOps or cloud engineering role.

CAREER GROWTH AND BENEFITS

Continuous Learning & Growth: Access to training, certifications, and hands-on sessions to enhance your DevOps and cloud engineering skills, with opportunities for career advancement and leadership roles in DevOps engineering.

Recognition & Rewards: Performance-based incentives and regular feedback to help you grow in your career, plus special recognition for contributions towards streamlining and improving DevOps practices.

Work Benefits & Well-Being: Comprehensive health insurance and wellness programs to ensure a healthy work-life balance, cab facilities for women employees, and additional allowances for project-based tasks.

XENONSTACK CULTURE - JOIN US & MAKE AN IMPACT

Here at XenonStack, we have a culture of cultivation with bold, courageous, and human-centric leadership principles. We value obsession and deep work in everything we do. We are on a mission to disrupt and reshape the category and welcome people with that mindset and ambition. If you are energised by the idea of shaping the future of AI in business processes and enterprise systems, there's nowhere better for you than XenonStack.
Product Value and Outcome - Simplifying the user experience with AI Agents and Agentic AI - Obsessed with Adoption: We design everything with the goal of making AI more accessible and simplifying the business processes and enterprise systems essential to adoption. - Obsessed with Simplicity: We simplify even the most complex challenges to create seamless, intuitive experiences with AI agents and Agentic AI. Be a part of XenonStack's Vision and Mission for Accelerating the world's transition to AI + Human Intelligence.,
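The CI/CD work this posting describes comes down to one core behaviour: stages run in a fixed order and a failure stops the rollout before later stages execute. A minimal, tool-agnostic sketch of that behaviour (stage names and callables are invented stand-ins, not a real pipeline configuration):

```python
# Fail-fast stage sequencing, the essential behaviour of a CI/CD pipeline:
# each stage runs only if every earlier stage succeeded.

def run_pipeline(stages):
    """stages: list of (name, callable) pairs; each callable returns
    True on success. Returns (succeeded, names_of_completed_stages)."""
    completed = []
    for name, step in stages:
        if not step():
            return False, completed  # fail fast: later stages never run
        completed.append(name)
    return True, completed
```

In a real pipeline the callables would shell out to a test runner or `terraform plan`/`apply`; here they are placeholders so the sequencing logic stands alone.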

Posted 4 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

karnataka

On-site

As a key member of the Cloud and Productivity Engineering Organisation at London Stock Exchange Group, you will be responsible for owning and delivering modern application solutions using containerization. Your role will be pivotal in driving innovation to meet business changes, enhance security measures, and align with the digital strategy. You will demonstrate leadership by defining and implementing the Container strategy, standards, processes, methodologies, and architecture, and will collaborate closely with various teams, including Security, Engineering, and Identity, to develop the solutions best suited to each project's requirements.

Key Responsibilities:
- Drive the acceleration, adoption, and migration of applications to the public cloud by utilizing containerization as the core technology.
- Analyze, design, and implement Container infrastructure solutions in alignment with LSEG standards and procedures.
- Design and implement infrastructure processes such as service requests and capacity management for container platforms.
- Monitor resource utilization rates, identify potential bottlenecks, and implement improvement points to enhance efficiency and savings.
- Support knowledge management through documentation creation, maintenance, and improvement of solution design documents, knowledge articles, Wikis, and other artifacts. Manage the lifecycle of all Container platforms.
- Develop long-term technical design and architecture for LSEG services, creating roadmaps for container platforms and peripherals.
- Collaborate with the Group CISO and IT security teams to enhance security controls.
- Define container strategy in collaboration with the container product team, establishing standards, blueprints, processes, and patterns.
- Establish consistent architecture across all Digital platforms in collaboration with the Engineering community to meet LSEG's future technology needs.
- Build relationships with cloud platform customers and engage with senior stakeholders up to C level.
- Act as an Agile "Product Owner" for the container product, ensuring feedback and learning are incorporated effectively.

Candidate Profile / Key Skills:
- Demonstrated technical expertise in infrastructure technologies.
- Experience in SDLC, Continuous Integration & Delivery, Application Security, Quality Assurance, Istio, Serverless, Kubernetes, Agile, Lean, Product Development, DevSecOps, and Continuous Change, plus software engineering exposure to high-performance computing, big data analytics, and machine learning.
- Proficiency in multiple programming languages such as C, C++, C#, Java, Rust, Go, and Python.
- Strong background working in a senior technology role within a public cloud environment, ideally with AWS or Azure.
- Ability to drive technological and cultural change towards rapid technology adoption and absorption.
- Team player with a track record of delivering successful business outcomes.
- Excellent planning and communication skills, capable of leading conversations with development and product teams.
- Thrives in a fast-paced environment, with strong influencing and negotiation skills.
- Experience in team building, coaching, and motivating global teams.
- Exposure to modern-day programming languages, PaaS/SaaS/IaaS, and best practices in public cloud.
- Proficiency in operating systems, network infrastructures, RDBMS, infrastructure-as-code software, and continuous integration/continuous deployment pipelines.
- Deep knowledge of Azure, AWS, and GCP services.

Join London Stock Exchange Group, a trusted expert in global financial markets, and play a vital role in driving financial stability and sustainable growth through innovative technology solutions. Be part of a diverse and collaborative culture that values individuality and encourages new ideas while committing to sustainability. Together, we aim to support sustainable economic growth and the just transition to net zero, creating inclusive economic opportunities for all.
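The capacity-management responsibility above ("monitor resource utilization rates, identify potential bottlenecks") can be made concrete with a small sketch. The threshold, node names, and readings below are illustrative assumptions, not real platform data:

```python
# Flag nodes whose sustained utilization suggests a capacity bottleneck.
# In practice the readings would come from a metrics backend; here they
# are passed in directly so the detection logic stands alone.

def find_bottlenecks(samples, threshold=0.80):
    """samples maps node name -> list of utilization readings (0.0-1.0).
    A node is a bottleneck candidate if its average reading exceeds
    `threshold`. Returns {node: average} for flagged nodes."""
    flagged = {}
    for node, readings in samples.items():
        avg = sum(readings) / len(readings)
        if avg > threshold:
            flagged[node] = avg
    return flagged
```

A real capacity-management process would also look at trend and peak windows, not just the mean; this shows only the simplest threshold check.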

Posted 4 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

punjab

On-site

We are looking for a skilled MLOps Engineer with 2-3 years of experience to join our team. As an MLOps Engineer, you will collaborate with data scientists, software engineers, and IT teams to ensure smooth deployment, scaling, and monitoring of machine learning models.

Responsibilities:
- Design, develop, and maintain automated pipelines for continuous integration and deployment (CI/CD) of machine learning models.
- Manage model versioning, deployment, and monitoring in production environments, and optimize the performance and scalability of machine learning models post-deployment.
- Collaborate with data science teams to improve model reproducibility, experiment tracking, and data workflows.
- Implement monitoring, alerting, and logging solutions to ensure model performance and detect anomalies in production.
- Manage and scale the infrastructure required for model training and inference, whether on-premise or in the cloud.
- Work closely with DevOps teams to integrate MLOps practices seamlessly into existing development workflows.
- Implement security and compliance practices for AI/ML pipelines, including data governance.
- Troubleshoot issues in production environments and ensure high availability of models.

Qualifications:
- Education: Bachelor's degree in Computer Science, Engineering, or a related field.
- Experience: 2-3 years of hands-on experience in MLOps, DevOps, or related fields.
- Experience with machine learning lifecycle management tools such as MLflow, Kubeflow, or TFX.
- Strong knowledge of cloud platforms such as AWS, Google Cloud, or Azure (experience in setting up AI/ML services is a plus).
- Proficiency in scripting and automation (Python, Bash, etc.).
- Experience with containerization (Docker) and orchestration tools (Kubernetes).
- Familiarity with CI/CD tools like Jenkins, CircleCI, or GitLab CI for deploying machine learning models.
- Knowledge of version control systems (e.g., Git) and infrastructure-as-code (e.g., Terraform, CloudFormation).
- Understanding of monitoring and logging frameworks (e.g., Prometheus, Grafana, ELK stack).
- Familiarity with data engineering tools (e.g., Apache Airflow, Kafka) is a plus.
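One production-monitoring task mentioned above — detecting anomalies in deployed models — often starts with a simple input-drift check: compare a live feature window against the training baseline. A minimal sketch (the tolerance and values are illustrative assumptions; real systems use richer statistics such as PSI or KS tests):

```python
# Simplest drift signal: has the mean of a live feature window moved
# too far from the training baseline mean?

def mean_shift_alert(baseline, live, tolerance=0.25):
    """Return True if the live window's mean drifts from the baseline
    mean by more than `tolerance`, expressed as a fraction of the
    baseline mean."""
    base_mean = sum(baseline) / len(baseline)
    live_mean = sum(live) / len(live)
    shift = abs(live_mean - base_mean) / abs(base_mean)
    return shift > tolerance
```

An alerting pipeline would run this per feature on a schedule and page or retrain when it fires.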

Posted 1 month ago

Apply

8.0 - 12.0 years

0 Lacs

hyderabad, telangana

On-site

You will be joining Salesforce, the Customer Company, known for inspiring the future of business by combining AI, data, and CRM technologies. As part of the Marketing AI/ML Algorithms and Applications team, you will play a crucial role in enhancing Salesforce's marketing initiatives by implementing cutting-edge machine learning solutions. Your work will directly impact the effectiveness of marketing efforts, contributing to Salesforce's growth and innovation in the CRM and Agentic enterprise space.

In the position of Lead / Staff Machine Learning Engineer, you will be responsible for developing and deploying ML model pipelines that drive marketing performance and deliver customer value. Working closely with cross-functional teams, you will lead the design, implementation, and operations of end-to-end ML solutions at scale. Your role will involve establishing best practices, mentoring junior engineers, and ensuring the team remains at the forefront of ML innovation.

Key Responsibilities:
- Define and drive the technical ML strategy, emphasizing robust model architectures and MLOps practices
- Lead end-to-end ML pipeline development, focusing on automated retraining workflows and model optimization
- Implement infrastructure-as-code, CI/CD pipelines, and MLOps automation for model monitoring and drift detection
- Own the MLOps lifecycle, including model governance, testing standards, and incident response for production ML systems
- Establish engineering standards for model deployment, testing, version control, and code quality
- Design and implement monitoring solutions for model performance, data quality, and system health
- Collaborate with cross-functional teams to deliver scalable ML solutions with measurable impact
- Provide technical leadership in ML engineering best practices and mentor junior engineers in MLOps principles

Position Requirements:
- 8+ years of experience in building and deploying ML model pipelines with a focus on marketing
- Expertise in AWS services, particularly SageMaker, and in MLflow for ML experiment tracking and lifecycle management
- Proficiency in containerization, workflow orchestration, Python programming, ML frameworks, and software engineering best practices
- Experience with MLOps practices, feature engineering, feature store implementations, and big data technologies
- Track record of leading ML initiatives with measurable marketing impact and strong collaboration skills

Join us at Salesforce to drive transformative business impact and shape the future of customer engagement through innovative AI solutions.
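The "model governance" part of the MLOps lifecycle above usually means each model version moves through staged promotion before serving traffic. A toy in-memory sketch of that idea (the stage names mirror common registries like MLflow's, but this API is invented for illustration, not a real registry client):

```python
# Tiny model registry: versions start in "staging"; promoting one to
# "production" archives whichever version was serving before.

class ModelRegistry:
    def __init__(self):
        self._versions = {}  # version -> stage

    def register(self, version):
        self._versions[version] = "staging"

    def promote(self, version):
        if self._versions.get(version) != "staging":
            raise ValueError("only staging versions can be promoted")
        for v, stage in self._versions.items():
            if stage == "production":
                self._versions[v] = "archived"  # retire the old champion
        self._versions[version] = "production"

    def stage_of(self, version):
        return self._versions[version]
```

The single-production invariant is what gives automated retraining a safe rollback point: the archived version is always one promotion behind.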

Posted 1 month ago

Apply

3.0 - 7.0 years

0 Lacs

punjab

On-site

You have an exciting opportunity to join as a DevSecOps engineer in Sydney. The role requires 3+ years of extensive Python proficiency and 3+ years of Java experience, along with extensive exposure to technologies such as JavaScript, Jenkins, CodePipeline, CodeBuild, and the AWS ecosystem, including the AWS Well-Architected Framework, Trusted Advisor, GuardDuty, SCP, SSM, IAM, and WAF.

It is essential for you to have a deep understanding of automation, quality engineering, architectural methodologies, principles, and solution design. Hands-on experience with Infrastructure-as-Code tools like CloudFormation and CDK is preferred for automating deployments in AWS. Familiarity with operational observability (including log aggregation and application performance monitoring), deploying auto-scaling and load-balanced / highly available applications, and managing certificates (client-server, mutual TLS, etc.) is crucial for this role.

Your responsibilities will include improving the automation of security controls, working closely with the consumer showback team on defining processes and system requirements, and designing and implementing updates to the showback platform. You will collaborate with STO/account owners to uplift the security posture of consumer accounts, work with the Onboarding team to ensure security standards and policies are correctly set up, and implement enterprise minimum security requirements from the Cloud Security LRP, including data masking, encryption monitoring, perimeter protections, ingress/egress uplift, and integration of SailPoint for SSO management.
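"Improving the automation of security controls" in an AWS-style environment often means linting policy documents before they are deployed. A minimal sketch of one such check — flagging IAM-style Allow statements with wildcard actions. The policy document below is a hand-written example, not real account data:

```python
# Lint an IAM-style policy dict: surface Allow statements whose Action
# grants a wildcard ("*" or "service:*"), a common over-permission smell.

def find_wildcard_statements(policy):
    """Return indexes of Allow statements whose Action includes a wildcard."""
    flagged = []
    for i, stmt in enumerate(policy.get("Statement", [])):
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]  # Action may be a string or a list
        if stmt.get("Effect") == "Allow" and any(
            a == "*" or a.endswith(":*") for a in actions
        ):
            flagged.append(i)
    return flagged
```

In a CI/CD pipeline this kind of check would run on every policy change and fail the build on findings; real tools (e.g. policy linters) apply many more rules.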

Posted 1 month ago

Apply

15.0 - 19.0 years

0 Lacs

haryana

On-site

The Vice President of DevOps & SRE is a senior leadership role responsible for driving platform reliability, secure operations, and DevOps excellence across the enterprise. The position involves integrating site reliability engineering practices with scalable DevOps automation and ensuring a robust cybersecurity posture. As the VP, you will lead high-performing teams, define technology strategy, manage infrastructure, and safeguard systems and data to support business growth and digital innovation.

Your key responsibilities will include:
- Leading enterprise-wide DevOps adoption and continuous delivery transformation.
- Implementing and optimizing CI/CD pipelines, infrastructure-as-code (IaC), and cloud-native architectures.
- Championing automation in deployment, monitoring, and infrastructure provisioning.
- Working with containerization (Kubernetes, Docker), service mesh, and serverless environments.
- Fostering collaboration between development, operations, and QA for rapid, reliable releases.

In addition, you will be responsible for:
- Establishing and leading the SRE function to ensure system reliability, scalability, and performance.
- Defining and monitoring SLAs, SLOs, and SLIs for critical applications and services.
- Driving incident management, root cause analysis, and a postmortem culture.
- Developing and deploying observability strategies utilizing tools like Prometheus, Grafana, and Zabbix, or enterprise tools such as New Relic, Dynatrace, and Splunk.

Furthermore, the role will involve:
- Building and mentoring cross-functional teams across DevOps and SRE.
- Partnering with engineering, product, and business leaders to align technical initiatives with organizational goals.
- Developing and managing departmental budgets, tools, and vendor relationships.
- Reporting on KPIs, operational health, security posture, and risk to the executive leadership team.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 15+ years of experience in IT/engineering, with at least 5+ years in leadership roles.
- Proven experience in implementing DevOps, SRE, and security practices at scale.
- Hands-on expertise with AWS, Azure, or GCP; CI/CD tools; and SRE observability platforms.
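The SLA/SLO/SLI bookkeeping mentioned above reduces to simple arithmetic: an SLO target implies an error budget, and each failed request consumes part of it. A worked sketch with illustrative numbers:

```python
# Error-budget arithmetic behind SLO monitoring: a 99.9% availability
# target over 1M requests allows 1,000 failures (the error budget).

def error_budget(slo_target, total_requests, failed_requests):
    """Return (observed_availability, fraction_of_budget_consumed)."""
    availability = 1 - failed_requests / total_requests
    allowed_failures = (1 - slo_target) * total_requests
    consumed = (failed_requests / allowed_failures
                if allowed_failures else float("inf"))
    return availability, consumed
```

With 500 failures out of 1,000,000 requests against a 99.9% target, availability is 99.95% and half the error budget is consumed; SRE teams typically freeze risky releases as `consumed` approaches 1.0.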

Posted 1 month ago

Apply

7.0 - 11.0 years

0 Lacs

chennai, tamil nadu

On-site

WPP is a creative transformation company that utilizes the power of creativity to create better futures for its people, planet, clients, and communities. Joining WPP means becoming part of a global network of over 100,000 talented individuals dedicated to delivering exceptional work for clients worldwide. With operations in more than 100 countries and headquarters in New York, London, and Singapore, WPP is a prominent player in marketing services, boasting deep AI, data, and technology capabilities, as well as unparalleled creative talent. A significant portion of the Fortune Global 500 companies are among our clients.

Our people are fundamental to our success. We are committed to cultivating a culture of creativity, inclusivity, and continuous learning, attracting and nurturing the brightest talents, and offering exciting career prospects that facilitate personal growth.

At WPP, technology is central to our operations, and WPP IT is on a mission to empower everyone to collaborate, create, and thrive. The IT division of WPP is currently undergoing a substantial transformation to modernize work processes, transition to cloud and micro-service-based architectures, promote automation, digitalize colleague and client experiences, and extract insights from WPP's extensive data resources.

WPP Media, a leading media investment company globally, is responsible for managing over $63 billion in annual media investments through agencies such as Mindshare, MediaCom, Wavemaker, Essence, m/SIX, Xaxis, and Choreograph. The WPP Media IT team within WPP IT serves as the technology solutions partner for the agencies under WPP Media, overseeing end-to-end change delivery, managing the technology life cycle, and driving innovation.

We are currently seeking a proficient Data Operations Lead to lead our newly established Data Integration & Operations team in Chennai. In this role, you will be responsible for defining operational strategies and overseeing day-to-day delivery. The team, part of the global Data & Measure function, focuses on ensuring the efficient, reliable, and consistent operation of our data products across various platforms and markets.

Your responsibilities will include:
- Taking technical ownership and leading a team responsible for data integration, ingestion, orchestration, and platform operations.
- Establishing and managing automated data pipelines using tools such as Azure Data Factory, GCP Dataflow/Composer, or equivalent.
- Implementing platform-wide monitoring, logging, and alerting mechanisms.
- Managing cloud environments, including security, access control, and deployment automation.
- Developing standard operating procedures, runbooks, onboarding guides, and automation patterns.
- Ensuring scalable and repeatable practices across all supported data products.
- Defining deployment frameworks and templates for integration.
- Setting up SLAs, incident workflows, and escalation models.
- Proactively identifying and addressing operational risks in cloud-based data platforms.
- Collaborating with development and product teams to facilitate the smooth transition from development to operations.
- Mentoring and leading a growing team in Chennai, shaping the team's operating model, priorities, and capabilities, and serving as a subject matter expert and escalation point for technical operations.

Required Skills:
- 7+ years of experience in data operations, platform engineering, or data engineering.
- Proficiency in Azure and/or GCP environments.
- Strong understanding of cloud-native data pipelines, architecture, and security.
- Skills in orchestration (e.g., ADF, Dataflow, Airflow), scripting (Python, Bash), and SQL.
- Familiarity with DevOps practices, CI/CD, and infrastructure-as-code.
- Demonstrated experience in managing production data platforms and support.
- Ability to design operational frameworks from scratch.
- Experience in leading technical teams, including task prioritization, mentoring, and delivery oversight.

Preferred Skills:
- Experience with tools like dbt, Azure Synapse, BigQuery, Databricks, etc.
- Exposure to BI environments (e.g., Power BI, Looker).
- Familiarity with global support models and tiered ticket handling.
- Experience in documentation, enablement, and internal tooling.

If you embody openness, optimism, and a drive for excellence, WPP offers:
- A culture that promotes inclusivity, collaboration, and the exchange of diverse ideas.
- Opportunities to utilize creativity, technology, and talent to create brighter futures for people, clients, and communities.
- Challenging and stimulating work that encourages creative problem-solving.
- A hybrid work model that fosters creativity, collaboration, and connection, with teams in the office around four days a week. Accommodations and flexibility can be discussed with the hiring team during the interview process.

WPP is committed to being an equal opportunity employer, considering all applicants without discrimination. We strive to create a culture of respect, inclusivity, and equal opportunities for career advancement for all individuals. For more information on how we process your information, please refer to our Privacy Notice.
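The pipeline orchestration responsibility above (ADF, Dataflow/Composer, Airflow) rests on one core idea: tasks declare dependencies and the scheduler runs them in topological order. A minimal sketch of that ordering logic; the task names and dependency graph are invented for illustration:

```python
# Resolve a run order for pipeline tasks from their declared dependencies,
# the way an orchestrator like Airflow schedules a DAG.

def topo_order(deps):
    """deps maps task -> set of tasks it depends on. Returns a valid
    run order; raises ValueError on a dependency cycle."""
    order, done = [], set()

    def visit(task, seen):
        if task in done:
            return
        if task in seen:
            raise ValueError(f"cycle involving {task}")
        seen.add(task)
        for d in deps.get(task, ()):
            visit(d, seen)  # dependencies run first
        done.add(task)
        order.append(task)

    for t in deps:
        visit(t, set())
    return order
```

Real orchestrators add scheduling, retries, and parallel execution of independent branches on top of exactly this ordering.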

Posted 1 month ago

Apply

7.0 - 11.0 years

0 Lacs

noida, uttar pradesh

On-site

The contextualization platform enables large-scale data integration and entity matching across heterogeneous sources. The current engineering focus is to modernize the architecture for better scalability and orchestration compatibility, refactor core services, and lay the foundation for future AI-based enhancements. This is a pivotal development initiative with clear roadmap milestones and direct alignment with a multi-year digital transformation strategy.

We are looking for a skilled and motivated Senior Backend Engineer with strong expertise in Kotlin to join a newly established scrum team responsible for enhancing a core data contextualization platform. This service plays a central role in associating and matching data from diverse sources - time series, equipment, documents, 3D objects - into a unified data model. As a Senior Backend Engineer, you will lead backend development efforts to modernize and scale the platform by integrating with an updated data architecture and orchestration framework. This is a high-impact role contributing to a long-term roadmap focused on scalable, maintainable, and secure industrial software.

Key Responsibilities:
- Design, develop, and maintain scalable, API-driven backend services using Kotlin.
- Align backend systems with modern data modeling and orchestration standards.
- Collaborate with engineering, product, and design teams to ensure seamless integration across the broader data platform.
- Implement and refine RESTful APIs following established design guidelines.
- Participate in architecture planning, technical discovery, and integration design for improved platform compatibility and maintainability.
- Conduct load testing, improve unit test coverage, and contribute to reliability engineering efforts.
- Drive software development best practices including code reviews, documentation, and CI/CD process adherence.
- Ensure compliance with multi-cloud design standards and use of infrastructure-as-code tooling (Kubernetes, Terraform).

Qualifications:
- 7+ years of backend development experience, with a strong focus on Kotlin.
- Proven ability to design and maintain robust, API-centric microservices.
- Hands-on experience with Kubernetes-based deployments, cloud-agnostic infrastructure, and modern CI/CD workflows.
- Solid knowledge of PostgreSQL, Elasticsearch, and object storage systems.
- Strong understanding of distributed systems, data modeling, and software scalability principles.
- Excellent communication skills and ability to work in a cross-functional, English-speaking environment.
- Bachelor's or Master's degree in Computer Science or a related discipline.

Bonus Qualifications:
- Experience with Python for auxiliary services, data processing, or SDK usage.
- Knowledge of data contextualization or entity resolution techniques.
- Familiarity with 3D data models, industrial data structures, or hierarchical asset relationships.
- Exposure to LLM-based matching or AI-enhanced data processing (not required but a plus).
- Experience with Terraform, Prometheus, and scalable backend performance testing.

About the role and key responsibilities:
- Develop Data Fusion - a robust, state-of-the-art SaaS for industrial data.
- Solve concrete industrial data problems by designing and implementing delightful APIs and robust services on top of Data Fusion.
- Work with application teams to ensure a delightful user experience that helps the user solve complex real-world problems.
- Work with distributed open-source software such as Kubernetes, Kafka, and Spark to build scalable and performant solutions.
- Help shape the culture and methodology of a rapidly growing company.

GlobalLogic offers a culture of caring, learning and development opportunities, interesting and meaningful work, balance and flexibility, and a high-trust organization where integrity is key. Join us to be part of a trusted digital engineering partner creating innovative digital products and experiences.
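The entity-matching problem this posting describes — associating records such as time series and equipment into one model — is often bootstrapped by matching on a normalized key. A simplified sketch (the normalization rule and sample tag names are assumptions for illustration; production contextualization uses fuzzier scoring):

```python
import re

# Associate records from two sources when their normalized names agree,
# e.g. a time series "23-VG-9101" and an equipment tag "23 VG 9101".

def normalize(name):
    """Lowercase and strip non-alphanumerics so formatting differences
    (dashes, spaces, case) do not block a match."""
    return re.sub(r"[^a-z0-9]", "", name.lower())

def match_records(timeseries_names, equipment_names):
    """Return (timeseries, equipment) pairs with matching normalized names."""
    index = {normalize(e): e for e in equipment_names}
    return [(t, index[normalize(t)]) for t in timeseries_names
            if normalize(t) in index]
```

Exact normalized matching gives high precision; the LLM-based matching mentioned in the bonus qualifications would extend recall to pairs no normalization rule can align.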

Posted 1 month ago

Apply

6.0 - 10.0 years

0 Lacs

haryana

On-site

As a Backend Engineer (.NET), you will work within the platform engineering team to develop scalable, high-performance, multi-tenant solutions. Your role will involve enhancing code quality, making critical architecture decisions, and enabling multi-tenant capabilities in a data-driven environment.

With over 6 years of hands-on experience in backend development, you will focus on performance, scalability, security, and maintainability. Your strong proficiency in C# and .NET Core will be crucial for developing RESTful APIs and microservices, and you will drive code quality by ensuring adherence to best practices, design patterns, and SOLID principles. Experience with cloud platforms such as Google Cloud Platform and Azure is essential for implementing cloud-native and multi-tenant best practices, as is hands-on experience with containerization using Docker and orchestration with Kubernetes and Helm.

Your strong focus on non-functional requirements (NFRs) will include tenant isolation, security boundaries, performance optimization, scalability across tenants, and comprehensive observability for tenant-specific insights. Implementing unit testing, integration testing, and automated testing frameworks will be part of your responsibilities, along with CI/CD automation, GitOps workflows, and Infrastructure-as-Code using tools like Terraform, Helm, or similar.

Qualifications:
- Strong proficiency in C#, .NET Core, and RESTful API development.
- Experience with asynchronous programming, concurrency control, and event-driven architecture.
- Hands-on experience with Docker, Kubernetes (K8s), performance tuning, security best practices, and observability.
- Exposure to multi-tenant architectures with a strong understanding of NFRs, including tenant isolation strategies, performance profiling, shared vs. isolated resource models, and scalable, resilient design patterns.
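One concrete form of the tenant-isolation NFR above is a per-tenant quota, so a noisy tenant cannot exhaust capacity shared with others. A minimal sketch (quota size and tenant ids are made up; a production version would use sliding windows and distributed counters):

```python
# Per-tenant admission control: each tenant draws against its own quota,
# so exhausting one tenant's budget never affects another tenant.

class TenantQuota:
    def __init__(self, per_tenant_limit):
        self.limit = per_tenant_limit
        self.used = {}  # tenant_id -> requests admitted so far

    def try_acquire(self, tenant_id):
        """Admit the request only while this tenant is under its limit."""
        count = self.used.get(tenant_id, 0)
        if count >= self.limit:
            return False  # this tenant is throttled; others are untouched
        self.used[tenant_id] = count + 1
        return True
```

The same bucketing-by-tenant pattern underlies tenant-scoped connection pools and rate limiters in shared-resource deployment models.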

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

pune, maharashtra

On-site

You will be the Platform Services Lead for Data Platform and Standards at our company, responsible for managing end-to-end service delivery and platform operations for core data governance technologies, including Data Quality, Catalogue, Privacy, Lineage, and Retention services. Your role will involve defining and implementing a service resilience strategy covering monitoring, alerting, capacity management, disaster recovery, and failover design.

As the Platform Services Lead, you will establish and enforce SLAs, KPIs, and operational performance metrics across the platform estate. Collaboration with Engineering, IT Service Owners (ITSO), and Cybersecurity teams will be essential to embed observability, DevSecOps, and compliance practices within the platform. Driving the adoption of self-healing mechanisms, automated remediation processes, and infrastructure-as-code practices will be part of your responsibilities to enhance uptime and reduce operational overhead.

Additionally, you will lead incident and problem management processes, including root cause analysis, stakeholder communications, and corrective actions. Ensuring platform change management and maintaining environment stability in alignment with regulatory and audit requirements will also fall under your purview. This role requires a seasoned professional with a strong background in platform services and data governance technologies, and a proactive approach to driving operational excellence.

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

thiruvananthapuram, kerala

On-site

You are a highly skilled Azure PySpark Solution Architect responsible for designing and implementing scalable data solutions on Microsoft Azure. Your expertise in Azure services, PySpark, and solution architecture will ensure efficient data processing and analytics workflows for enterprise applications.

Key responsibilities:
- Design and implement end-to-end data solutions using Azure Data Services and PySpark.
- Develop high-performance ETL pipelines leveraging Azure Databricks, Azure Data Factory, and Synapse Analytics.
- Architect scalable, secure, and cost-efficient cloud solutions that align with business objectives.
- Collaborate with data engineers, data scientists, and business stakeholders to define technical requirements and solution roadmaps.
- Optimize big data processing and ensure data governance, security, and compliance standards are met.
- Provide technical leadership and best practices for Azure and PySpark-based data solutions.
- Conduct performance tuning and troubleshooting for PySpark workloads.
- Ensure seamless integration with third-party tools, APIs, and enterprise systems.

Must-have skills:
- Expertise in Azure Cloud Services (Azure Databricks, Data Factory, Synapse, Azure Storage, ADLS).
- Strong hands-on experience with PySpark for data processing and transformation.
- Deep understanding of solution architecture, including microservices, event-driven architectures, and cloud-native patterns.
- Experience with SQL, NoSQL databases, and data modeling in Azure.
- Knowledge of CI/CD pipelines, DevOps, and Infrastructure-as-Code (Terraform, ARM templates, or Bicep).
- Strong problem-solving skills and the ability to optimize large-scale data workloads.
- Excellent communication and stakeholder management skills.

Good-to-have skills:
- Familiarity with streaming technologies such as Kafka, Event Hubs, and Spark Streaming.
- Knowledge of containerization technologies (Docker, Kubernetes).
- Experience with machine learning frameworks and their integration with big data pipelines.
- Familiarity with Agile/Scrum methodologies and related tools (JIRA, Confluence).
- Understanding of data security frameworks and compliance regulations (GDPR, HIPAA, etc.).

Primary skills: Azure, PySpark, Solution Architecture.
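The ETL transformation work described above typically boils down to filter-then-aggregate steps over tabular data. A plain-Python sketch of that shape so it runs anywhere; in a PySpark pipeline the same logic would be a DataFrame `filter` followed by `groupBy().sum()` (column names and rows here are illustrative):

```python
# Filter-then-aggregate, the canonical ETL transformation step:
# drop incomplete records, then total the amounts per region.

def transform(rows):
    """rows: list of dicts with 'region' and 'amount' keys.
    Returns {region: total_amount}, skipping rows with a null amount."""
    totals = {}
    for row in rows:
        if row.get("amount") is None:
            continue  # drop incomplete records, as a DataFrame filter would
        totals[row["region"]] = totals.get(row["region"], 0) + row["amount"]
    return totals
```

Expressing the step as a pure function over rows also makes it easy to unit-test before wiring it into Databricks or Data Factory.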

Posted 1 month ago

Apply

8.0 - 14.0 years

0 - 0 Lacs

noida, uttar pradesh

On-site

As a Tech Lead at VIR Consultants, located in Noida, India, you will play a crucial role in leading our in-house IT team. Your primary responsibility will involve gathering internal technical requirements and overseeing the entire project lifecycle to ensure the robustness, scalability, and efficiency of our platforms. To excel in this leadership position, you must possess extensive expertise in full stack development, particularly in Node.js, Vue/React, and Dart/Flutter, as well as a deep understanding of DevOps and continuous deployment practices.

Your key responsibilities will include leading, mentoring, and managing the IT and development team, setting clear objectives, and fostering a collaborative work environment. You will also analyze requirements, design cutting-edge solutions, oversee software development from planning to maintenance, and build scalable applications using a range of technologies.

In addition to technical duties, you will implement and maintain robust DevOps practices, establish CI/CD pipelines, evaluate and integrate emerging technologies, and act as the primary technical contact for internal stakeholders. Your role will also involve translating business requirements into technical specifications and communicating project progress and challenges to the senior management team.

To qualify for this role, you should hold a Bachelor's degree in Computer Science or a related field, along with 8 to 14 years of experience in full stack development and IT leadership. Proficiency in Node.js, Vue.js, React, and Dart/Flutter is essential, as well as a solid understanding of DevOps practices, microservices architecture, RESTful APIs, and containerization technologies. Beyond technical competencies, strong soft skills such as leadership, communication, problem-solving, and the ability to manage multiple projects concurrently are crucial for success in this position.

In return, we offer a competitive salary package ranging from 14 to 26 Lakh per annum, a collaborative work environment focused on growth and innovation, and the opportunity to lead technological advancements within the organization. If you are looking for a challenging yet rewarding opportunity to drive innovation and work on industry-leading projects, VIR Consultants could be the perfect place for you. Join us in our vibrant office environment in Noida and be a part of our dynamic team as we strive towards excellence in IT and development.

Posted 1 month ago

Apply

7.0 - 11.0 years

0 Lacs

pune, maharashtra

On-site

Perforce is a community of collaborative experts, problem solvers, and possibility seekers who believe work should be both challenging and fun. We are proud to inspire creativity, foster belonging, support collaboration, and encourage wellness. At Perforce, you'll work with and learn from some of the best and brightest in business. Before you know it, you'll be in the middle of a rewarding career at a company headed in one direction: upward. With a global footprint spanning more than 80 countries and including over 75% of the Fortune 100, Perforce Software, Inc. is trusted by the world's leading brands to deliver solutions for the toughest challenges. The best run DevOps teams in the world choose Perforce.

Position Summary: As a Sr. SecOps Engineer at Perforce, you will design and optimize security operations for Perforce's SaaS product portfolio. You will drive the design and implementation of automated tools and technologies to ensure the security, reliability, and high availability of production and CI/CD environments, applications, and infrastructure, and lead efforts to establish SecOps best practices across the organization, ensuring adherence to the highest security standards in all environments.

Responsibilities:
- Develop and implement vulnerability management practices using tools like Qualys, Lacework, Prisma, and Mend (SAST and SCA).
- Manage operational cadence across vulnerability management, SIEM, and CSPM.
- Lead Security Information and Event Management (SIEM) coverage from code repositories to operating systems, VMs, databases, networks, and applications.
- Automate security processes and workflows across CI/CD pipelines, leveraging infrastructure-as-code (IaC) and security automation tools to improve efficiency.
- Drive the implementation of security hardening best practices across infrastructure layers.
- Implement and maintain secret scanning tools across CI/CD pipelines to detect and mitigate the exposure of sensitive data.
- Advocate and implement security best practices in agile SDLC methodologies and DevSecOps workflows.
- Collaborate closely with Developer and DevOps teams to embed security at every stage of development and deployment.
- Lead and maintain security sprint boards, monitor tasks, and manage risks via Jira and other collaboration tools.
- Schedule and run monthly SecOps cadence meetings to report on the organization's security posture, discuss ongoing projects, and address security incidents and mitigations.
- Prepare and present comprehensive documentation and reports on security incidents, vulnerability assessments, and audit findings to stakeholders.
- Assist with incident response planning, including triage, investigation, and remediation of security incidents.
- Stay updated on the latest security threats, tools, and methodologies to continuously improve security frameworks and policies.

Requirements:
- Bachelor's or master's degree in Computer Science, Information Security, Engineering, or a related field.
- 7+ years of experience in cybersecurity, security operations, or a similar role in a SaaS/cloud environment.
- Strong hands-on experience with security automation tools, vulnerability management tools, and infrastructure-as-code practices.
- Proficiency in automating vulnerability scanning, patch management, and compliance monitoring across hybrid cloud environments.
- Strong understanding of Cloud Security Posture Management (CSPM) tools and practices.
- Experience with SIEM tools, secret management, and scanning in CI/CD environments.
- Familiarity with hardening techniques across various platforms and with driving security sprint boards.
- Excellent presentation, communication, and documentation skills.
- Knowledge of infrastructure-as-code frameworks and experience automating security configurations.
- Strong problem-solving skills and the ability to work under pressure in a fast-paced environment.
- Continuous desire to learn and stay updated on the latest cybersecurity practices and threats.

Join our team at Perforce! If you are passionate about technology, want to work with talented individuals globally, and make an impact, apply today!
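The secret-scanning responsibility above can be illustrated with a minimal sketch. The two regex rules here are simplified examples; production scanners (including the commercial tools named in the posting) ship far larger, curated rule sets and entropy checks:

```python
import re

# Illustrative detection rules, not a production rule set.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{20,}['\"]"
    ),
}


def scan_text(text: str) -> list[tuple[int, str]]:
    """Return (line_number, rule_name) pairs for suspected secrets."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings
```

Wired into a CI/CD pipeline, a scanner like this runs on each commit diff and fails the build when findings are non-empty.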

Posted 1 month ago

Apply

8.0 - 12.0 years

0 Lacs

karnataka

On-site

As a Principal Engineer - Site Reliability Engineering (SRE) within the Digital Business team at Sony LIV, you will play a crucial role in ensuring the availability, scalability, and performance of our OTT platform. With a global user base, we are dedicated to providing seamless, high-quality streaming experiences to our audience.

Your primary responsibility will be to design, build, and maintain a robust and scalable infrastructure that supports the platform. Leveraging your extensive SRE experience and developer mindset, you will lead initiatives to enhance system reliability and operational efficiency, take full ownership of system operations, and address critical incidents, even outside regular business hours. You will also collaborate closely with cross-functional teams to align goals and enhance operational excellence.

Key responsibilities include:
- Managing full ownership of application and infrastructure reliability.
- Developing tools and automation to improve reliability.
- Responding promptly to critical system issues.
- Designing and managing infrastructure solutions.
- Driving observability best practices.
- Continuously improving system reliability and performance.

To excel in this role, you should have at least 8 years of experience, a deep understanding of observability, and the ability to lead reliability initiatives across systems and teams. Strong technical proficiency in containers (Docker, Kubernetes), networking concepts, CDNs, infrastructure-as-code tools, cloud platforms, observability solutions, scripting/programming languages, and incident handling is essential. We are looking for a candidate with a passion for system reliability, scalability, and performance optimization, along with excellent communication, collaboration, and leadership skills. Willingness to participate in a 24x7 on-call rotation and support critical systems during off-hours will be crucial for success in this role.

Join us at Sony Pictures Networks to be part of a dynamic team that is shaping the future of entertainment in India. With leading entertainment channels and a promising streaming platform like Sony LIV, we are committed to creating a diverse and inclusive workplace where you can thrive and make a meaningful impact.
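The reliability tooling described above often begins with health probing and retry logic. A minimal sketch of a probe-with-exponential-backoff helper; the `probe` callable is a placeholder for a real check (e.g. an HTTP health endpoint or a TCP connect), not any platform's actual implementation:

```python
import time


def check_with_retries(probe, attempts: int = 3, base_delay: float = 0.5) -> bool:
    """Call `probe` until it returns True, doubling the delay between tries.

    Returns False if the probe never succeeds within `attempts` calls.
    """
    for attempt in range(attempts):
        if probe():
            return True
        if attempt < attempts - 1:
            # Exponential backoff: base_delay, 2*base_delay, 4*base_delay, ...
            time.sleep(base_delay * (2 ** attempt))
    return False
```

In practice a helper like this sits behind an alerting rule: repeated failures page the on-call engineer rather than retrying forever.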

Posted 1 month ago

Apply

10.0 - 15.0 years

0 Lacs

maharashtra

On-site

As a Lead Software Engineer at NEC Software Solutions (India) Private Limited, you will be part of a dynamic team working on innovative applications that use AI to enhance efficiency within the Public Safety sector. With 10-15 years of experience, your primary expertise in Python and React will be crucial in developing new functionality for an AI-enabled product roadmap. You will collaborate closely with the product owner and Solution Architect to create robust, market-ready software products that meet the highest engineering and user experience standards.

Your responsibilities will include:
- Writing reusable, testable, and efficient Python code.
- Working with document and image processing libraries, API gateways, backend CRUD operations, and cloud infrastructure, preferably AWS.
- Frontend development in TypeScript and React, designing clean user interfaces.
- Backend programming for web applications and delivering software features from concept to production.
- Participating in discussions with the Product Owner and Solution Architect, ensuring customer-centric development, and overseeing the software development lifecycle.
- Implementing secure, scalable, and resilient solutions for NECSWS products, and supporting customers and production systems to ensure seamless operations.

Personal attributes such as problem-solving skills, inquisitiveness, autonomy, motivation, integrity, and big-picture awareness will be vital to the team's success. You will also have the opportunity to develop new skills, lead technical discussions, and engage in self-training and external training sessions to enhance your capabilities.

The ideal candidate should hold a graduate degree, possess outstanding leadership qualities, and have a strong background in IT, preferably with experience in the public sector or emergency services. If you thrive in a challenging environment, enjoy working with cutting-edge technologies, and are passionate about delivering high-quality software solutions, we invite you to join our team at NEC Software Solutions (India) Private Limited.

Posted 1 month ago

Apply