
2943 Datadog Jobs - Page 17

JobPe aggregates listings for easy access, but you apply directly on the employer's job portal.

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About Zeotap
Founded in Berlin in 2014, Zeotap started with a mission to provide high-quality data to marketers. As we evolved, we recognized a greater challenge: helping brands create personalized, multi-channel experiences in a world that demands strict data privacy and compliance. This drive led to the launch of Zeotap's Customer Data Platform (CDP) in 2020 - a powerful, AI-native SaaS suite built on Google Cloud that empowers brands to unlock and activate customer data securely. Today, Zeotap is trusted by some of the world's most innovative brands, including Virgin Media O2, Amazon, and Audi, to create engaging, data-driven customer experiences that drive better business outcomes across marketing, sales, and service. With a unique background in high-quality data solutions, Zeotap is a leader in the European CDP market, empowering enterprises with a secure, privacy-first solution to harness the full potential of their customer data.

About the Role:
As a Senior Solutions Engineer - Support, you will play a key role in providing technical expertise and driving customer success for our enterprise clients using Zeotap's SaaS platform. In this senior role, you'll not only resolve complex issues and ensure seamless integrations but will also take on additional responsibilities such as managing escalations, leading client-facing calls, mentoring the support team, and driving process improvements. You will collaborate across teams - Engineering, Product, Sales, and Customer Success - ensuring customers have an exceptional experience and derive maximum value from Zeotap's solutions. This role is ideal for someone with a strong technical background, an ownership mindset, and a passion for delivering excellent customer service, with a proven ability to mentor and drive team efficiency in a high-growth, fast-paced environment.
Responsibilities:

Client-Facing Expertise:
- Act as the primary technical advisor and product expert for enterprise customers, ensuring they receive high-quality support and expert guidance.
- Lead and manage client-facing calls, providing timely and effective resolutions to complex technical issues.
- Build and maintain strong relationships with customers to understand their business needs and deliver tailored support, ensuring maximum value from Zeotap's platform.
- Actively engage with clients during escalations and ensure customer satisfaction through effective issue resolution.

Escalation Management:
- Own and manage escalated issues from customers, ensuring that issues are resolved promptly within SLAs.
- Provide detailed context and collaborate closely with internal teams to resolve complex cases, including technical configurations and root cause analysis.
- Establish best practices for escalation management, ensuring all team members follow a consistent and effective process.

Team Mentoring & Leadership:
- Mentor junior members of the support team, providing guidance on technical problem-solving, customer interactions, and escalation management.
- Foster a collaborative team environment, encouraging knowledge sharing, continuous improvement, and high performance.
- Take ownership of process improvements within the support function, developing and implementing new procedures, tools, and training to optimize team effectiveness.

Process Building & Continuous Improvement:
- Lead the development of internal knowledge bases, documentation, and troubleshooting guides to empower customers and internal teams.
- Drive improvements in internal support processes, ensuring efficient and consistent customer experiences while reducing response and resolution times.
- Proactively identify trends and root causes of recurring issues and collaborate with engineering and product teams to address them at the source.
Reporting & Metrics:
- Take ownership of support metrics, including ticket volume, resolution times, customer satisfaction (CSAT), and technical issue trends.
- Prepare and deliver regular performance reports to leadership, providing actionable insights and recommendations for improvements.

Collaboration with Internal Teams:
- Work closely with Engineering, Product, Sales, and Customer Success teams to align technical solutions with customer needs.
- Collaborate on cross-functional projects and initiatives to continuously enhance the customer experience.

Adhere to Security and Compliance Standards:
- Follow Zeotap's security and privacy policies, ensuring that customer data is handled in compliance with internal guidelines and industry standards.

Requirements:
- 4+ years of experience in a technical support, solutions engineering, or customer success engineering role within a SaaS or enterprise software environment.
- Proven experience managing escalations and providing high-quality support for large-scale enterprise customers.
- Demonstrated success in mentoring and leading teams, fostering collaboration, and driving process improvements.
- SaaS & Cloud Application Support: Expertise with SaaS applications and cloud-based infrastructure (particularly Google Cloud Platform, but any cloud experience is valuable).
- API & Integrations: Deep experience with RESTful APIs, troubleshooting integrations, and providing solutions for complex customer issues.
- SQL & Querying: Strong knowledge of SQL and the ability to write and optimize queries for troubleshooting data-related issues.
- Scripting & Automation: Experience with scripting (Python, Bash, JavaScript, or Java) to automate workflows and resolve technical issues.
- Monitoring & Troubleshooting: Familiarity with cloud monitoring tools (e.g., Stackdriver, BigQuery, Datadog, Kibana, Grafana, Splunk).
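The scripting and API-troubleshooting skills this listing asks for often come down to small automation helpers. As a hedged illustration (the function names are invented for this sketch and are not part of any Zeotap tooling), retrying a transient integration failure with capped exponential backoff might look like:

```python
def backoff_schedule(attempts: int, base: float = 0.5, cap: float = 30.0) -> list[float]:
    """Capped exponential delays (in seconds), one per retry attempt."""
    return [min(cap, base * (2 ** i)) for i in range(attempts)]

def call_with_retries(fetch, attempts: int = 5):
    """Call `fetch` until it succeeds or the retry budget is exhausted."""
    last_error = None
    for _delay in backoff_schedule(attempts):
        try:
            return fetch()
        except ConnectionError as exc:  # retry only transient faults
            last_error = exc
            # a real script would time.sleep(_delay) before retrying
    raise last_error
```

The schedule keeps early retries fast while capping the worst-case wait, which is the usual compromise when troubleshooting flaky third-party endpoints.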
- Exceptional verbal and written communication skills, with the ability to effectively communicate complex technical concepts to both technical and non-technical stakeholders.
- Strong relationship-building skills, with the ability to manage and maintain customer relationships while aligning internal teams to solve customer challenges.
- Excellent analytical and troubleshooting skills, with the ability to quickly identify root causes and resolve complex technical issues.
- Ability to manage multiple priorities simultaneously while maintaining a high standard of customer satisfaction.
- A passion for delivering customer satisfaction and high-quality support.
- Proven ability to handle customer issues under pressure, maintaining a positive customer experience in challenging situations.
- Strong sense of ownership, accountability, and initiative, with a willingness to take responsibility for the success of the support function and customer experience.
- Willingness to work across different time zones to support customers, particularly during EU working hours.

Nice-to-Have:
- Cloud Certifications: Certifications such as Google Cloud Professional Cloud Architect or similar are beneficial.
- Technical Knowledge: Experience with Kubernetes, Docker, or Terraform for managing cloud-based infrastructure.
- Industry Knowledge: Familiarity with ad-tech, mar-tech, or similar industries, especially in areas related to privacy, data security, and cloud-based analytics.

Measures of Success:
- Customer Satisfaction (CSAT): High CSAT scores based on customer feedback, demonstrating your ability to solve problems effectively.
- Escalation Management: High success rate in managing and resolving escalated issues within SLAs.
- Team Growth & Development: Successful mentoring of junior team members and leadership in process improvements.
- SLA Adherence: Consistent adherence to SLA targets for response and resolution times.
- Team Performance: High team performance based on metrics such as ticket resolution time, first-call resolution rate, and customer satisfaction.
- Knowledge Base Contribution: Regular contributions to the internal knowledge base, improving team efficiency and customer self-service.
- Proactive Monitoring Initiatives: Active involvement in identifying and addressing potential issues before they escalate, with demonstrated success in setting up and managing proactive monitoring systems to prevent disruptions and optimize system performance.

What do we offer:
- Competitive compensation and attractive perks
- Health insurance coverage
- Flexible working support, guidance, and training provided by a highly experienced team
- Fast-paced work environment
- Work with very driven entrepreneurs and a network of global senior investors across telco, data, advertising, and technology

Zeotap welcomes all - we are an equal employment opportunity and affirmative action employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender perception or identity, national origin, age, marital status, protected veteran status, or disability status.

Interested in joining us? We look forward to hearing from you!

Posted 2 weeks ago

Apply

4.0 - 5.0 years

0 Lacs

Jaipur, Rajasthan, India

On-site

Job Title: DevOps Engineer
Location: Jaipur
Experience: 4-5 Years
Employment Type: Full-time

Job Summary
We are seeking a highly skilled and proactive DevOps Engineer with 4-5 years of hands-on experience to join our growing engineering team. The ideal candidate will have a deep understanding of CI/CD pipelines, cloud platforms (AWS/Azure/GCP), infrastructure as code, containerization, monitoring, and automation. You will play a key role in ensuring the scalability, reliability, and security of our systems and applications.

Key Responsibilities
- Design, implement, and maintain scalable CI/CD pipelines.
- Manage cloud infrastructure using Infrastructure-as-Code tools (Terraform, CloudFormation, etc.).
- Automate deployment processes and ensure zero-downtime releases.
- Implement monitoring and alerting systems using tools like Prometheus, Grafana, ELK, or Datadog.
- Collaborate with development, QA, and security teams to optimize delivery workflows.
- Manage and maintain container orchestration platforms (e.g., Kubernetes, Docker Swarm).
- Ensure system availability, security, and performance through proactive monitoring and troubleshooting.
- Conduct system architecture reviews and capacity planning.
- Mentor junior team members and contribute to best-practices documentation.

Nice To Have
- Certifications in AWS, Azure, or Kubernetes (CKA/CKAD).
- Experience with serverless architectures and cloud-native services.
- Exposure to Agile/Scrum methodologies and DevSecOps practices.

Why Join Us
- Work on cutting-edge cloud infrastructure projects.
- Collaborative and innovative engineering culture.
- Opportunities for growth and upskilling.
- Flexible work arrangements and competitive compensation.

Skills: CloudFormation, GCP, AWS, containerization, Docker Swarm, Prometheus, DevOps, infrastructure as code, Azure, CI/CD pipelines, ELK, Terraform, infrastructure, Grafana, automation, Datadog, Kubernetes, monitoring
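The zero-downtime releases mentioned in this listing usually mean replacing instances in bounded batches so capacity never drops below a safe floor. A minimal, stack-agnostic sketch of that batching logic (names invented for illustration, not tied to any specific employer's stack):

```python
def rolling_batches(instances: list[str], max_unavailable: int) -> list[list[str]]:
    """Split instances into deploy batches of at most `max_unavailable`,
    so no more than that many instances are out of service at once."""
    if max_unavailable < 1:
        raise ValueError("at least one instance must be replaceable per batch")
    return [instances[i:i + max_unavailable]
            for i in range(0, len(instances), max_unavailable)]
```

Kubernetes rolling updates express the same idea declaratively through the `maxUnavailable` field of a Deployment's update strategy.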

Posted 2 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

Hyderabad, Telangana

On-site

You will be part of McDonald's Corporation, a global company with a presence in over 100 countries, offering corporate opportunities in Hyderabad. As a Data Product Engineering SRE, your primary responsibility will be to develop and maintain business-facing data products and analytics platforms, ensuring their reliability and performance. By combining product engineering skills with data expertise and site reliability practices, you will help deliver exceptional user experiences and drive product adoption.

Your key responsibilities will include building and maintaining business-facing data products such as dashboards, analytics APIs, and reporting platforms. You will implement data product features to enhance the user experience, create and maintain data APIs with proper authentication and performance optimization, and build automated testing frameworks for data product functionality. Monitoring and maintaining SLA compliance for data product availability and performance, establishing alerting systems for issues affecting the user experience, and responding to and resolving data product incidents will also be part of your role.

Additionally, you will conduct root cause analysis, implement preventive measures for data product failures, operate and maintain data pipelines, implement data validation and quality checks for customer-facing data products, and monitor data product performance from an end-user perspective. This will involve implementing user analytics and tracking, participating in user feedback collection, and translating insights into technical improvements. Collaborating with Data Engineering teams and Frontend Engineers will be essential to ensure reliable data flow into product features and to optimize database queries for interactive analytics workloads.
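The data validation and quality checks described for this role are typically small, explicit rules applied before data reaches a customer-facing product. A hedged sketch (the field names are invented for the example, not an actual schema):

```python
def validate_rows(rows: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split rows into (valid, rejected) using simple completeness
    and range rules before they reach a dashboard or API."""
    required = ("store_id", "date", "net_sales")
    valid, rejected = [], []
    for row in rows:
        complete = all(row.get(key) is not None for key in required)
        in_range = complete and row["net_sales"] >= 0
        (valid if complete and in_range else rejected).append(row)
    return valid, rejected
```

Rejected rows would normally be routed to a quarantine table and surfaced through alerting rather than silently dropped.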
To qualify for this role, you should have at least 4 years of experience in product engineering, data engineering, or SRE roles, including a minimum of 2 years building business-facing applications or data products. Proficiency in Python, JavaScript/TypeScript, and SQL, experience with data visualization tools and frameworks, and hands-on experience with analytics databases are required. Knowledge of AWS or GCP data and compute services, application monitoring tools, the product development lifecycle, and user-centered design principles is also essential. Experience with product analytics tools, optimizing application performance, troubleshooting user-reported issues, on-call responsibilities, incident response procedures, and setting up monitoring and alerting systems will be beneficial.

This is a full-time position based in Hyderabad, India, with a hybrid work mode.
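The SLA-compliance monitoring this role calls for reduces to simple error-budget arithmetic. A minimal sketch, assuming a 99.9% availability target measured in minutes (both the target and the granularity are illustrative assumptions, not from the listing):

```python
def availability(total_minutes: int, downtime_minutes: float) -> float:
    """Observed availability as a fraction of the measurement period."""
    return (total_minutes - downtime_minutes) / total_minutes

def error_budget_left(total_minutes: int, downtime_minutes: float,
                      slo: float = 0.999) -> float:
    """Minutes of downtime still allowed before the SLO is breached."""
    budget = total_minutes * (1 - slo)
    return budget - downtime_minutes
```

For a 30-day month (43,200 minutes), a 99.9% target allows about 43.2 minutes of downtime; alerting on the remaining budget is what turns an SLA from a report into an operational signal.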

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

You are an experienced End User Computing Specialist with over 5 years of dedicated experience in End User Computing (EUC) and a deep passion for ensuring the peak performance, security, and efficiency of enterprise device environments across both Windows and macOS platforms. Your role is critical in managing and optimizing organizational devices, ensuring robust security, and enabling proactive IT operations.

As an End User Computing Specialist, you will oversee the comprehensive management of enterprise-level end-user devices, including deployment, configuration, and lifecycle management for both Windows and macOS environments. You will design and implement robust strategies for application patch management, conduct regular fleet health checkups, focus on performance optimization, drive stringent security compliance, develop insightful dashboards for reporting, and use data for proactive IT operations and troubleshooting support.

You must have a minimum of 5 years of hands-on experience in End User Computing (EUC) and enterprise device management. Proven expertise in managing large-scale Windows and macOS environments, extensive experience with application patch management strategies and tools, and demonstrable experience with EUC management tools such as Nexthink, Axonius, JumpCloud, Automox, CyberArk EPM, and Datadog are essential. A strong understanding of device performance optimization, security best practices, and data-driven dashboards, along with strong problem-solving skills and excellent communication and collaboration abilities in a remote team environment, is required for this role.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

As a Site Reliability Engineer III at JPMorgan Chase within Corporate Technology, you will play a crucial role in driving innovation and modernizing complex, mission-critical systems. Your primary responsibility will be to solve intricate business problems by providing simple and effective solutions through code and cloud infrastructure. You will configure, maintain, monitor, and optimize applications and their associated infrastructure while continuously improving existing solutions. Your expertise in end-to-end operations, availability, reliability, and scalability will make you a valuable asset to the team.

You will guide and support others in designing appropriate solutions and collaborate with software engineers to implement deployment strategies using automated continuous integration and continuous delivery pipelines. Your role will also involve designing, developing, testing, and implementing availability, reliability, and scalability solutions for applications. Additionally, you will be responsible for implementing infrastructure, configuration, and network as code for the applications and platforms under your purview.

Collaboration with technical experts, stakeholders, and team members will be essential in resolving complex issues. You will use service level indicators and objectives to address issues proactively, before they impact customers. Furthermore, you will support the adoption of site reliability engineering best practices within your team to ensure operational excellence.

To qualify for this role, you should have formal training or certification in software engineering concepts along with at least 3 years of applied experience. Proficiency in site reliability principles and experience implementing site reliability within applications or platforms are required. You should be adept in at least one programming language such as Python, Java/Spring Boot, or .NET.
Knowledge of software applications and technical processes in disciplines such as cloud, AI, or Android is also essential. Experience with observability, continuous integration and continuous delivery tools, container technologies, network troubleshooting, and collaboration within large teams is highly valued. Your proactive approach to problem-solving, eagerness to learn new technologies, and ability to identify innovative solutions will be crucial in this role. Preferred qualifications include experience in the banking or financial domain.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

As a Senior Auditor, Technology at LegalZoom, you will be an impactful member of the internal audit team, helping to achieve the department's mission and objectives. Your role will involve evaluating technology risks in a dynamic environment, assessing the design and effectiveness of internal controls over financial reporting, and ensuring compliance with operational and regulatory requirements. You will document audit procedures and results following departmental standards and execute within agreed timelines. Additionally, you will provide advisory support to stakeholders on internal control considerations, collaborate with external auditors when necessary, and focus on continuous improvement of the audit department. A commitment to integrity and ethics, coupled with a passion for the internal audit profession and LegalZoom's mission, is essential.

Ideally, you hold a Bachelor's degree in computer science, information systems, or accounting, along with 3+ years of experience in IT internal audit and Sarbanes-Oxley compliance, particularly in the technology sector. Previous experience in a Big 4 accounting firm and internal audit at a public company would be advantageous. A professional certification such as CISA, CIA, CRISC, or CISSP is preferred. Strong communication skills, self-management abilities, and the capacity to work on multiple projects across different locations are crucial for this role. Familiarity with technologies such as Oracle Cloud, AWS, Salesforce, and Azure is beneficial, along with reliable internet service for remote work.

Join LegalZoom in making a difference and contributing to the future of accessible legal advice for all. LegalZoom is committed to diversity, equality, and inclusion, offering equal employment opportunities to all employees and applicants without discrimination based on any protected characteristic.

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

NTT DATA is looking to hire an Azure Cloud Engineer to join its team in Bangalore, Karnataka, India. As an Azure Cloud Engineer, you will work in the banking domain as an Azure consultant. You should hold a Bachelor's or Master's degree in Computer Science or Data Science, along with 5 to 8 years of experience in software development and data structures/algorithms.

The ideal candidate will have 5 to 7 years of experience with programming languages such as Python or Java and with database languages such as SQL and NoSQL. Additionally, you should have 5 years of experience developing large-scale platforms, distributed systems, or networks, and be familiar with compute technologies and storage architecture. A strong understanding of microservices architecture is essential for this role.

Experience building AKS applications on Azure, as well as a deep understanding of Kubernetes for the availability and scalability of applications in Azure Kubernetes Service, is required. You should also have experience building and deploying applications on Azure using third-party tools such as Docker, Kubernetes, and Terraform. The role will involve working with AKS clusters, VNETs, NSGs, Azure storage technologies, Azure container registries, and more.

A good understanding of building Redis, ElasticSearch, and MongoDB applications is preferred, along with experience with RabbitMQ. An end-to-end understanding of ELK, Azure Monitor, Datadog, Splunk, and the logging stack is beneficial. Candidates should have experience with development tools and CI/CD pipelines such as GitLab CI/CD, Artifactory, CloudBees, Jenkins, Helm, and Terraform. An understanding of IAM roles on Azure and integration/configuration experience is required, preferably with experience working on Data Robot setup or similar applications on Cloud/Azure. Experience in functional, integration, and security testing, as well as performance validation, is also necessary for this role.
NTT DATA is a trusted global innovator of business and technology services, serving 75% of the Fortune Global 100. As a Global Top Employer, NTT DATA has diverse experts in more than 50 countries and a robust partner ecosystem. Its services include business and technology consulting, data and artificial intelligence, industry solutions, and the development, implementation, and management of applications, infrastructure, and connectivity. NTT DATA is a leading provider of digital and AI infrastructure globally, committed to helping organizations and society move confidently into the digital future.

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

About Us:
LSEG (London Stock Exchange Group) is more than a diversified global financial markets infrastructure and data business. We are dedicated, open-access partners committed to excellence in delivering the services our customers expect from us. With extensive experience, deep knowledge, and a worldwide presence across financial markets, we enable businesses and economies around the world to fund innovation, manage risk, and create jobs. It's how we've contributed to supporting the financial stability and growth of communities and economies globally for more than 300 years.

The Analytics group is part of London Stock Exchange Group's Data & Analytics Technology division. Analytics has established a very strong reputation for providing prudent and reliable analytic solutions to financial industries. With a strong presence in the North American financial markets and rapid growth in other markets, the group is now looking to increase its market share globally by building new capabilities such as Analytics as a Service - a one-stop-shop solution for all analytics needs through an API-first and cloud-first approach.

Position Summary:
The Analytics DevOps group is looking for a highly motivated and skilled DevOps Engineer to join our dynamic team to help build, deploy, and maintain our cloud and on-prem infrastructure and applications. You will play a key role in driving automation, monitoring, and continuous improvement in our development, modernization, and operational processes.

Key Responsibilities & Accountabilities:
- Infrastructure as Code (IaC): Develop and manage infrastructure using tools like Terraform, Helm charts, CloudFormation, or Ansible to ensure consistent and scalable environments.
- CI/CD Pipeline Development: Build, optimize, and maintain continuous integration and continuous deployment (CI/CD) pipelines using Jenkins, GitLab, GitHub, or similar tools.
- Cloud and On-Prem Infrastructure Management: Work with cloud providers (Azure, AWS, GCP) and on-prem infrastructure (VMware, Linux servers) to deploy, manage, and monitor infrastructure and services.
- Automation: Automate repetitive tasks, improve operational efficiency, and reduce human intervention in building and deploying applications and services.
- Monitoring & Logging: Work with the SRE team to set up monitoring and alerting systems using tools like Prometheus, Grafana, Datadog, or others to ensure high availability and performance of applications and infrastructure.
- Collaboration: Collaborate with architects, operations, and developers to ensure seamless integration between development, testing, and production environments.
- Security Best Practices: Implement and enforce security protocols and procedures, including access controls, encryption, and vulnerability scanning and remediation. Provide support for issue resolution related to application deployment and/or DevOps-related activities.

Essential Skills, Qualifications & Experience:
- Bachelor's or Master's degree in computer science, engineering, or a related field (or equivalent 3-5 years of practical experience).
- 5+ years of experience practicing DevOps.
- Proven experience as a DevOps Engineer or Software Engineer in an agile, cloud-based environment.
- Strong understanding of Linux/Unix system management.
- Hands-on experience with cloud platforms (AWS, Azure, GCP); Azure preferred.
- Proficiency in infrastructure automation tools such as Terraform, Helm charts, Ansible, etc.
- Strong experience with CI/CD tools - GitLab, Jenkins.
- Experience with version control systems - Git, GitLab, GitHub.
- Experience with containerization (Kubernetes, Docker) and orchestration.
- Experience with modern monitoring and logging tools such as Grafana, Prometheus, Datadog.
- Working experience with scripting languages such as Bash, Python, or Groovy.
- Strong problem-solving and troubleshooting skills.
- Excellent communication skills and the ability to work in team environments.
- Experience with serverless architecture and microservices is a plus.
- Strong knowledge of networking concepts (DNS, load balancers, etc.) and security practices (firewalls, encryption).
- Experience working in an Agile/Scrum environment is a plus.
- Certifications in DevOps or cloud technologies (e.g., Azure DevOps Solutions, AWS Certified DevOps) are a plus.

LSEG is a leading global financial markets infrastructure and data provider. Our purpose is driving financial stability, empowering economies, and enabling customers to create sustainable growth. Our purpose is the foundation on which our culture is built. Our values of Integrity, Partnership, Excellence, and Change underpin our purpose and set the standard for everything we do, every day. They go to the heart of who we are and guide our decision-making and everyday actions.

Working with us means that you will be part of a dynamic organization of 25,000 people across 65 countries. We value your individuality and enable you to bring your true self to work so you can help enrich our diverse workforce. You will be part of a collaborative and creative culture where we encourage new ideas and are committed to sustainability across our global business. You will experience the critical role we play in helping to re-engineer the financial ecosystem to support and drive sustainable economic growth. Together, we aim to achieve this growth by accelerating the just transition to net zero, enabling growth of the green economy, and creating inclusive economic opportunity.

LSEG offers a range of tailored benefits and support, including healthcare, retirement planning, paid volunteering days, and wellbeing initiatives.
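The monitoring and alerting work described in this listing often hinges on the "for N consecutive windows" pattern found in tools like Prometheus alert rules (its `for:` clause) or Datadog monitors. A deliberately simplified sketch of that evaluation logic (an illustration, not either tool's actual implementation):

```python
def should_alert(samples: list[float], threshold: float, required: int) -> bool:
    """Fire only when the last `required` samples all exceed `threshold`,
    suppressing one-off spikes that would otherwise page someone."""
    if len(samples) < required:
        return False
    return all(value > threshold for value in samples[-required:])
```

Requiring several consecutive breaches trades a little detection latency for far fewer false pages, which is why most alerting stacks expose it as a first-class setting.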
Please take a moment to read this privacy notice carefully, as it describes what personal information London Stock Exchange Group (LSEG) may hold about you, what it is used for, how it is obtained, your rights, and how to contact us as a data subject. If you are submitting as a Recruitment Agency Partner, it is your responsibility to ensure that candidates applying to LSEG are aware of this privacy notice.

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

As a Senior DevOps Engineer on our Life Sciences & Healthcare DevOps team, you will have the opportunity to work on cutting-edge life sciences and healthcare products in a DevOps environment. If you are passionate about coding in Python or another scripting language, experienced with Linux, and have worked in a cloud environment, we are excited to hear from you! Our team specializes in container orchestration, Terraform, Datadog, Jenkins, Databricks, and various AWS services; if you have expertise in these areas, we would love to connect with you.

You should have at least 7 years of professional software development experience and 5+ years as a DevOps Engineer or in a similar role, with proficiency in CI/CD and configuration management tools such as Jenkins, Maven, Gradle, Spinnaker, Docker, Ansible, CloudFormation, and Terraform. Additionally, you should have at least 3 years of AWS experience managing resources in services such as S3, ECS, RDS, EC2, IAM, OpenSearch Service, Route53, VPC, CloudFront, Glue, and Lambda. A minimum of 5 years of experience in Bash/Python scripting and broad knowledge of operating system administration, programming languages, cloud platform deployment, and networking protocols are required. You will be on call for critical production issues and should have a good understanding of the SDLC, patching, releases, and basic systems administration activities. AWS Solution Architect certifications and Python programming experience would be beneficial.
In this role, your responsibilities will include designing, developing, and maintaining the product's cloud infrastructure architecture; collaborating with different teams to provide end-to-end infrastructure setup; designing and deploying secure infrastructure as code; staying current with industry best practices, trends, and standards; owning the performance, availability, security, and reliability of the products running across public cloud and multiple regions worldwide; and documenting solutions and maintaining technical specifications. The products you will work on rely on container orchestration, Jenkins, various AWS services, Databricks, Datadog, Terraform, and more, and you will support the Development team in building them.

You will be part of the Life Sciences & Healthcare Content DevOps team, focusing on DevOps operations for production infrastructure related to Life Sciences & Healthcare Content products. The team consists of five members, reports to the DevOps Manager, and provides support for various application products internal to Clarivate. The team also handles the change process on the production environment, incident management, monitoring, and customer service requests.

The shift timing for this role is 12 PM to 9 PM, and you must provide on-call support during non-business hours based on team bandwidth. At Clarivate, we are dedicated to offering equal employment opportunities and comply with applicable laws and regulations governing non-discrimination in all locations.

Posted 2 weeks ago

Apply

12.0 - 16.0 years

0 Lacs

Pune, Maharashtra

On-site

As the Director of Technology for Supply-chain, Logistics, Omni, and Corporate Systems at Williams-Sonoma's Technology Center in Pune, India, you will play a crucial role in leading the engineering teams that develop high-value, high-quality features with industry-leading engineering delivery. Your responsibilities will encompass attracting, recruiting, and retaining top engineering talent, and influencing architectural discussions, strategic planning, and decision-making processes to ensure the incremental creation of impactful, compelling, and scalable solutions. Your success in this role will be driven by your agility, results orientation, strategic thinking, and innovative approach to delivering software products at scale. You will be responsible for overseeing engineering project delivery, defining and executing an engineering strategy aligned with the company's business goals, and ensuring high-quality deliverables through robust processes for code reviews, testing, and deployment. Collaboration will be a key aspect of your role, as you will actively engage with Product Management, Business Stakeholders, and other Engineering Teams to define project requirements and deliver customer-centric solutions. You will also focus on talent acquisition and development, building a strong and diverse engineering team, implementing an onboarding program, coaching team members for technical expertise and leadership abilities, and maintaining a strong talent pipeline. Your role will involve performance management, technology leadership, continuous education and domain expertise, resource planning and execution, organizational improvement, system understanding and technical oversight, innovation and transformation, as well as additional responsibilities as required. Your expertise in managing projects, technical leadership, analytical skills, business relationships, communication excellence, and execution and results orientation will be critical for success in this role.
To qualify for this position, you should have extensive industry experience in developing and delivering Supply Chain and Logistics solutions, leadership and team management experience, project lifecycle management skills, project and technical leadership capabilities, analytical and decision-making skills, business relationships and conflict management expertise, communication excellence, interpersonal effectiveness, execution and results orientation, vendor and stakeholder management proficiency, as well as self-motivation and independence. Additionally, you should hold a Bachelor's degree in Computer Science, Engineering, or a related field, and possess core technical criteria such as expertise in Java frameworks, RESTful API design, microservices architecture, database management, cloud platforms, CI/CD pipelines, containerization, logging and monitoring tools, error tracking mechanisms, event-driven architectures, Git workflows, and Agile tools. Join Williams-Sonoma, Inc., a premier specialty retailer with a rich history dating back to 1956, and be part of a dynamic team dedicated to delivering high-quality products for the kitchen and home. Take on the challenge of driving innovation and transforming the organization into a leading technology entity with cutting-edge solutions that enhance customer experiences and maintain a competitive edge in the global market.

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

karnataka

On-site

As a Cloud Engineer at AVP level in Bangalore, India, you will be responsible for designing, implementing, and managing cloud infrastructure and services on Google Cloud Platform (GCP). Your key responsibilities will include designing, deploying, and managing scalable, secure, and cost-effective cloud environments on GCP, developing Infrastructure as Code (IaC) using tools like Terraform, ensuring security best practices, IAM policies, and compliance with organizational and regulatory standards, configuring and managing VPCs, subnets, firewalls, VPNs, and interconnects for secure cloud networking, setting up CI/CD pipelines for automated deployments, implementing monitoring and alerting using tools like Stackdriver, optimizing cloud spending, designing disaster recovery and backup strategies, deploying and managing GCP databases, and managing containerized applications using GKE and Cloud Run. You will be part of the Platform Engineering Team, which is responsible for building and maintaining foundational infrastructure, tooling, and automation to enable efficient, secure, and scalable software development and deployment. The team focuses on creating a self-service platform for developers and operational teams, ensuring reliability, security, and compliance while improving developer productivity. To excel in this role, you should have strong experience with GCP services, proficiency in scripting and Infrastructure as Code, knowledge of DevOps practices and CI/CD tools, understanding of security, IAM, networking, and compliance in cloud environments, experience with monitoring tools, strong problem-solving skills, and Google Cloud certifications would be a plus. You will receive training, development, coaching, and support to help you excel in your career, along with a culture of continuous learning and a range of flexible benefits tailored to suit your needs. 
The company strives for a positive, fair, and inclusive work environment where employees are empowered to excel together every day. For further information about the company and its teams, please visit the company website: https://www.db.com/company/company.htm. The Deutsche Bank Group welcomes applications from all individuals and promotes a culture of shared successes and collaboration.
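The Cloud Engineer role above centers on Infrastructure as Code with Terraform on GCP. Terraform also accepts JSON-syntax configuration (`.tf.json`), so a small Python sketch can show what "generating IaC" looks like; the bucket name, labels, and module layout here are illustrative assumptions, not a prescribed setup.

```python
import json

def gcs_bucket_tf(name: str, location: str = "EU") -> dict:
    """Return a Terraform JSON-syntax (.tf.json) fragment declaring one
    google_storage_bucket resource. Name and labels are hypothetical."""
    return {
        "resource": {
            "google_storage_bucket": {
                name: {
                    "name": name,
                    "location": location,
                    "uniform_bucket_level_access": True,
                    "labels": {"managed-by": "terraform"},
                }
            }
        }
    }

# Render the fragment; a real workflow would write this to e.g. buckets.tf.json
doc = json.dumps(gcs_bucket_tf("audit-logs"), indent=2)
print(doc)
```

Templating resources this way (rather than hand-editing HCL) is one common approach when many near-identical resources must stay consistent across environments.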

Posted 2 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

ahmedabad, gujarat

On-site

As a Lead DevOps Engineer at GrowExx, you will collaborate with cross-functional teams to define, design, and implement DevOps infrastructure while adhering to best practices of Infrastructure as Code (IaC). Your primary goal will be to ensure a robust and stable CI/CD process that maximizes efficiency and achieves 100% automation. You will be responsible for analyzing system requirements comprehensively to develop effective Test Automation Strategies for applications. Additionally, your role will involve designing infrastructure using cloud platforms such as AWS, GCP, Azure, or others. You will also manage Code Repositories like GitHub, GitLab, or BitBucket, and automate software quality gateways using SonarQube. In this position, you will design branching and merging strategies, create CI pipelines using tools like Jenkins, CircleCI, or Bitbucket, and establish automated build & deployment processes with rollback mechanisms. Identifying and mitigating infrastructure security and performance risks will be crucial, along with designing Disaster Recovery & Backup policies and Infrastructure/Application Monitoring processes. Your role will also involve formulating DevOps Strategies for projects with a focus on Quality, Performance, and Cost considerations. Conducting cost/benefit analysis for proposed infrastructures, automating software delivery processes for distributed development teams, and promoting software craftsmanship will be key responsibilities. You will be expected to identify new tools and processes, and train teams on their adoption.

Key Skills:
- Hands-on experience with LLM models and evaluation metrics for LLMs.
- Proficiency in managing infrastructure on cloud platforms like AWS, GCP, or Azure.
- Expertise in Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Pulumi.
- Managing code repositories using GitHub, GitLab, or Bitbucket, and implementing effective branching and merging strategies.
- Designing and maintaining robust CI/CD pipelines with tools like Jenkins, CircleCI, or Bitbucket Pipelines.
- Automating software quality checks using SonarQube.
- Understanding of automated build and deployment processes, including rollback mechanisms.
- Knowledge of infrastructure security best practices and risk mitigation.
- Designing disaster recovery and backup strategies.
- Experience with monitoring tools like Prometheus, Grafana, ELK, Datadog, or New Relic.
- Defining DevOps strategies aligned with project goals.
- Conducting cost-benefit analyses for optimal infrastructure solutions.
- Automating software delivery processes for distributed teams.
- Passion for software craftsmanship and evangelizing DevOps best practices.
- Strong leadership, communication, and training skills.

Education and Experience:
- B Tech or B.E./BCA/MCA/M.E degree.
- 8+ years of relevant experience with team-leading experience.
- Experience in Agile methodologies, Scrum & Kanban, project management, planning, risk identification, and mitigation.

Analytical and Personal Skills:
- Strong logical reasoning and analytical skills.
- Effective communication in English (written and verbal).
- Ownership and accountability in work.
- Interest in new technologies and trends.
- Multi-tasking and team management abilities.
- Coaching and mentoring skills.
- Managing multiple stakeholders and resolving conflicts diplomatically.
- Forward-thinking mindset.
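The listing above calls for automated deployment processes with rollback mechanisms. A minimal sketch of that control flow, with version labels and a stubbed health check standing in for a real platform's deploy and probe APIs (all names here are hypothetical):

```python
def deploy_with_rollback(current, candidate, health_check):
    """Promote `candidate` if its post-deploy health check passes;
    otherwise keep `current` (the rollback path). Returns the version
    left active. Illustrative only: a real pipeline would call the
    deployment platform's APIs instead of returning labels."""
    if health_check(candidate):
        return candidate   # promotion succeeds
    return current         # health check failed: roll back

# Stub health check: one hypothetical version is known-bad.
healthy = lambda version: version != "v2-broken"

print(deploy_with_rollback("v1", "v2", healthy))          # v2
print(deploy_with_rollback("v1", "v2-broken", healthy))   # v1
```

The same shape generalizes to blue/green or canary flows: the candidate only becomes the active version once an automated check vouches for it.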

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

pune, maharashtra

On-site

As an experienced L3 Network Engineer/Administrator with 5-8 years of expertise, you will be responsible for designing, implementing, and managing complex network infrastructures. Your role will involve advanced network troubleshooting, optimization, and network security to ensure seamless network connectivity and customer-centric problem-solving. Ideally, you will have experience in a Managed Service Provider environment. Your key responsibilities will include designing, implementing, and maintaining complex network infrastructure. You will be expected to troubleshoot and resolve escalated network issues to ensure optimal performance. Additionally, configuring and managing network hardware such as routers, switches, and firewalls will be a part of your daily tasks. Providing technical leadership and mentoring junior network engineers will also be crucial, along with monitoring and maintaining network security by implementing solutions to mitigate risks effectively. In terms of technical skills, you should be proficient in advanced TCP/IP, DNS, DHCP, VLAN, MPLS, and VPN. Familiarity with high-end routers, switches, firewalls, and load balancers is essential. Knowledge of software such as Cisco IOS, Juniper Junos, Palo Alto PAN-OS, SolarWinds, LogicMonitor, Datadog, and BigPanda is required. Proficiency in protocols like BGP, OSPF, EIGRP, STP, RSTP, and VRRP, and experience with security tools like firewalls (Cisco ASA, Palo Alto, etc.), IDS/IPS, VPNs, and monitoring tools like Nagios, SolarWinds, NetFlow, and PRTG will be beneficial. This is a full-time position, and a Bachelor's degree is preferred for this role. The ideal candidate should have a total of 5 years of work experience. The work location is in person.
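The subnetting and VLAN skills the listing above asks for can be exercised directly with Python's standard `ipaddress` module. The sketch below carves a /24 into four /26 VLAN subnets and looks up which VLAN a host belongs to; the address range and VLAN names are hypothetical.

```python
import ipaddress

# Illustrative subnet plan: split a /24 site range into four /26 VLANs.
# The 10.20.30.0/24 range and the VLAN names are made-up examples.
site = ipaddress.ip_network("10.20.30.0/24")
vlans = dict(zip(["mgmt", "voice", "users", "guest"],
                 site.subnets(new_prefix=26)))

def vlan_of(host: str):
    """Return the name of the VLAN subnet containing `host`, or None."""
    addr = ipaddress.ip_address(host)
    for name, net in vlans.items():
        if addr in net:
            return name
    return None

print(vlans["voice"])           # 10.20.30.64/26
print(vlan_of("10.20.30.70"))   # voice
```

The same module handles supernetting, overlap checks, and host counts, which makes it handy for sanity-checking an addressing plan before touching device configs.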

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

vadodara, gujarat

On-site

You will be responsible for designing, developing, and implementing hybrid cloud environments. You will deploy and automate infrastructure & platform services in Public Clouds (AWS, GCP, and Azure) using Terraform and Ansible. Additionally, you will design and manage continuous deployment using Kubernetes and Jenkins. It will be your duty to design, implement, and execute Backup and Recovery and Business Continuity processes. You will also be tasked with implementing industry-standard security processes for best practices and compliance (SOC2, ISO27001, FedRAMP, HIPAA, etc.), leveraging services in Public Cloud such as AWS GuardDuty, Web Application Firewall, and CloudTrail. Monitoring environments for security vulnerabilities, taking actions to remediate and/or mitigate risks, and monitoring applications and services within the environments will be part of your routine. You will be expected to join the on-call rotation using Datadog, Elasticsearch, and Opsgenie, taking action to resolve issues and implementing strategies to prevent future occurrences. Troubleshooting and root cause analysis for Service Incidents using Jira Service Desk and the alerting and monitoring tools documented above will also fall under your responsibilities. Setting up intelligent application performance alerts in Datadog and Elasticsearch to identify and resolve issues before they impact business services and end-users will be crucial. It is essential to continuously learn about technologies outside your realm of expertise. You will work collaboratively with software engineering to develop and deploy our systems. To succeed in this role, you should have an understanding of how cloud-based web applications work and an interest in measuring, analyzing, and improving distributed systems. Familiarity with web application development using JavaScript, Java, AngularJS, PostgreSQL, or SQL Server database is required.
You must possess 5-7 years of experience with Public Cloud Deployments, both hybrid and pure public cloud. Experience with Docker and Kubernetes in production, automation tools like Terraform or Ansible, networking and security technology for cloud services, continuous deployment tools such as Jenkins or CircleCI, and logging and monitoring tools for SaaS such as ELK, Splunk, Datadog, etc., is essential. Strong communication skills, both written and verbal, are necessary. Being well-organized, able to take direction and work independently, possessing teamworking skills, and holding a BS or MS in Computer Science are all important qualifications for this role.
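The "intelligent application performance alerts" mentioned above usually boil down to flagging values that deviate sharply from recent history, which is the shape of rule anomaly monitors in tools like Datadog apply. A minimal pure-Python sketch of that idea, with made-up latency numbers and an illustrative 3-sigma threshold:

```python
from statistics import mean, stdev

def is_anomalous(history, value, k=3.0):
    """Flag `value` if it sits more than k standard deviations away from
    the recent history. Thresholds and data here are illustrative; real
    monitors add seasonality handling, windows, and alert suppression."""
    if len(history) < 2:
        return False  # not enough data to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return value != mu
    return abs(value - mu) > k * sigma

# Hypothetical request latencies in milliseconds.
latencies_ms = [102, 98, 101, 99, 100, 103, 97, 100]
print(is_anomalous(latencies_ms, 250))  # True: a clear spike
print(is_anomalous(latencies_ms, 104))  # False: within normal noise
```

Tuning `k` trades missed incidents against alert fatigue, which is why such thresholds are usually set per-metric rather than globally.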

Posted 2 weeks ago

Apply

2.0 - 6.0 years

0 - 0 Lacs

rajasthan

On-site

As a Staff Software Engineer at SpoonLabs, you will be responsible for contributing to the architecture of Spoon and Vigloo. You will play a key role in designing and implementing scalable and efficient architecture solutions. Your primary focus will be on XP (eXtreme Programming) practices such as Simple Design, Small Release, TDD, and Pair Programming to ensure high-quality deliverables. You will collaborate with the team to continuously improve the architecture and maintain a sustainable codebase. In this role, you will work closely with the Spoon and Vigloo teams to drive innovation and deliver cutting-edge solutions. Spoon can be accessed at https://www.spooncast.net/kr, while Vigloo can be accessed at https://www.vigloo.com/ko. Key responsibilities include participating in CI/CD processes, leveraging technologies like Spring Boot and Kotlin/Java, and working with AWS, Kubernetes, and Docker. Additionally, you will have the opportunity to explore Reactive Programming and Kotlin Coroutines, along with monitoring tools such as Datadog, Prometheus, and Sentry. You will be involved in the continuous improvement of architecture by identifying and implementing best practices. Collaboration with cross-functional teams is essential to ensure seamless integration and deployment. You will also be responsible for ensuring the scalability and performance of the applications. The ideal candidate should have a deep understanding of XP practices and be proficient in Spring Boot, Kotlin/Java. Experience with AWS, Kubernetes, Docker, and CI/CD DevOps practices is highly desirable. Knowledge of Reactive Programming and Kotlin Coroutines will be an added advantage. If you are passionate about building robust and scalable software architectures and enjoy working in a dynamic environment, we would love to hear from you. Please send your resume to recruit@spoonlabs.com. Join us at SpoonLabs to be part of a forward-thinking team that values innovation, collaboration, and excellence. 
Don't miss the opportunity to participate in industry events like AWS re:Invent, Digital Marketing Summit, and MAU Conference. Enhance your skills and grow your career with us!

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Title: GCP Cloud Operations Engineer
Location: Hyderabad (hybrid), India

About You: The GCP CloudOps Engineer is accountable for continuous, repeatable, secure, and automated deployment, integration, and test solutions utilizing Infrastructure as Code (IaC) and DevSecOps techniques. 8+ years of hands-on experience in infrastructure design, implementation, and delivery. 3+ years of hands-on experience with monitoring tools (Datadog, New Relic, or Splunk). 4+ years of hands-on experience with container orchestration services, including Docker or Kubernetes, GKE. Experience working across time zones and with different cultures. 5+ years of hands-on experience in Cloud technologies; GCP is preferred. Maintain an outstanding level of documentation, including principles, standards, practices, and project plans. Experience building a data warehouse using Databricks is a huge plus. Hands-on experience with IaC patterns and practices and related automation tools such as Terraform, Jenkins, Spinnaker, CircleCI, etc.; built automation and tools using Python, Go, Java, or Ruby. Deep knowledge of CI/CD processes, tools, and platforms like GitHub workflows and Azure DevOps. Proactive collaborator who can work in cross-team initiatives with excellent written and verbal communication skills. Experience automating long-term solutions to problems rather than applying a quick fix. Extensive knowledge of improving platform observability and implementing optimizations to monitoring and alerting tools. Experience measuring and modeling cost and performance metrics of cloud services and establishing a vision backed by data.

Responsibilities: Develop tools and a CI/CD framework to make it easier for teams to build, configure, and deploy applications. Contribute to Cloud strategy discussions and decisions on overall Cloud design and the best approach for implementing Cloud solutions. Follow and develop standards and procedures for all aspects of a Digital Platform in the Cloud. Identify system enhancements and automation opportunities for installing/maintaining digital platforms. Adhere to best practices on Incident, Problem, and Change management. Implement automated procedures to handle issues and alerts proactively. Experience with debugging applications and a deep understanding of deployment architectures.

Pluses: Databricks. Experience with a multi-cloud environment (GCP, AWS, Azure); GCP is the preferred cloud provider. Experience with GitHub and GitHub Actions.

Thanks and Regards,
Sandeep Reddy
Senior Resource Coordinator
Intune Systems Inc.
📞 USA: +1 214-230-2747
📞 India (WhatsApp & Call): +91 98857 57527
🏢 Address: 3620 N Josey Ln, #220C Carrollton TX 75007 USA
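Building a CI/CD framework, as the responsibilities above describe, starts with running pipeline stages in dependency order. Python's standard `graphlib` module does exactly that; the stage names and dependency graph below are hypothetical examples of a linear release pipeline.

```python
from graphlib import TopologicalSorter

# Illustrative pipeline: each stage maps to the set of stages it depends on.
# Stage names are made up for the sketch.
stages = {
    "build": set(),
    "unit-test": {"build"},
    "image-push": {"unit-test"},
    "deploy-staging": {"image-push"},
    "integration-test": {"deploy-staging"},
    "deploy-prod": {"integration-test"},
}

# static_order() yields stages so every dependency runs before its dependents.
order = list(TopologicalSorter(stages).static_order())
print(order)
```

For fan-out pipelines the same sorter also exposes `get_ready()`/`done()`, which lets a framework run independent stages in parallel while still honoring dependencies.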

Posted 2 weeks ago

Apply

7.0 - 12.0 years

7 - 11 Lacs

Mumbai, Bengaluru

Work from Office

Location: PAN India, as per the company's designated LTIM locations. Shift Type: Rotational shifts, including night shifts and weekend availability. Experience: 7+ years.

Job Summary: We are looking for a skilled and adaptable Site Reliability Engineer (SRE) / Observability Engineer to join our dynamic project team. The ideal candidate will play a critical role in ensuring system reliability, scalability, observability, and performance while collaborating closely with development and operations teams. This position requires strong technical expertise, problem-solving abilities, and a commitment to 24/7 operational excellence.

Key Responsibilities

Site Reliability Engineering: Design, build, and maintain scalable and reliable infrastructure. Automate system provisioning and configuration using tools like Terraform, Ansible, Chef, or Puppet. Develop tools and scripts in Python, Go, Java, or Bash for automation and monitoring. Administer and optimize Linux/Unix systems with a strong understanding of TCP/IP, DNS, load balancers, and firewalls. Implement and manage cloud infrastructure across AWS or Kubernetes. Maintain and enhance CI/CD pipelines using tools like Jenkins and ArgoCD. Monitor systems using Prometheus, Grafana, Nagios, or Datadog and respond to incidents efficiently. Conduct postmortems and define SLAs/SLOs for system reliability and performance. Plan for capacity and performance using benchmarking tools and implement autoscaling and failover systems.

Observability Engineering: Instrument services with relevant metrics, logs, and traces using OpenTelemetry, Prometheus, Jaeger, Zipkin, etc. Build and manage observability pipelines using Grafana, ELK Stack, Splunk, Datadog, or Honeycomb. Work with time-series databases (e.g., InfluxDB, Prometheus) and log aggregation platforms. Design actionable alerts and dashboards to improve system observability and reduce alert fatigue. Partner with developers to promote observability best practices and define key performance indicators (KPIs).

Required Skills & Qualifications: Proven experience as an SRE or Observability Engineer in complex production environments. Hands-on expertise in Linux/Unix systems and cloud infrastructure (AWS/Kubernetes). Strong programming and scripting skills in Python, Go, Bash, or Java. Deep understanding of monitoring, logging, and alerting systems. Experience with modern Infrastructure as Code and CI/CD practices. Ability to analyze and troubleshoot production issues in real time. Excellent communication skills to collaborate with cross-functional teams and stakeholders. Flexibility to work in rotational shifts, including night shifts and weekends, as required by project demands. A proactive mindset with a focus on continuous improvement and reliability.
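Defining SLAs/SLOs, as this SRE role requires, usually means translating an availability target into an error budget: the downtime a service may accrue per period. A minimal sketch of that arithmetic (the 99.9% target and 30-day period are illustrative):

```python
# Illustrative error-budget arithmetic; SLO value and period are examples.
def error_budget_minutes(slo: float, period_minutes: int = 30 * 24 * 60) -> float:
    """Allowed downtime per period for a given availability SLO (e.g. 0.999)."""
    return (1.0 - slo) * period_minutes

def budget_remaining(slo: float, downtime_minutes: float,
                     period_minutes: int = 30 * 24 * 60) -> float:
    """Minutes of budget left after the downtime observed so far."""
    return error_budget_minutes(slo, period_minutes) - downtime_minutes

# 99.9% over 30 days allows about 43.2 minutes of downtime.
print(round(error_budget_minutes(0.999), 1))    # 43.2
print(round(budget_remaining(0.999, 30.0), 1))  # 13.2
```

Teams typically alert on the budget's burn rate rather than on its absolute level, so a fast-burning incident pages sooner than slow background errors.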

Posted 2 weeks ago

Apply

6.0 - 8.0 years

10 - 20 Lacs

Visakhapatnam

Work from Office

Role & responsibilities: We are looking for an experienced and driven Senior Site Reliability Engineer (SRE) to architect, implement, and maintain robust cloud infrastructure. This role demands a deep understanding of AWS, Kubernetes, ECS, and the ability to build scalable, secure, and highly available infrastructure from scratch. The ideal candidate will be a strong advocate for DevOps principles, automation, and reliability, and will possess the skills to support and optimize complex microservices-based architectures.

Key Responsibilities

Infrastructure Design & Implementation: Design and build highly scalable, fault-tolerant, and secure cloud infrastructure using AWS, Kubernetes, and ECS. Lead efforts in infrastructure as code (IaC) using tools like Terraform or CloudFormation. Develop and enforce best practices for infrastructure provisioning, security, and cost optimization.

System Reliability & Performance: Ensure availability, performance, scalability, and security of production systems. Implement observability strategies including monitoring, logging, and alerting using tools such as Prometheus, Grafana, ELK, or Datadog. Analyse system performance metrics and proactively identify potential issues and bottlenecks.

DevOps & Automation: Build and maintain CI/CD pipelines to streamline code deployments across environments. Drive automation in infrastructure provisioning, configuration management, and operational tasks. Ensure repeatable and reliable deployments using containers and orchestration tools like Kubernetes and ECS.

Service Management: Own the SRE lifecycle, including incident management, postmortems, root cause analysis, and runbook creation. Collaborate closely with development and QA teams to ensure seamless microservices integration, deployment, and lifecycle management. Maintain service-level objectives (SLOs), service-level agreements (SLAs), and error budgets.

Security & Compliance: Implement and enforce cloud security best practices for networking, identity and access management, and data protection. Support audits, compliance assessments, and vulnerability remediation. Monitor for security anomalies and work with security teams to respond to threats.

Technical Skills: 6+ years of hands-on experience in Site Reliability Engineering, DevOps, or Cloud Engineering. Expertise in AWS services such as EC2, S3, RDS, IAM, VPC, Lambda, CloudWatch, etc. Strong knowledge of Kubernetes and container orchestration best practices. Experience managing services on Amazon ECS (Fargate or EC2). Proficient in infrastructure-as-code tools like Terraform, CloudFormation, or Pulumi. Skilled in scripting languages such as Python, Bash, or Go. Solid grasp of networking, load balancing, DNS, and firewall rules in cloud environments. Deep understanding of microservices architectures, API gateways, and service meshes.

Soft Skills: Proven leadership and cross-functional collaboration skills. Strong problem-solving and incident-resolution mindset. Clear communication, documentation, and stakeholder reporting abilities. Passion for continuous improvement and automation.

Preferred Qualifications: AWS certifications such as AWS Certified DevOps Engineer, Solutions Architect Professional, or equivalent. Familiarity with service meshes like Istio or Linkerd. Experience with serverless architectures and event-driven systems. Knowledge of regulatory compliance (SOC2, ISO 27001, GDPR) in cloud environments.

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

India

Remote

At TechBiz Global, we provide recruitment services to our top clients from our portfolio. We are currently seeking a DevOps Support Engineer to join one of our clients' teams in India who can start in August. If you're looking for an exciting opportunity to grow in an innovative environment, this could be the perfect fit for you.

Key Responsibilities: Monitor and troubleshoot Azure and AWS environments to ensure optimal performance and availability. Respond promptly to incidents and alerts, investigating and resolving issues efficiently. Perform basic scripting and automation tasks to streamline cloud operations (e.g., Bash, Python). Communicate clearly and fluently in English with customers and internal teams. Collaborate closely with the Team Lead, following Standard Operating Procedures (SOPs) and escalation workflows. Work in a rotating shift schedule, including weekends and nights, ensuring continuous support coverage.

Shift Detail: Each engineer works about 4 to 5 shifts per week, rotating through morning, evening, and night shifts, including weekends, to cover 24/7 support evenly among the team. Rotation ensures no single engineer is always working nights or weekends; the load is shared fairly among the team.

Qualifications: 2-5 years of experience in DevOps or cloud support roles (SLA Level). Strong familiarity with AWS and Azure cloud environments. Experience with CI/CD tools such as GitHub Actions or Jenkins. Proficiency with monitoring tools like Datadog, CloudWatch, or similar. Basic scripting skills in Bash, Python, or a comparable language. Excellent communication skills in English. Comfortable and willing to work in a shift-based support role, including night and weekend shifts. Prior experience in a shift-based support environment is preferred.

What We Offer: Remote work opportunity: work from anywhere in India with a stable internet connection. Comprehensive training program, including shadowing existing processes to gain hands-on experience, and learning internal tools, Standard Operating Procedures (SOPs), ticketing systems, and escalation paths to ensure smooth onboarding and ongoing success.
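The fair shift rotation this listing describes (no one engineer stuck with every night or weekend shift) can be sketched as a simple round-robin walk over the roster. The engineer names and the shift count below are hypothetical.

```python
from itertools import cycle

def build_rota(engineers, num_shifts):
    """Assign shift slots 0..num_shifts-1 to engineers round-robin, so the
    load differs by at most one shift per person. Illustrative only: real
    rotas also honor rest periods, leave, and preferences."""
    assignment = {}
    roster = cycle(engineers)
    for shift in range(num_shifts):
        assignment[shift] = next(roster)
    return assignment

# Hypothetical week of 21 shifts (3 per day) shared by a team of four.
team = ["asha", "ben", "carla", "dev"]
rota = build_rota(team, 21)
counts = {e: sum(1 for v in rota.values() if v == e) for e in team}
print(counts)  # shift counts differ by at most one across the team
```

Because 21 is not divisible by 4, one engineer carries a sixth shift this week; rotating the roster's starting point each week evens that out over time.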

Posted 2 weeks ago

Apply

4.0 years

0 - 0 Lacs

Ahmedabad, Gujarat, India

On-site

Experience: 4.00+ years. Salary: USD 1794-2297 / month (based on experience). Expected Notice Period: 15 Days. Shift: (GMT+05:30) Asia/Kolkata (IST). Opportunity Type: Office (Ahmedabad). Placement Type: Full-Time, 6-month Project-Based Employment (payroll and compliance to be managed by Uplers Solutions Pvt. Ltd.) (*Note: This is a requirement for one of Uplers' clients, a top US auto inspection company.) What do you need for this opportunity? Must-have skills required: CloudWatch, Serverless Architecture, Python/Bash/PowerShell, Terraform/Terragrunt, Agile, AWS, Azure DevOps, Docker, Kubernetes. The top US auto inspection company is looking for: Key Responsibilities: Design, implement, and manage CI/CD pipelines using Azure DevOps and other tools. Build, maintain, and scale infrastructure on AWS using Terraform and Terragrunt. Automate infrastructure provisioning, configuration management, and application deployment. Implement monitoring, alerting, and logging solutions to ensure system reliability and performance. Collaborate with software engineers, QA, and security teams to improve release velocity and system stability. Define and enforce best practices for infrastructure and deployment workflows. Support cloud migration, cost optimization, and performance tuning initiatives. Required Skills and Qualifications: Technical Skills: 3+ years of experience in a DevOps role. Strong hands-on experience with Azure DevOps (Pipelines, Repos, Artifacts). Deep understanding of AWS services like EC2, EBS, S3, IAM, VPC, RDS, EKS, etc. Proven experience with Terraform and Terragrunt for IaC and managing multi-environment setups. Proficiency with scripting (e.g., Bash, Python, PowerShell). Experience with containerization and orchestration tools (e.g., Docker, Kubernetes). Familiarity with observability tools (e.g., CloudWatch, ELK, Datadog, etc.). Strong understanding of networking, security, and system architecture in the cloud.
Soft Skills Excellent problem-solving and analytical skills to identify root causes of issues and recommend solutions. Strong communication skills to collaborate with team members and articulate technical concepts to non-technical stakeholders. Ability to interpret business requirements and translate them into effective test strategies. Team player with a proactive attitude and the ability to work independently with minimal supervision. Nice to have: Certifications: AWS Certified DevOps Engineer, Kubernetes Administrator, etc. Experience with serverless architectures Exposure to Agile/Scrum development practices How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 2 weeks ago

Apply

7.0 - 12.0 years

10 - 20 Lacs

Chennai

Work from Office

Dear Candidate,

Greetings from Genworx.ai.

About Us
Genworx.ai is a pioneering startup at the forefront of generative AI innovation, dedicated to transforming how enterprises harness artificial intelligence. We specialize in developing sophisticated AI agents and platforms that bridge the gap between cutting-edge AI technology and practical business applications. We have an opening for the Principal DevOps Engineer position at Genworx.ai; please find the detailed job description below.

Job Title: Principal DevOps Engineer
Experience: 8+ years, with at least 5+ years in cloud automation
Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field
Work Location: Chennai
Job Type: Full-Time
Website: https://genworx.ai/

Key Responsibilities:

Cloud Strategy and Automation Leadership:
- Architect and lead the implementation of cloud automation strategies with a primary focus on GCP.
- Integrate multi-cloud environments by leveraging AWS and/or Microsoft Azure as needed.
- Define best practices for Infrastructure as Code (IaC) and automation frameworks.

Technical Architecture & DevOps Practice:
- Design scalable, secure, and efficient CI/CD pipelines using industry-leading tools.
- Lead the development and maintenance of automated configuration management systems.
- Establish processes for continuous integration, delivery, and deployment of cloud-native applications.
- Develop solutions for cloud optimization and performance tuning.
- Create reference architectures for DevOps solutions and best practices.
- Establish standards for cloud architecture, versioning, and governance.
- Lead cost optimization initiatives for cloud infrastructure using GenAI.

Security, Compliance & Best Practices:
- Enforce cloud security standards and best practices across all automation and deployment processes.
- Implement role-based access controls and ensure compliance with relevant regulatory standards.
- Continuously evaluate and enhance cloud infrastructure to mitigate risks and maintain high security.

Research & Innovation:
- Drive research into emerging GenAI technologies and techniques in cloud automation and DevOps.
- Lead proof-of-concept development for new AI capabilities.
- Collaborate with research teams on model implementation and support.
- Guide the implementation of novel AI architectures.

Leadership & Mentorship:
- Provide technical leadership and mentorship to teams in cloud automation, DevOps practices, and emerging AI technologies.
- Drive strategic decisions and foster an environment of innovation and continuous improvement.
- Act as a subject matter expert and liaison between technical teams, research teams, and business stakeholders.

Technical Expertise:
- Cloud Platforms: Deep GCP expertise with additional experience in AWS and/or Microsoft Azure.
- DevOps & Automation Tools: Proficiency in CI/CD tools (e.g., GitHub Actions, GitLab, Azure DevOps) and Infrastructure as Code (e.g., Terraform).
- Containerization & Orchestration: Experience with Docker, Kubernetes, and container orchestration frameworks.
- Scripting & Programming: Strong coding skills in Python, Shell scripting, or similar languages.
- Observability: Familiarity with tools like Splunk, Datadog, Prometheus, Grafana, and similar solutions.
- Security: In-depth understanding of cloud security, identity management, and compliance requirements.

Interested candidates, kindly send your updated resume and a link to your portfolio to anandraj@genworx.ai.

Thank you.

Regards,
Anandraj B
Lead Recruiter
Mail ID: anandraj@genworx.ai
Contact: 9656859037
Website: https://genworx.ai/
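Cloud-automation work of the kind described above usually rests on small, reusable primitives. As a purely illustrative sketch (not tied to any Genworx system; all names are hypothetical), a retry wrapper with exponential backoff of the sort scripted around transient cloud-API failures:

```python
import time

def retry(fn, attempts=3, delay=0.01):
    """Call fn until it succeeds or attempts are exhausted.

    Illustrative automation primitive: deployment scripts commonly
    wrap flaky cloud-API calls this way.
    """
    last_exc = None
    for i in range(attempts):
        try:
            return fn()
        except Exception as exc:  # real code would catch specific error types
            last_exc = exc
            time.sleep(delay * (2 ** i))  # exponential backoff between tries
    raise last_exc

calls = {"n": 0}

def flaky():
    """Hypothetical call that fails twice, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(retry(flaky))  # prints "ok" after two retried failures
```

In production this pattern typically lives inside Terraform providers or SDK clients rather than hand-rolled code, but the shape is the same.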

Posted 2 weeks ago

Apply

5.0 years

6 - 10 Lacs

Hyderābād

On-site

At EY, we’re all in to shape your future with confidence. We’ll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world.

Job Title: AWS Senior Data Engineer
Experience Required: Minimum 5+ years

Job Summary: We are seeking a skilled Data Engineer with a strong background in data ingestion, processing, and storage. The ideal candidate will have experience working with various data sources and technologies, particularly in a cloud environment. You will be responsible for designing and implementing data pipelines, ensuring data quality, and optimizing data storage solutions.

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines for data ingestion and processing using Python, Spark, and AWS services.
- Work with on-prem Oracle databases, batch files, and Confluent Kafka for data sourcing.
- Implement and manage ETL processes using AWS Glue and EMR for batch and streaming data.
- Develop and maintain data storage solutions using Medallion Architecture in S3, Redshift, and Oracle.
- Collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs.
- Monitor and optimize data workflows using Airflow and other orchestration tools.
- Ensure data quality and integrity throughout the data lifecycle.
- Implement CI/CD practices for data pipeline deployment using Terraform and other tools.
- Utilize monitoring and logging tools such as CloudWatch, Datadog, and Splunk to ensure system reliability and performance.
- Communicate effectively with stakeholders to gather requirements and provide updates on project status.

Technical Skills Required:
- Proficient in Python for data processing and automation.
- Strong experience with Apache Spark for large-scale data processing.
- Familiarity with AWS S3 for data storage and management.
- Experience with Kafka for real-time data streaming.
- Knowledge of Redshift for data warehousing solutions.
- Proficient in Oracle databases for data management.
- Experience with AWS Glue for ETL processes.
- Familiarity with Apache Airflow for workflow orchestration.
- Experience with EMR for big data processing.

Mandatory: Strong AWS data engineering skills.

Good Additional Skills:
- Familiarity with Terraform for infrastructure as code.
- Experience with messaging services such as SNS and SQS.
- Knowledge of monitoring and logging tools like CloudWatch, Datadog, and Splunk.
- Experience with AWS DataSync, DMS, Athena, and Lake Formation.

Communication Skills: Excellent verbal and written communication skills are mandatory for effective collaboration with team members and stakeholders.

EY | Building a better working world
EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets. Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow. EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.
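The "Medallion Architecture" the role mentions organizes data into bronze (raw), silver (validated), and gold (business-level) layers. A minimal sketch of that layering in plain Python dicts (in practice these would be Spark DataFrames on S3; all records here are invented):

```python
# Bronze layer: raw ingested events, possibly malformed.
bronze = [
    {"user": "a", "amount": "10"},
    {"user": "a", "amount": "5"},
    {"user": None, "amount": "3"},   # bad record, dropped in silver
    {"user": "b", "amount": "7"},
]

# Silver layer: validated and typed records only.
silver = [
    {"user": r["user"], "amount": int(r["amount"])}
    for r in bronze
    if r["user"] is not None
]

# Gold layer: business-level aggregate (total amount per user).
gold = {}
for r in silver:
    gold[r["user"]] = gold.get(r["user"], 0) + r["amount"]

print(gold)  # {'a': 15, 'b': 7}
```

The point of the layering is that each stage is reproducible from the one below it: bad records can be dropped or repaired in silver without re-ingesting bronze.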

Posted 2 weeks ago

Apply

0 years

5 - 6 Lacs

Bengaluru

On-site

- Expertise in development using Core Java, J2EE, Spring Boot, Microservices, and Web Services; SOA experience with SOAP as well as RESTful services using JSON formats, with Kafka messaging.
- Working proficiency in enterprise development toolsets such as Jenkins, Git/Bitbucket, Sonar, Black Duck, Splunk, Apigee, etc.
- Experience with AWS cloud monitoring tools such as Datadog, CloudWatch, and Lambda.
- Experience with XACML authorization policies.
- Experience with NoSQL and SQL databases such as Cassandra, Aurora, and Oracle.
- Good understanding of React JS, the Photon framework, design, and Kubernetes.
- Experience with Git/Bitbucket, Maven, Gradle, and Jenkins to build and deploy code to production environments.

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally who care about your growth — one that seeks to provide you with exciting projects, opportunities, and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status, or any other basis covered by applicable law. All employment is decided on the basis of qualifications, merit, and business need.

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY – Consulting – AWS Data Engineering Manager

The Opportunity
We are seeking an experienced and visionary AWS Data Engineering Manager with 7+ years of experience to lead our data engineering initiatives within the Consulting practice. This role is ideal for a strategic thinker with a strong technical foundation in AWS and data engineering who can guide teams, architect scalable solutions, and drive innovation in data platforms. You will play a pivotal role in shaping data strategies, mentoring teams, and delivering impactful solutions for our clients.

Key Responsibilities
- Lead the design and implementation of scalable data pipelines using AWS technologies, supporting both batch and real-time data processing.
- Architect robust data lake solutions based on the Medallion Architecture using Amazon S3, and integrate with Redshift and Oracle for downstream analytics.
- Oversee the development of data ingestion frameworks from diverse sources, including on-premise databases, batch files, and Kafka streams.
- Guide the development of Spark streaming applications on Amazon EMR and batch processing using AWS Glue and Python.
- Manage workflow orchestration using Apache Airflow and ensure operational excellence through monitoring and optimization.
- Collaborate with cross-functional teams including data scientists, analysts, and DevOps to align data solutions with business goals.
- Provide technical leadership, mentorship, and performance management for a team of data engineers.
- Engage with clients to understand business requirements, define data strategies, and deliver high-quality solutions.

Required Skills and Experience
- Proven leadership experience in managing data engineering teams and delivering complex data solutions.
- Deep expertise in AWS services including S3, Redshift, Glue, EMR, and Oracle.
- Strong programming skills in Python and Spark, with a solid understanding of data modeling and ETL frameworks.
- Hands-on experience with Kafka for real-time data ingestion and processing.
- Proficiency in workflow orchestration tools like Apache Airflow.
- Strong understanding of Medallion Architecture and data lake best practices.

Preferred / Nice-to-Have Skills
- Experience with Infrastructure as Code (IaC) using Terraform.
- Familiarity with additional AWS services such as SNS, SQS, DynamoDB, DMS, Athena, and Lake Formation.
- Knowledge of monitoring and alerting tools like CloudWatch, Datadog, or Splunk.
- Understanding of data security best practices for data at rest and in transit.

Qualifications
- BTech / MTech / MCA / MBA or equivalent.
- AWS certifications (e.g., AWS Certified Data Analytics – Specialty, AWS Certified Solutions Architect) are a plus.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
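The Airflow orchestration work this role describes boils down to declaring tasks and their dependencies as a DAG, then letting the scheduler run them in a valid order. A minimal sketch of that idea using only the standard library (the task graph here is hypothetical, loosely echoing the pipeline sources named above; real code would use Airflow operators):

```python
from graphlib import TopologicalSorter

# Hypothetical task graph of the kind an Airflow DAG encodes:
# each task maps to the set of tasks it depends on.
deps = {
    "ingest_kafka": set(),
    "ingest_oracle": set(),
    "glue_transform": {"ingest_kafka", "ingest_oracle"},
    "load_redshift": {"glue_transform"},
}

# static_order() yields tasks so every dependency precedes its dependents.
order = list(TopologicalSorter(deps).static_order())
print(order)  # both ingest tasks appear before glue_transform
```

Airflow adds scheduling, retries, and parallel execution of independent branches on top of exactly this dependency-ordering idea.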

Posted 2 weeks ago

Apply

8.0 years

18 Lacs

India

On-site

We are currently seeking a technically proficient and hands-on DevOps Manager/Team Lead to spearhead our DevOps initiatives across a diverse portfolio of applications, encompassing both modern and legacy systems. This includes platforms such as Odoo (Python), Magento (PHP), Node.js, and other web-based applications. The ideal candidate will bring significant expertise in continuous integration/continuous delivery (CI/CD), automation, containerization, and cloud infrastructure (AWS, Azure, GCP). We highly value candidates holding relevant professional certifications.

In this role, your key responsibilities will include:
* Leading, mentoring, and fostering the growth of our DevOps engineers.
* Overseeing the deployment and maintenance of applications such as Odoo (Python/PostgreSQL), Magento (PHP/MySQL), Node.js (JavaScript/TypeScript), and other LAMP/LEMP stack applications.
* Designing and managing CI/CD pipelines tailored to each application, utilizing tools like Jenkins, GitHub Actions, and GitLab CI.
* Managing environment-specific configurations for staging, production, and QA environments.
* Implementing containerization for both legacy and modern applications using Docker and orchestrating deployments with Kubernetes (EKS/AKS/GKE) or Docker Swarm.
* Establishing and maintaining Infrastructure as Code practices using Terraform, Ansible, or CloudFormation.
* Implementing and maintaining application and infrastructure monitoring solutions using tools like Prometheus, Grafana, ELK, or Datadog.
* Ensuring the security, resilience, and compliance of our systems with industry standards.
* Optimizing cloud costs and infrastructure performance.
* Collaborating effectively with development, QA, and IT support teams to ensure seamless delivery processes.
* Troubleshooting performance, deployment, and scaling challenges across various technology stacks.

We are looking for someone with the following essential skills:
* Over 8 years of hands-on experience in DevOps, Cloud, or System Engineering roles.
* At least 2 years of experience in managing or leading DevOps teams.
* Proven experience supporting and deploying Odoo on Ubuntu/Linux with PostgreSQL, Magento with Apache/Nginx, PHP-FPM, and MySQL/MariaDB, and Node.js with PM2/Nginx or containerized setups.
* Solid experience with AWS, Azure, or GCP infrastructure in production environments.
* Strong scripting abilities in Bash, Python, PHP CLI, or Node CLI.
* A deep understanding of Linux system administration and networking fundamentals.
* Experience with Git, SSH, reverse proxies (Nginx), and load balancers.
* Excellent communication skills, including experience in managing client interactions.

While not mandatory, the following certifications are highly valued:
* AWS Certified DevOps Engineer – Professional
* Azure DevOps Engineer Expert
* Google Cloud Professional DevOps Engineer
* Bonus: Magento Cloud DevOps or Odoo deployment experience

Additionally, the following skills would be a valuable asset:
* Experience with multi-region failover, high availability (HA) clusters, or Recovery Point Objective (RPO)/Recovery Time Objective (RTO)-based design.
* Familiarity with MySQL/PostgreSQL optimization and message brokers like Redis, RabbitMQ, or Celery.
* Previous experience with GitOps practices and tools like ArgoCD, Helm, or Ansible Tower.
* Knowledge of VAPT 2.0, WCAG compliance, and infrastructure security best practices.

Thank you for your time and consideration.

Job Type: Full-time
Pay: Up to ₹1,800,000.00 per year
Work Location: In person
Speak with the employer: +91 8861265053
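The per-application CI/CD pipelines this role manages share one core behavior regardless of tool (Jenkins, GitHub Actions, GitLab CI): stages run in order and a failure halts everything downstream. A minimal sketch of that behavior (stage names and steps are hypothetical, not any specific tool's API):

```python
def run_pipeline(stages):
    """Run (name, step) pairs in order; stop at the first failure.

    Returns (completed_stage_names, failed_stage_name_or_None),
    mirroring how CI tools gate later stages on earlier ones.
    """
    completed = []
    for name, step in stages:
        try:
            step()
        except Exception:
            return completed, name
        completed.append(name)
    return completed, None

# Hypothetical three-stage pipeline where every stage succeeds.
stages = [
    ("build", lambda: None),
    ("test", lambda: None),
    ("deploy_staging", lambda: None),
]
print(run_pipeline(stages))  # (['build', 'test', 'deploy_staging'], None)
```

Real pipelines layer artifacts, environment-specific configuration, and approvals on top, but the fail-fast stage ordering is the invariant worth testing.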

Posted 2 weeks ago

Apply