0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
Application Developer Specialist (BP15054)
Department: Fleet, Fuel, Services & Analytics Services
Location: Bangalore, India
Department: UD Digital Solutions & IT - UD Connected Solutions (UDCS) / Fleet & Fuel Services (FFS)
Job Overview
UD Trucks is known for our pioneering technologies and products within the commercial automotive industry. We are looking for a business-minded Full Stack Developer to support the product area of Fleet and Fuel Services. As part of our forward-thinking, market-leading in-house Digital Solutions & IT team, you will be responsible for developing and maintaining the IT solutions in accordance with the UD DS & IT Architecture.
Responsibilities
Understand business requirements, customer needs and processes by working with Business Analysts, Product Architects, Product Owners and Product Area Development Teams across functions.
Create user stories in Jira as per the requirements, aligned to a Minimum Viable Product approach to building the services.
Be accountable for the build, test and deployment of software code.
Work in a pair-programming environment using a Test Driven Development (TDD) approach.
Work closely with infrastructure teams such as DBA, Network, etc.
Perform code reviews, confirm against business needs, and test acceptance criteria.
Discuss release and deployment plans with the release manager.
Build secure, scalable, high-performance services.
Evaluate technologies and build prototypes for continuous improvement by taking part in pre-studies and proofs of concept.
Work in a collaborative environment within the product team and with teams in other regions (Japan, China).
Work independently and be self-driven without much supervision.
Be willing to take on new and challenging assignments.
Be a quick learner, adaptive, with a high ability to think out of the box.
Required skills
6+ years of hands-on experience in Java technologies: Java, J2EE, Java 8
Experience working with Spring Boot, Hibernate, Spring JPA
Hands-on experience in MongoDB
Experience working with RESTful web services, Apache Maven, Apache ActiveMQ, Apache Tomcat and JBoss Server
Experience working with JUnit
Good to have: experience with AWS cloud
Hands-on experience working in a microservice architecture
Knowledge of REST API integration and the Jest testing framework
Desired Skills
Experience working with Material UI, Bootstrap
Experience working with charts, Google Maps, Angular, AngularJS, npm, Yarn, Node, Git
Experience with CompletableFuture
Knowledge of monitoring applications like Grafana/Datadog
Experience as a Scrum Master is a value add
About UD Trucks
Part of the Isuzu group, UD Trucks is a leading global commercial vehicle solutions provider headquartered in Japan. At UD Trucks, we are defining the next generation of smart logistics solutions through advanced innovations in automation, electromobility and connectivity. UD Trucks develops, manufactures and sells a wide range of heavy, medium and light-duty trucks, operating in more than 60 countries across all continents. Our trucks and people go the extra mile for our customers and business partners, day in and day out. We are an 8,000+ strong team of colleagues with 40 nationalities who bring diversity and passion in delivering our products and services. We trust each other, work collaboratively and embrace change. At UD Trucks, our purpose is Better Life – to make life better for people and the planet.
We have developed a culture that promotes:
Diverse and friendly culture – A strong culture of diversity and inclusion, with annual events, daily activities and open communication platforms, including various internal voluntary networks.
Empowered growth – Global exposure and growth opportunities across functions and countries through an internal mobility system and self-driven career opportunities, building a learning organization by enabling self-managed learning supported by the UD Academy.
Flexibility with trust – We continue to fully support both remote working (where and when applicable) and flexible working hours, and we actively encourage our colleagues to maintain a good work/life balance. You will have the autonomy and flexibility to split your working time between our wonderful, modern, well-equipped HQ and working remotely.
Be part of our journey to create Better Life for society, for our customers and for yourself. UD Trucks is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all our colleagues.
Posted 4 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
As a Manager, Commercial Sales, you will provide strategy, mentorship, and guidance for a team of Commercial Account Executives. This role impacts one of the largest lines of business in the company, ultimately bringing a proven product to a multi-billion dollar market. At Datadog, we place value in our office culture - the relationships and collaboration it builds and the creativity it brings to the table. We operate as a hybrid workplace to ensure our Datadogs can create a work-life harmony that best fits them.
What You’ll Do:
Motivate your team to exceed objectives
Develop and maintain accurate team forecasts
Hire, oversee, and train a team of Account Executives
Drive lead generation, high activity standards, and pipeline management
Facilitate the ramp up for all new team members, coaching them to success
Monitor the sales funnel from prospect to close and create metrics to improve performance
Who You Are:
Experienced in hi-tech direct sales
Familiar with SaaS/Cloud and Salesforce
Able to accurately forecast team performance
Passionate about coaching, mentorship, and team management
Motivated by helping people grow with a desire to continue learning
A self-starter who thrives in a high-growth, rapidly changing marketplace
Proven in overseeing an outbound sales team in all areas of the sales cycle
Datadog values people from all walks of life. We understand not everyone will meet all the above qualifications on day one. That's okay. If you’re passionate about technology and want to grow your experience, we encourage you to apply.
Benefits and Growth:
High income earning opportunities based on performance
Opportunity for President's Club
New hire stock equity (RSU) and employee stock purchase plan (ESPP)
Continuous professional development, product training, and career pathing
Sales training in MEDDIC and Command of the Message
Intra-departmental mentor and buddy program for in-house networking
An inclusive company culture, opportunity to join our Community Guilds
Generous global benefits
Benefits and Growth listed above may vary based on the country of your employment and the nature of your employment with Datadog.
About Datadog:
Datadog (NASDAQ: DDOG) is a global SaaS business, delivering a rare combination of growth and profitability. We are on a mission to break down silos and solve complexity in the cloud age by enabling digital transformation, cloud migration, and infrastructure monitoring of our customers’ entire technology stacks. Built by engineers, for engineers, Datadog is used by organizations of all sizes across a wide range of industries. Together, we champion professional development, diversity of thought, innovation, and work excellence to empower continuous growth. Join the pack and become part of a collaborative, pragmatic, and thoughtful people-first community where we solve tough problems, take smart risks, and celebrate one another. Learn more about #DatadogLife on Instagram, LinkedIn, and the Datadog Learning Center.
Equal Opportunity at Datadog:
Datadog is an Affirmative Action and Equal Opportunity Employer and is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, and more. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. Here are our Candidate Legal Notices for your reference.
Your Privacy: Any information you submit to Datadog as part of your application will be processed in accordance with Datadog’s Applicant and Candidate Privacy Notice.
Posted 4 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description - DevOps Engineer
This role involves strong leadership with proven skills in people management, project management and Agile Scrum development methodologies, incorporating DevOps best practices in the team.
Responsibilities:
● Monitor the progress of technical personnel, ensure that application development and deployment are done in the best possible way, and implement quality control and review systems throughout the development and deployment processes.
● Be responsible for ensuring that the DevOps strategy is implemented in the end-to-end development of the product, while ensuring scalability, stability and high performance.
● Find ways to improve the existing architecture of the product, keeping in mind the various automation tools available and the skills required.
● Manage other DevOps roles and obtain full efficiency from the team as the primary target.
● Employ and leverage standard tools and techniques to maximize team effectiveness (physical and/or virtual boards, collaboration and productivity software, etc.).
● Be an evangelist for DevOps technology and champion agile development best practices, including automated testing using CI/CD and Perl/Python/Groovy/Java/Bash.
● Manage build and release, provide CI/CD expertise to agile teams in the enterprise, and automate infrastructure using Ansible and IaC tools.
● Work on a cloud-based infrastructure spanning Amazon Web Services, Microsoft Azure, and Google Cloud.
● Be responsible for defining the business continuity framework/disaster recovery of the group.
● Evaluate and collaborate with cross-functional teams on how to achieve strategic development objectives using DevOps methodologies.
● Work with senior software managers and architects to develop multi-generation application/product/cloud plans.
● Work with tech partners and professional consultants for a successful DevOps and microservices adoption or implementation journey.
● Contribute to and create integration and orchestration blueprints.
Technical Expertise:
● Deep knowledge of infrastructure, cloud, DevOps, SRE, database management, observability, and cybersecurity services.
● Solid 4+ years of experience as an SRE/DevOps engineer with a proven track record of handling large-scale production environments.
● Strong experience with databases and DataOps (MySQL, PostgreSQL, MongoDB, Elasticsearch, Kafka).
● Hands-on experience with ELK or other logging and observability tools.
● Hands-on experience with Prometheus, Grafana and Alertmanager, and on-call processes such as PagerDuty.
● Strong skills in Kubernetes, Terraform, Helm, ArgoCD, AWS/GCP/Azure, etc.
● Good with Python/Go scripting automation.
● Strong fundamentals: DNS, networking, Linux.
● Experience with APM tools such as New Relic, Datadog, and OpenTelemetry.
● Good experience with incident response, incident management, and writing detailed RCAs.
● Experience with Git and coding best practices.
● Solutioning & architecture: proven ability to design, implement, and optimize end-to-end cloud solutions, following well-architected frameworks and best practices.
● Expertise in developing software applications and managing high-demand infrastructure.
● Experience deploying SaaS products across the major cloud providers - AWS, Azure, and GCP (well beyond compute).
● Hands-on experience with DevOps and related best practices using Bash, PowerShell, Python, etc.
● Experience with object-oriented design using programming languages (Python/Java/Go/Bash).
● Decent skills in configuration management/IaC tools - Ansible and Terraform.
● Good database management skills - MongoDB, Elasticsearch, MySQL, ScyllaDB, Redis, Kafka, etc.
● Strong understanding and hands-on experience of the cloud-native landscape - orchestration using Kubernetes, observability, SecOps, etc.
● Experience with source code management, building CI/CD pipelines, and GitOps is a plus.
● Good communication skills to collaborate with clients and cross-functional teams.
● Problem-solving attitude.
● Collaborative team spirit.
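To make the monitoring and on-call bullets above concrete, here is a minimal Python sketch of an alerting hook: it runs an instant query against the Prometheus HTTP API and, if an error-rate threshold is breached, raises a page through the PagerDuty Events API v2. The Prometheus URL, the `job:request_error_rate:ratio5m` recording rule, and the 2% threshold are illustrative assumptions, not details from this posting.

```python
"""Sketch: poll a Prometheus metric and page via PagerDuty when it breaches a threshold."""
import os
import requests

PROMETHEUS_URL = os.environ.get("PROMETHEUS_URL", "http://prometheus:9090")  # assumed address
PAGERDUTY_ROUTING_KEY = os.environ["PAGERDUTY_ROUTING_KEY"]
ERROR_RATE_QUERY = 'job:request_error_rate:ratio5m{job="checkout"}'  # hypothetical recording rule
THRESHOLD = 0.02  # page when the 5m error rate exceeds 2%


def current_error_rate() -> float:
    """Run an instant query against the Prometheus HTTP API and return the value."""
    resp = requests.get(
        f"{PROMETHEUS_URL}/api/v1/query",
        params={"query": ERROR_RATE_QUERY},
        timeout=10,
    )
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    return float(result[0]["value"][1]) if result else 0.0


def trigger_page(rate: float) -> None:
    """Send a trigger event to the PagerDuty Events API v2."""
    requests.post(
        "https://events.pagerduty.com/v2/enqueue",
        json={
            "routing_key": PAGERDUTY_ROUTING_KEY,
            "event_action": "trigger",
            "payload": {
                "summary": f"checkout error rate {rate:.2%} above {THRESHOLD:.0%}",
                "source": "slo-watcher",
                "severity": "critical",
            },
        },
        timeout=10,
    )


if __name__ == "__main__":
    rate = current_error_rate()
    if rate > THRESHOLD:
        trigger_page(rate)
```

In practice this routing usually lives in Alertmanager rules; a script like this is mainly useful for ad-hoc checks or glue automation around the tools named above.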
Posted 4 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Teamwork makes the stream work.
Roku is changing how the world watches TV
Roku is the #1 TV streaming platform in the U.S., Canada, and Mexico, and we've set our sights on powering every television in the world. Roku pioneered streaming to the TV. Our mission is to be the TV streaming platform that connects the entire TV ecosystem. We connect consumers to the content they love, enable content publishers to build and monetize large audiences, and provide advertisers unique capabilities to engage consumers. From your first day at Roku, you'll make a valuable - and valued - contribution. We're a fast-growing public company where no one is a bystander. We offer you the opportunity to delight millions of TV streamers around the world while gaining meaningful experience across a variety of disciplines.
About the role
We are seeking a skilled engineer with exceptional DevOps/SRE skills to join our team in Bangalore. Responsibilities include automation and scaling of Big Data and Analytics infrastructure, including massively parallel processing (MPP) databases, in public clouds. Work also includes building CI/CD pipelines, setting up monitoring and alerting for production infrastructure, and updating and maintaining supported systems.
What you’ll be doing
Develop best practices around cloud infrastructure provisioning, disaster recovery, and developer onboarding.
Maintain and scale massively parallel processing (MPP) databases.
Collaborate with developers on optimal system architecture for scaling, resource utilization, fault tolerance, reliability, and availability.
Conduct low-level systems debugging, performance measurement and optimization on large production clusters and low-latency services.
Create scripts and automation that can react quickly to infrastructure issues and take corrective actions.
Participate in architecture discussions, influence product roadmaps, and take ownership and responsibility for new projects.
Collaborate and communicate with a geographically distributed team.
We’re excited if you have
8+ years of experience in DevOps or Site Reliability Engineering (SRE).
Cloud infrastructure experience with Amazon AWS, Google Cloud Platform (GCP), Microsoft Azure, or other public cloud platforms - GCP is preferred.
At least 3 of the following technologies/tools: Big Data/Hadoop, Kafka, Spark, Airflow, Trino, Druid, Hive, Pinot or ScyllaDB.
Experience with Kubernetes, Docker and Terraform.
Strong background in Linux/Unix.
System engineering around edge cases, failure modes, and disaster recovery.
Experience with shell scripting, or equivalent programming skills in Python.
Experience working with monitoring and alerting tools such as Datadog/Prometheus/Grafana and PagerDuty, and being part of on-call rotations.
Experience with networking, network security, and data security.
Bachelor’s degree, or equivalent work experience.
Benefits
Roku is committed to offering a diverse range of benefits as part of our compensation package to support our employees and their families. Our comprehensive benefits include global access to mental health and financial wellness support and resources. Local benefits include statutory and voluntary benefits which may include healthcare (medical, dental, and vision), life, accident, disability, commuter, and retirement options (401(k)/pension). Our employees can take time off work for vacation and other personal reasons to balance their evolving work and life needs. It's important to note that not every benefit is available in all locations or for every role. For details specific to your location, please consult with your recruiter.
The Roku Culture
Roku is a great place for people who want to work in a fast-paced environment where everyone is focused on the company's success rather than their own. We try to surround ourselves with people who are great at their jobs, who are easy to work with, and who keep their egos in check. We appreciate a sense of humor. We believe a fewer number of very talented folks can do more for less cost than a larger number of less talented teams. We're independent thinkers with big ideas who act boldly, move fast and accomplish extraordinary things through collaboration and trust. In short, at Roku you'll be part of a company that's changing how the world watches TV.
We have a unique culture that we are proud of. We think of ourselves primarily as problem-solvers, which itself is a two-part idea. We come up with the solution, but the solution isn't real until it is built and delivered to the customer. That penchant for action gives us a pragmatic approach to innovation, one that has served us well since 2002. To learn more about Roku, our global footprint, and how we've grown, visit https://www.weareroku.com/factsheet.
By providing your information, you acknowledge that you have read our Applicant Privacy Notice and authorize Roku to process your data subject to those terms.
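The "scripts and automation that can react quickly to infrastructure issues" responsibility can be illustrated with a minimal self-healing watcher; the health endpoint, systemd unit name, and retry counts below are hypothetical placeholders rather than anything Roku-specific.

```python
"""Sketch: probe a service endpoint and restart its unit after repeated failures."""
import subprocess
import time

import requests

HEALTH_URL = "http://localhost:8080/healthz"   # hypothetical endpoint
SERVICE_UNIT = "query-gateway.service"          # hypothetical systemd unit
MAX_FAILURES = 3


def healthy() -> bool:
    try:
        return requests.get(HEALTH_URL, timeout=2).status_code == 200
    except requests.RequestException:
        return False


def restart_service() -> None:
    # Corrective action: restart the unit and let systemd handle ordering.
    subprocess.run(["systemctl", "restart", SERVICE_UNIT], check=True)


def watch(interval: float = 10.0) -> None:
    failures = 0
    while True:
        if healthy():
            failures = 0
        else:
            failures += 1
            if failures >= MAX_FAILURES:
                restart_service()
                failures = 0
        time.sleep(interval)


if __name__ == "__main__":
    watch()
```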
Posted 4 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
As a Sales Development Representative (SDR), you will prospect, qualify, and generate customer leads to assist in Datadog’s overall business growth segment. By partnering with internal stakeholders, you will help IT and technology innovators across markets recognize Datadog’s impact in their digital transformation and migration to the cloud. SDRs have the opportunity to grow their careers in Sales and continue contributing to Datadog team success. At Datadog, we place value in our office culture - the relationships and collaboration it builds and the creativity it brings to the table. We operate as a hybrid workplace to ensure our Datadogs can create a work-life harmony that best fits them.
What You’ll Do:
Collaborate cross-functionally with various Datadog teams
Drive initial prospect qualification and schedule discovery meetings
Develop, present, and implement strategies for acquiring new business
Conduct outbound outreach by cold calling and emailing prospective customers
Learn to follow a well-defined methodology to help identify a customer's unique needs
Who You Are:
Motivated by a career in sales
Someone with an innate curiosity to learn
Have a desire to succeed alongside teammates
Proven in your written and verbal communication
Comfortable with being able to learn from rejection
Datadog values people from all walks of life. We understand not everyone will meet all the above qualifications on day one. That's okay. If you’re passionate about technology and want to grow your experience, we encourage you to apply.
Benefits and Growth:
High income earning opportunities based on your performance
New hire stock equity (RSU) and employee stock purchase plan (ESPP)
Continuous professional development, product training, and career pathing
Sales training in MEDDIC and Command of the Message
Intra-departmental mentor and buddy program for in-house networking
An inclusive company culture, opportunity to join our Community Guilds
Generous global benefits
Benefits and Growth listed above may vary based on the country of your employment and the nature of your employment with Datadog.
About Datadog:
Datadog (NASDAQ: DDOG) is a global SaaS business, delivering a rare combination of growth and profitability. We are on a mission to break down silos and solve complexity in the cloud age by enabling digital transformation, cloud migration, and infrastructure monitoring of our customers’ entire technology stacks. Built by engineers, for engineers, Datadog is used by organizations of all sizes across a wide range of industries. Together, we champion professional development, diversity of thought, innovation, and work excellence to empower continuous growth. Join the pack and become part of a collaborative, pragmatic, and thoughtful people-first community where we solve tough problems, take smart risks, and celebrate one another. Learn more about #DatadogLife on Instagram, LinkedIn, and the Datadog Learning Center.
Equal Opportunity at Datadog:
Datadog is an Affirmative Action and Equal Opportunity Employer and is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, and more. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. Here are our Candidate Legal Notices for your reference.
Your Privacy: Any information you submit to Datadog as part of your application will be processed in accordance with Datadog’s Applicant and Candidate Privacy Notice.
Posted 4 weeks ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description
We are looking for a passionate and accomplished individual to join our Database Performance and Operability team at Fanatics Commerce. As a Staff Cloud Database Engineer, you will perform database-related activities including:
Be the go-to DB expert for NoSQL databases.
Stay up to date with the latest developments in the NoSQL space.
Develop and maintain working relationships with engineering leads across our distributed teams to collaborate on initiatives.
Review data models created by application teams.
Manage our Cassandra and Scylla clusters with a focus on capacity management, cost optimization, high availability and performance.
Work with large database clusters handling 1M+ IOPS in aggregate.
Create automations to reduce toil.
Create automations for monitoring and alerting.
Create runbooks for alert handling.
Set up backup and restore mechanisms.
Troubleshoot and resolve various cluster issues related to nodes, table data, high load, client connectivity, etc.
Be the on-call support engineer on a rotational basis.
Qualifications
Overall 12+ years of experience in SQL and NoSQL databases.
Experience managing large-scale Cassandra and Scylla clusters with an in-depth understanding of architecture, storage, replication, schema design, system tables, logs, DB processes, tools and CQL.
Experience with MySQL and/or Postgres databases.
Experience with installation, configuration, upgrades, OS patching, certificate management and scaling (out/in).
Experience in setting up backup and restore mechanisms with short RTO and RPO objectives.
Experience with infrastructure automation and scripting using Terraform, Python or Bash.
Experience working with AWS, with AWS certification preferred.
Experience with monitoring tools like Grafana, Prometheus, New Relic, Datadog, etc.
Experience with managed Cassandra solutions (Instaclustr, DataStax Astra) is a plus.
Experience with cloud-native distributed databases (e.g., TiDB, CockroachDB) is a plus.
Experience in planning, scheduling and resourcing to deliver on OKRs and Keeping The Lights On (KTLO) deliverables.
Ability to mentor, develop talent and drive technical excellence.
Excellent communication skills.
About Us
Fanatics is building a leading global digital sports platform. We ignite the passions of global sports fans and maximize the presence and reach for our hundreds of sports partners globally by offering products and services across Fanatics Commerce, Fanatics Collectibles, and Fanatics Betting & Gaming, allowing sports fans to Buy, Collect, and Bet. Through the Fanatics platform, sports fans can buy licensed fan gear, jerseys, lifestyle and streetwear products, headwear, and hardgoods; collect physical and digital trading cards, sports memorabilia, and other digital assets; and bet as the company builds its Sportsbook and iGaming platform. Fanatics has an established database of over 100 million global sports fans; a global partner network with approximately 900 sports properties, including major national and international professional sports leagues, players associations, teams, colleges, college conferences and retail partners, 2,500 athletes and celebrities, and 200 exclusive athletes; and over 2,000 retail locations, including its Lids retail stores. Our more than 22,000 employees are committed to relentlessly enhancing the fan experience and delighting sports fans globally.
About The Team
Fanatics Commerce is a leading designer, manufacturer, and seller of licensed fan gear, jerseys, lifestyle and streetwear products, headwear, and hardgoods. It operates a vertically integrated platform of digital and physical capabilities for leading sports leagues, teams, colleges, and associations globally – as well as its flagship site, www.fanatics.com.
Fanatics Commerce has a broad range of online, sports venue, and vertical apparel partnerships worldwide, including comprehensive partnerships with leading leagues, teams, colleges, and sports organizations across the world—including the NFL, NBA, MLB, NHL, MLS, Formula 1, and Australian Football League (AFL); the Dallas Cowboys, Golden State Warriors, Paris Saint-Germain, Manchester United, Chelsea FC, and Tokyo Giants; the University of Notre Dame, University of Alabama, and University of Texas; the International Olympic Committee (IOC), England Rugby, and the Union of European Football Associations (UEFA).
At Fanatics Commerce, we infuse our BOLD Leadership Principles in everything we do:
Build Championship Teams
Obsessed with Fans
Limitless Entrepreneurial Spirit
Determined and Relentless Mindset
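As a rough illustration of the day-to-day Cassandra/Scylla health checks this posting describes, the sketch below uses the DataStax Python driver (cassandra-driver), which also works against Scylla, to read basic topology facts from the system tables. Contact points and credentials are placeholders, not details of any real cluster.

```python
"""Sketch: connect to a Cassandra/Scylla cluster and report basic topology facts."""
from cassandra.auth import PlainTextAuthProvider
from cassandra.cluster import Cluster

auth = PlainTextAuthProvider(username="ops", password="change-me")  # placeholder credentials
cluster = Cluster(["10.0.0.11", "10.0.0.12"], auth_provider=auth)   # placeholder contact points
session = cluster.connect()

# Local node identity and version.
local = session.execute(
    "SELECT cluster_name, release_version, data_center FROM system.local"
).one()
print(f"cluster={local.cluster_name} version={local.release_version} dc={local.data_center}")

# Peers as seen by this node; a quick sanity check on ring membership.
for peer in session.execute("SELECT peer, data_center, rack FROM system.peers"):
    print(f"peer={peer.peer} dc={peer.data_center} rack={peer.rack}")

cluster.shutdown()
```

A script like this is the kind of building block that the "automations to reduce toil" and runbook items above tend to grow out of.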
Posted 4 weeks ago
0.0 - 2.0 years
0 Lacs
Thiruvananthapuram, Kerala
Remote
Thiruvananthapuram Office, AEDGE AICC India Pvt Ltd
About the Company
Armada is an edge computing startup that provides computing infrastructure to remote areas where connectivity and cloud infrastructure are limited, as well as areas where data needs to be processed locally for real-time analytics and AI at the edge. We’re looking to bring on the most brilliant minds to help further our mission of bridging the digital divide with advanced technology infrastructure that can be rapidly deployed anywhere.
About the role
We are looking for a strategic and hands-on DevOps Lead with a focus on Ops AI to spearhead the integration of AI-driven operations into our DevOps practices. The ideal candidate will have deep expertise in automation, infrastructure management, and artificial intelligence applications in operations (AIOps). This role involves leading a DevOps team, designing scalable systems, and implementing intelligent monitoring, alerting, and self-healing infrastructure.
Location: This role is office-based at our Trivandrum, Kerala office.
What You'll Do (Key Responsibilities)
Lead the DevOps strategy with a strong emphasis on AI-enabled operational efficiency.
Architect and implement CI/CD pipelines integrated with machine learning models and analytics.
Develop and manage infrastructure as code (IaC) using tools like Terraform, Ansible, or CloudFormation.
Integrate AI/ML tools for predictive monitoring, anomaly detection, and root cause analysis.
Collaborate with data scientists, developers, and operations teams to deploy and manage AI-powered applications.
Enhance system observability through intelligent dashboards and real-time metrics analysis.
Mentor DevOps engineers and promote best practices in automation, security, and performance.
Manage cloud infrastructure across AWS, Azure, or Google Cloud platforms.
Lead incident response and postmortem processes with AI-generated insights.
Ensure compliance, reliability, and high availability of systems through automation and intelligent systems.
Required Qualifications
Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
7+ years of DevOps experience with at least 2 years in a leadership role.
Strong background in cloud infrastructure management and automation.
Experience with AIOps platforms and tools (e.g., Moogsoft, BigPanda, Dynatrace, Datadog, Splunk).
Proficient in scripting languages such as Python, Bash, or Go.
Hands-on experience with CI/CD tools (Jenkins, GitLab CI/CD, CircleCI, etc.).
Familiarity with containerization and orchestration (Docker, Kubernetes, Helm).
Excellent understanding of logging, monitoring, and alerting systems.
Strong analytical and problem-solving abilities.
Exceptional leadership, communication, and collaboration skills.
Preferred Qualifications
Knowledge of machine learning operations (MLOps).
Experience with serverless architectures.
Certification in cloud platforms (AWS Certified DevOps Engineer, etc.).
Familiarity with chaos engineering and resilience testing tools.
Demonstrable experience in building, programming, and integrating software and hardware for autonomous or robotic systems.
Proven experience producing computationally efficient software to meet real-time requirements.
Background with container platforms such as Kubernetes.
Strong analytical skills with a bias for action.
Strong time-management and organization skills to thrive in a fast-paced, dynamic environment.
Solid written and oral communication skills.
Good teamwork and interpersonal skills.
Compensation & Benefits
For India-based candidates: We offer a competitive base salary along with equity options, providing an opportunity to share in the success and growth of Armada.
You're a Great Fit if You're
A go-getter with a growth mindset. You're intellectually curious, have strong business acumen, and actively seek opportunities to build relevant skills and knowledge.
A detail-oriented problem-solver. You can independently gather information, solve problems efficiently, and deliver results with a "get-it-done" attitude.
Someone who thrives in a fast-paced environment. You're energized by an entrepreneurial spirit, capable of working quickly, and excited to contribute to a growing company.
A collaborative team player. You focus on business success and are motivated by team accomplishment vs personal agenda.
Highly organized and results-driven. Strong prioritization skills and a dedicated work ethic are essential for you.
Equal Opportunity Statement
At Armada, we are committed to fostering a work environment where everyone is given equal opportunities to thrive. As an equal opportunity employer, we strictly prohibit discrimination or harassment based on race, color, gender, religion, sexual orientation, national origin, disability, genetic information, pregnancy, or any other characteristic protected by law. This policy applies to all employment decisions, including hiring, promotions, and compensation. Our hiring is guided by qualifications, merit, and the business needs at the time.
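For a flavor of the anomaly detection that AIOps tooling automates in this role, here is a minimal rolling z-score detector over a metric series. The window size and threshold are arbitrary choices for illustration; real platforms such as those listed in the qualifications use far richer models.

```python
"""Sketch: flag metric points whose rolling z-score exceeds a threshold."""
from collections import deque
from statistics import mean, pstdev


def anomalies(series, window=30, z_threshold=3.0):
    """Yield (index, value, z) for points that deviate sharply from the recent window."""
    history = deque(maxlen=window)
    for i, value in enumerate(series):
        if len(history) == window:
            mu, sigma = mean(history), pstdev(history)
            if sigma > 0:
                z = (value - mu) / sigma
                if abs(z) > z_threshold:
                    yield i, value, z
        history.append(value)


if __name__ == "__main__":
    latencies = [120 + (i % 5) for i in range(100)] + [480]  # synthetic series with a spike
    for idx, val, z in anomalies(latencies):
        print(f"point {idx}: {val} ms (z={z:.1f})")
```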
Posted 4 weeks ago
0 years
0 Lacs
Anupgarh, Rajasthan, India
On-site
32890BR
Noida
Job Description
Overview of Job Function:
This Technical Manager will provide strategic leadership to the Cloud Infra and Deployment team based in India. This technically advanced manager provides subject matter expertise, advice and consulting around automation, PlatformOps and cloud infrastructure best practices. The role coordinates work with managers and all engineering stakeholders. This manager will be responsible for exceeding expectations around continuous delivery, deployments (BGD/ZDT/Canary), maintenance window management and schedules, deployment SLAs/SLIs/SLOs, incident response, change failure rate, lead time and MTTR, people management and strategy implementation in a fast-paced public cloud environment, preferably AWS.
Responsibilities
Principal Duties and Essential Responsibilities:
As a Technical Manager – Infra Management, you will lead a group of infra engineers responsible for OS patching/upgrades, reboot orchestration, DB upgrades, security upgrades and patching, golden image management and updates, maintenance window scheduling and management, reporting, MS support interactions, public cloud support interactions, and provisioning/decommissioning cloud infrastructure, based in Bengaluru, working closely with their counterparts in the US to drive the adoption, operation, orchestration, securing, monitoring and optimization of Verint’s cloud platforms. You will contribute to and communicate our vision and mission in close collaboration with your client counterpart. Additionally, you will play a key role in planning and delivery of capabilities that contribute to objectives and initiatives at the department level. You will be responsible for growing and coaching engineers, unlocking creativity, and inspiring them to build the best solutions and reduce toil extensively for developers, engineers, and internal clients. You are comfortable with ambiguity, yet you excel at learning and driving clarity. You take end-to-end ownership of your area and embrace iteration, believing that failing fast and failing early (it's okay to fail) is a key part of building a great team and great tech. You will work to break down silos, collaborating closely with platform leaders, product leaders and engineering leaders across Verint to ensure alignment with our vision.
People Leadership
Inspire and empower multiple multi-functional and cross-functional teams
Directly lead architects and engineers in multiple teams
Nurture, grow and develop management and engineering talent in the team
Charter and create L&D paths
Technical Leadership
Continuous delivery and deployment
Maintenance window management
Building automation and reducing manual toil extensively
Cloud infrastructure optimization, right-sizing and cost management
Incident management, monitoring and alerting improvements
Continuous quality and process improvement
Concentric focus on DevSecOps – securing cloud infrastructure and remediating vulnerabilities
Architecture and Product Strategy
Thought partner for Product to define, shape and deliver the roadmap
Stakeholder engagement and management
Architectural and security guidance
Drive innovation in own team
Qualifications
Minimum Requirements:
BS degree in Computer Science or a related technical degree, or equivalent
14+ years of industry experience required
10+ years managing and maintaining infrastructure – public cloud – preferably AWS, Azure
5+ years managing people
5+ years in leadership and architectural roles in CD, automation and delivery
3+ years implementing highly efficient PaaS, SRE and DevOps practices
3+ years servicing customers in fast-paced agile environments
3+ years in one or more of: Terraform, AWS, K8s, Datadog, Change Management, GitOps, Harness, Jenkins, Ansible
3+ years leading cloud infra control, optimization initiatives and cost management
Additional Requirements:
Strong oral and written communication skills
Ability to transform technical knowledge into business language
Ability to communicate technical topics to non-technical personnel
Strong analytical and problem-solving skills
Strong leadership skills with the ability to perform complex scheduling, control and planning functions
Demonstrated ability to influence individuals, lead teams and work effectively with executive management
Qualifications: Bachelor's Degree
Years of experience: 10–17
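OS patching and maintenance-window orchestration of the kind described above is often scripted against AWS Systems Manager. The sketch below (boto3) finds instances carrying a hypothetical maintenance-window tag and runs the AWS-RunPatchBaseline document against them; the tag key/value and region are assumptions for illustration.

```python
"""Sketch: run AWS-RunPatchBaseline via SSM against instances tagged for a maintenance window."""
import boto3

ssm = boto3.client("ssm", region_name="ap-south-1")
ec2 = boto3.client("ec2", region_name="ap-south-1")

# Instances opted in to this week's window via a (hypothetical) tag.
reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:MaintenanceWindow", "Values": ["sat-0200-ist"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]
instance_ids = [inst["InstanceId"] for r in reservations for inst in r["Instances"]]

if instance_ids:
    resp = ssm.send_command(
        InstanceIds=instance_ids,
        DocumentName="AWS-RunPatchBaseline",
        Parameters={"Operation": ["Install"], "RebootOption": ["RebootIfNeeded"]},
        Comment="Scheduled OS patching",
    )
    print("Command:", resp["Command"]["CommandId"], "targets:", instance_ids)
```

In practice the schedule itself would live in an SSM Maintenance Window or similar automation rather than an ad-hoc run like this.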
Posted 4 weeks ago
3 years
0 Lacs
Hyderabad, Telangana, India
On-site
We are looking for an SRE who is passionate about automating operations and ensuring system reliability, scalability, and performance through engineering and monitoring. As an SRE at DigitalOcean, you will join a dynamic team dedicated to revolutionizing cloud computing.
What You’ll Do
Design, automate, and maintain scalable, reliable infrastructure.
Implement monitoring, alerting, and incident response processes.
Optimize system performance, capacity planning, and cost efficiency.
Automate deployments, CI/CD pipelines, and infrastructure as code.
Troubleshoot production issues, conduct root cause analysis, and improve system resilience.
Collaborate with developers to enhance reliability and performance.
Key Metrics
Service Uptime (SLA/SLO adherence) – Ensuring high availability and minimal downtime.
MTTR (Mean Time to Recovery) – Reducing the time taken to resolve incidents.
MTTD (Mean Time to Detect) – Minimizing the time to identify issues.
Change Failure Rate – Measuring the percentage of failed deployments.
Incident Frequency & Severity – Tracking recurring issues and their impact.
Latency & Performance Metrics – Ensuring optimal response times.
Automation Coverage – Percentage of manual processes replaced by automation.
What You’ll Add To DigitalOcean
Experience: 3+ years in SRE, DevOps, or related roles.
Cloud Expertise: Hands-on experience with AWS, GCP, or other cloud platforms.
Automation & Infrastructure as Code: Proficiency in Terraform, Ansible, or similar tools.
Monitoring & Observability: Familiarity with Prometheus, Grafana, Datadog, or similar tools.
Containerization & Orchestration: Experience with Kubernetes, Docker, or related technologies.
Programming Skills: Proficiency in Python, Go, or Bash scripting.
Incident Management: Strong problem-solving skills with a focus on root cause analysis.
Why You’ll Like Working for DigitalOcean
We innovate with purpose. You’ll be a part of a cutting-edge technology company with an upward trajectory, who are proud to simplify cloud and AI so builders can spend more time creating software that changes the world. As a member of the team, you will be a Shark who thinks big, bold, and scrappy, like an owner with a bias for action and a powerful sense of responsibility for customers, products, employees, and decisions.
We prioritize career development. At DO, you’ll do the best work of your career. You will work with some of the smartest and most interesting people in the industry. We are a high-performance organization that will always challenge you to think big. Our organizational development team will provide you with resources to ensure you keep growing. We provide employees with reimbursement for relevant conferences, training, and education. All employees have access to LinkedIn Learning's 10,000+ courses to support their continued growth and development.
We care about your well-being. Regardless of your location, we will provide you with a competitive array of benefits to support you, from our Employee Assistance Program to Local Employee Meetups to our flexible time off policy, to name a few. While the philosophy around our benefits is the same worldwide, specific benefits may vary based on local regulations and preferences.
We reward our employees. The salary range for this position is based on market data, relevant years of experience, and skills. You may qualify for a bonus in addition to base salary; bonus amounts are determined based on company and individual performance. We also provide equity compensation to eligible employees, including equity grants upon hire and the option to participate in our Employee Stock Purchase Program.
We value diversity and inclusion. We are an equal-opportunity employer, and recognize that diversity of thought and background builds stronger teams and products to serve our customers. We approach diversity and inclusion seriously and thoughtfully. We do not discriminate on the basis of race, religion, color, ancestry, national origin, caste, sex, sexual orientation, gender, gender identity or expression, age, disability, medical condition, pregnancy, genetic makeup, marital status, or military service.
This job is located in Hyderabad, India.
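The key metrics listed in this posting can be computed from incident records in a few lines. This sketch derives MTTR, MTTD, and 30-day availability from toy data; all timestamps are invented for illustration.

```python
"""Sketch: compute MTTR, MTTD, and availability from incident records."""
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class Incident:
    started: datetime   # when impact began
    detected: datetime  # when alerting caught it
    resolved: datetime  # when service was restored


def mttr(incidents):
    return sum(((i.resolved - i.started) for i in incidents), timedelta()) / len(incidents)


def mttd(incidents):
    return sum(((i.detected - i.started) for i in incidents), timedelta()) / len(incidents)


def availability(incidents, period=timedelta(days=30)):
    downtime = sum(((i.resolved - i.started) for i in incidents), timedelta())
    return 1 - downtime / period


if __name__ == "__main__":
    t0 = datetime(2024, 1, 1, 3, 0)
    incidents = [
        Incident(t0, t0 + timedelta(minutes=4), t0 + timedelta(minutes=38)),
        Incident(t0 + timedelta(days=9), t0 + timedelta(days=9, minutes=2),
                 t0 + timedelta(days=9, minutes=21)),
    ]
    print("MTTR:", mttr(incidents))
    print("MTTD:", mttd(incidents))
    print(f"30-day availability: {availability(incidents):.4%}")
```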
Posted 4 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Summary
Position Summary
CORE BUSINESS OPERATIONS
At Deloitte, we take immense pride in the dynamic and innovative environment we have cultivated. Our people are our greatest asset, and we are dedicated to fostering a culture of growth, innovation, collaboration, and excellence. We are thrilled to announce that we are expanding our team and are seeking passionate, talented individuals to join us on this exciting journey.
The Core Business Operations (CBO) portfolio is an integrated set of offerings that addresses our clients’ heart-of-the-business issues. This portfolio combines our functional and technical capabilities to help clients transform, modernize, and run their existing technology platforms across industries. As our clients navigate dynamic and disruptive markets, these solutions are designed to help them drive product and service innovation, improve financial performance, accelerate speed to market, and operate their platforms to innovate continuously.
Role Level: Consultant
As a consultant with us, you will be responsible for individually delivering high-quality work products within due timelines in an agile framework. On a requirement basis, consultants will mentor and/or direct junior team members and liaise with onsite/offshore teams to understand the functional requirements.
Qualifications
Education: B.E./B.Tech/M.C.A./M.Sc (CS) degree or equivalent from an accredited university
Prior Experience: 3–6 years of experience with hands-on Java full stack development, Angular/React, microservices, and Spring Boot on cloud technologies
Skills / Project Experience
Must Have:
3 to 6 years of hands-on experience in Java full stack development, including Angular/React, microservices, and Spring Boot on cloud technologies
Experience in best practices such as OOP principles, exception handling, usage of generics, well-defined reusable and easy-to-maintain code, and tools like JUnit, Mockito, Checkstyle, SonarQube, etc.
Experience with SQL databases like MySQL, PostgreSQL, Oracle, and frameworks such as JPA/Hibernate, as well as NoSQL databases like MongoDB and DynamoDB
Experience in agile frameworks, the SDLC lifecycle and tools such as JIRA
Experience with Git/SVN and DevOps processes, including CI/CD (Continuous Integration and Continuous Delivery)
Ability to estimate work products
Strong interpersonal and communication skills
Flexible and innovative, able to apply technical solutions and learnings across varied business domains and industries
Proficient with Microsoft Office tools
Experience with build tools such as Maven
Familiarity with design patterns and proficiency in object-oriented design principles, with strong experience in collection implementation
Experience with front-end technologies such as HTML, CSS, JavaScript, and modern front-end frameworks like React and Angular
Experience with logging frameworks like log4j and Winston
Good to Have:
Experience with NoSQL databases such as MongoDB, DocumentDB, Redis, etc.
Understanding of cloud platforms, Docker, and Kubernetes
Familiarity with microservice architecture and ability to build modular applications
Knowledge of Docker (container orchestration, Compose) and services like EKS, ECS (AKS/GKE)
Understanding of authentication and authorization providers (OpenID, SAML, Okta, Keycloak) and analysis tools for SAST and DAST scans
Experience with ELK, Splunk, New Relic, Dynatrace, and Datadog
Experience with AWS, GCP, Azure, or Oracle
Experience in serverless architecture, Lambdas, reactive programming, and AI/ML tools for application development
The work you will do includes:
Understand business requirements and processes
Develop software solutions using industry-standard delivery methodologies like Agile and Waterfall across different architectural patterns
Write clean, efficient, and well-documented code, maintaining industry and client standards, ensuring code quality and coverage, and debugging/resolving issues
Actively participate in agile processes, including sprint planning, daily stand-ups, and retrospectives
Resolve user-reported issues and escalate quality concerns to team leads, scrum masters, or project leaders
Develop knowledge in the end-to-end construction cycle, including design (low and high level), coding, unit testing, deployment, and defect fixing, while coordinating with stakeholders
Understand UX designs and effectively deliver code using front-end technologies
Create and maintain technical documentation, including design specifications, API documentation, and usage guidelines
Demonstrate a problem-solving mindset and the ability to analyze business requirements
Location: Bengaluru/Hyderabad/Mumbai
Core Business Operations
The Core Business Operations (CBO) portfolio is an integrated set of offerings that addresses our clients’ heart-of-the-business issues. This portfolio combines our functional and technical capabilities to help clients transform, modernize, and run their existing technology platforms across industries. As our clients navigate dynamic and disruptive markets, these solutions are designed to help them drive product and service innovation, improve financial performance, accelerate speed to market, and operate their platforms to innovate continuously.
The Team
Our Technology Consulting practice is dedicated to helping our clients build tomorrow by solving today’s complex business problems involving strategy, procurement, design, delivery, and assurance of technology solutions. Our service areas include analytics and information management, delivery, cyber risk services, and technical strategy and architecture, as well as the spectrum of digital strategy, design, and development services. The Core Business Operations practice optimizes clients’ business operations and helps them take advantage of new technologies. It drives product and service innovation, improves financial performance, accelerates speed to market, and operates client platforms to innovate continuously. Learn more about our Technology Consulting practice on www.deloitte.com.
For information on CBO visit - https://www.youtube.com/watch?v=L1cGlScLuX0
For information on life of a Consultant at CBO visit - https://www.youtube.com/watch?v=CMe0DkmMQHI
Recruiting tips
From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.
Benefits
At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.
Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.
Our purpose
Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.
Professional development
From entry-level employees to senior leaders, we believe there’s always room to learn. We offer opportunities to build new skills, take on leadership opportunities and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career.
Requisition code: 300269
Posted 4 weeks ago
0 years
0 Lacs
India
Remote
Job Title – Application Support Engineer L3
Location: Remote (working the Australia time zone, 5 AM–2 PM IST)
About the Role
As an L3 Application Support Engineer, you will serve as the escalation point for complex technical issues, ensuring high-quality support for our Enterprise SaaS platform used by health professionals and patients. This role is deeply embedded within the Engineering team, requiring strong troubleshooting skills, debugging capabilities, and collaboration with Product and Development teams. You’ll also play a key role in improving documentation, automating processes, and enhancing platform reliability.
Key Responsibilities
Technical Escalation & Issue Resolution:
o Act as the highest level of support within the Support Team.
o Investigate and resolve critical incidents, analyzing logs and application behavior.
o Work closely with L1/L2 teams to troubleshoot and resolve complex issues.
o Replicate and document software bugs for the Development team.
Collaboration & Process Improvement:
o Work with the Engineering team to debug issues, propose fixes, and contribute to code-level improvements.
o Improve support documentation, build playbooks, and optimize incident management processes.
o Enhance monitoring and alerting through platforms like Datadog.
Technical Operations & Monitoring:
o Perform log analysis, SQL queries, and API debugging to diagnose issues.
o Monitor AWS infrastructure, CI/CD pipelines, and application performance to identify potential failures proactively.
o Maintain uptime and performance using observability tools.
Requirements
6+ years in Technical Application Support, DevOps, or Site Reliability Engineering (SRE).
Strong troubleshooting skills with technologies such as Node.js, PostgreSQL, Git, AWS, CI/CD.
Hands-on experience with monitoring tools like Datadog and uptime monitoring solutions.
Proficiency in debugging APIs, SQL queries, and logs.
Experience managing support cases through the full lifecycle (triage, reproduction, resolution).
Ability to write detailed bug reports and collaborate effectively with developers.
Strong knowledge of ticketing systems such as Freshdesk and ClickUp, and best practices for incident management.
Comfortable with on-call rotations and managing high-priority incidents.
Preferred Skills
Familiarity with Terraform, Kubernetes, or Docker.
Experience writing scripts to automate support tasks.
Knowledge of healthcare SaaS environments and regulatory considerations.
This role is ideal for problem-solvers who love debugging, enjoy working closely with engineering teams, and thrive in fast-paced, customer-centric environments.
Key Requirements:
Minimum 6+ years in Technical Application Support
Strong troubleshooting skills with technologies such as Node.js, PostgreSQL, Git, AWS, CI/CD
Hands-on experience with monitoring tools like Datadog and uptime monitoring solutions
Proficiency in debugging APIs, SQL queries, and performing log analysis
Strong knowledge of ticketing systems such as Freshdesk and ClickUp
Excellent language skills to handle Australian clients
Compensation: Up to Rs. 15–20 LPA
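A typical L3 triage pass combines API debugging with a quick query against the application database. The sketch below times an HTTP call and summarizes recent 5xx rows from a hypothetical request_log table in PostgreSQL; the endpoint, table, and column names are stand-ins, not details of the actual platform.

```python
"""Sketch: time an API call and cross-check recent failures in PostgreSQL."""
import time

import psycopg2
import requests

API_URL = "https://api.example.com/v1/appointments/12345"  # placeholder endpoint


def probe_api():
    """Call the endpoint and report status, latency, and the request id for log correlation."""
    start = time.monotonic()
    resp = requests.get(API_URL, timeout=10)
    elapsed_ms = (time.monotonic() - start) * 1000
    print(f"HTTP {resp.status_code} in {elapsed_ms:.0f} ms, "
          f"request-id={resp.headers.get('x-request-id')}")
    return resp


def recent_failures():
    """Summarize 5xx responses recorded in the last hour (hypothetical table)."""
    conn = psycopg2.connect(host="db.internal", dbname="app", user="support_ro", password="...")
    try:
        with conn.cursor() as cur:
            cur.execute(
                """
                SELECT status_code, count(*)
                FROM request_log
                WHERE created_at > now() - interval '1 hour' AND status_code >= 500
                GROUP BY status_code
                ORDER BY count(*) DESC
                """
            )
            for status, count in cur.fetchall():
                print(f"{status}: {count} failures in the last hour")
    finally:
        conn.close()


if __name__ == "__main__":
    probe_api()
    recent_failures()
```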
Posted 4 weeks ago
0 years
0 Lacs
Ahmedabad, Gujarat, India
Remote
We’re now looking for a Senior DevOps Engineer to join our fast-growing, remote-first team. If you're passionate about automation, scalable cloud systems, and supporting high-impact AI workloads, we’d love to connect.
What You'll Do (Responsibilities):
Design, implement, and manage scalable, secure, and high-performance cloud-native infrastructure across Azure.
Build and maintain Infrastructure as Code (IaC) using Terraform or CloudFormation.
Develop event-driven and serverless architectures using AWS Lambda, SQS, and SAM.
Architect and manage containerized applications using Docker, Kubernetes, ECR, ECS, or AKS.
Establish and optimize CI/CD pipelines using GitHub Actions, Jenkins, AWS CodeBuild & CodePipeline.
Set up and manage monitoring, logging, and alerting using Prometheus + Grafana, Datadog, and centralized logging systems.
Collaborate with ML Engineers and Data Engineers to support MLOps pipelines (Airflow, ML pipelines) and Bedrock with TensorFlow or PyTorch.
Implement and optimize ETL/data streaming pipelines using Kafka, EventBridge, and Event Hubs.
Automate operations and system tasks using Python and Bash, along with cloud CLIs and SDKs.
Secure infrastructure using IAM/RBAC and follow best practices in secrets management and access control.
Manage DNS and networking configurations using Cloudflare, VPC, and PrivateLink.
Lead architecture implementation for scalable and secure systems, aligning with business and AI solution needs.
Conduct cost optimization through budgeting, alerts, tagging, right-sizing resources, and leveraging spot instances.
Contribute to backend development in Python (web frameworks), REST/socket and gRPC design, and testing (unit/integration).
Participate in incident response, performance tuning, and continuous system improvement.
Good to Have:
Hands-on experience with ML lifecycle tools like MLflow and Kubeflow
Previous involvement in production-grade AI/ML projects or data-intensive systems
Startup or high-growth tech company experience
Qualifications:
Bachelor’s degree in Computer Science, Information Technology, or a related field.
5+ years of hands-on experience in a DevOps, SRE, or Cloud Infrastructure role.
Proven expertise in multi-cloud environments (AWS, Azure, GCP) and modern DevOps tooling.
Strong communication and collaboration skills to work across engineering, data science, and product teams.
Benefits:
Competitive salary
Support for continual learning (free books and online courses)
Leveling-up opportunities
Diverse team environment
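The event-driven serverless pattern called out above (AWS Lambda consuming SQS) typically reduces to a handler like the following sketch. The message payload shape and the process() step are assumptions, and partial-batch failure reporting only takes effect if ReportBatchItemFailures is enabled on the event source mapping.

```python
"""Sketch: AWS Lambda handler for an SQS batch with partial-failure reporting."""
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)


def process(payload: dict) -> None:
    # Placeholder for real business logic (e.g., kicking off an ML pipeline step).
    logger.info("processing %s", payload.get("id"))


def handler(event, context):
    """SQS -> Lambda entry point."""
    failures = []
    for record in event.get("Records", []):
        try:
            process(json.loads(record["body"]))
        except Exception:
            logger.exception("failed record %s", record["messageId"])
            failures.append({"itemIdentifier": record["messageId"]})
    # Only failed messages are returned to the queue for redelivery.
    return {"batchItemFailures": failures}
```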
Posted 4 weeks ago
2 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Summary:
The Level 3 Cloud Network Engineer is a senior technical expert responsible for designing, implementing, maintaining, and troubleshooting complex cloud-based and hybrid network environments. This role requires deep expertise in cloud platforms (AWS, Azure, GCP), advanced routing and switching, security configurations, and high-availability network architecture. The engineer will lead escalation support for critical incidents, drive network automation efforts, and provide guidance to Level 1 and 2 engineers.
Key Responsibilities:
• Design and implement scalable, secure, and highly available cloud network architectures.
• Configure and manage Virtual Private Clouds (VPCs), Transit Gateways, Direct Connect/ExpressRoute, VPNs, subnets, routing tables, and security groups.
• Lead resolution of Level 3 network escalations, performing root cause analysis and preventive maintenance.
• Manage hybrid connectivity between on-premise data centers and cloud infrastructure.
• Ensure network compliance, auditing, and security controls are in place (e.g., firewall policies, NAC, encryption).
• Utilize IaC tools (Terraform, CloudFormation) and automation scripts (Python, Ansible) for network provisioning and management.
• Monitor network performance and proactively address latency, jitter, or packet loss issues using tools like CloudWatch, Datadog, NetFlow, or SolarWinds.
• Provide mentorship and technical guidance to L1/L2 engineers and participate in knowledge-sharing initiatives.
• Collaborate with application, security, and DevOps teams for integrated cloud solutions.
• Maintain documentation of network designs, change management logs, and technical procedures.
Qualifications:
Education & Certifications:
• Bachelor’s degree in Computer Science, Information Technology, or a related field.
• Industry certifications such as:
o AWS Advanced Networking Specialty
o Cisco CCNP/CCIE
o GCP Professional Cloud Network Engineer
Experience:
• 5+ years of experience in network engineering, with at least 2+ years in cloud network environments.
• Proven experience managing complex multi-cloud or hybrid network environments.
Technical Skills:
• Cloud platforms: AWS (VPC, TGW, Direct Connect), Azure (VNets, ExpressRoute), GCP (VPC, Interconnects).
• Networking protocols: BGP, OSPF, IPSec, GRE, NAT, DNS, DHCP.
• Security: firewall configuration (Palo Alto, Fortinet, AWS NACLs/Security Groups), VPNs, ZTNA.
• Automation & tools: Terraform, Ansible, Python, Git.
• Monitoring & troubleshooting: Wireshark, OpManager, NetFlow, CloudWatch, ELK, Zabbix, Splunk.
Soft Skills:
• Strong problem-solving and analytical capabilities.
• Excellent communication skills, both written and verbal.
• Ability to work independently and manage multiple priorities.
• Strong collaboration skills across departments and vendors.
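One small example of the compliance and auditing work described above: a boto3 pass over security groups that flags 0.0.0.0/0 ingress on sensitive ports. The region and the port list are assumptions for illustration.

```python
"""Sketch: flag security groups that allow internet-wide ingress on sensitive ports."""
import boto3

SENSITIVE_PORTS = {22, 3389, 5432}  # illustrative list: SSH, RDP, PostgreSQL

ec2 = boto3.client("ec2", region_name="us-east-1")
paginator = ec2.get_paginator("describe_security_groups")

for page in paginator.paginate():
    for sg in page["SecurityGroups"]:
        for rule in sg.get("IpPermissions", []):
            open_to_world = any(
                r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])
            )
            from_port = rule.get("FromPort")  # absent when the rule covers all traffic
            if open_to_world and (from_port in SENSITIVE_PORTS or from_port is None):
                print(f"{sg['GroupId']} ({sg['GroupName']}): "
                      f"port {from_port or 'ALL'} open to the internet")
```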
Posted 4 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
We're looking for problem solvers, innovators, and dreamers who are searching for anything but business as usual. Like us, you're a high performer who's an expert at your craft, constantly challenging the status quo. You value inclusivity and want to join a culture that empowers you to show up as your authentic self. You know that success hinges on commitment, that our differences make us stronger, and that the finish line is always sweeter when the whole team crosses together.
The Senior Software Development Engineer in Test (Senior SDET) will join a team of highly passionate engineers responsible for delivering Data Analytics products as part of the Alteryx Analytics Cloud product suite.
Responsibilities
Analyse requirements and technical documentation to plan, develop, and implement tests.
Create, execute, track, and maintain manual and automated functional and regression test suites.
Develop and maintain testing frameworks and utilities as needed to support testing activities (a minimal API test sketch follows this listing).
Lead the testing efforts within a project or team, often coordinating the activities of other SDETs and developers.
Provide detailed problem analysis to the development teams and help them drive product quality.
Participate in code reviews, design reviews, and other team activities to ensure the quality of the codebase.
Continuously improve testing processes, methodologies, and practices.
Provide guidance and mentorship to junior SDETs.
Required Skills
5+ years of experience as a Senior Software Development Engineer in Test or Senior Software Quality Engineer.
Master's/Bachelor's degree in computer science, computer engineering, or a related field.
Extensive experience with API/UI test automation, preferably with frameworks such as PyTest and TestCafe.
Excellent design and programming skills in an object-oriented language such as Java, Python, or JavaScript/TypeScript.
Experience with testing REST APIs.
Experience with web-based application testing using multiple browsers on different platforms.
Excellent communication skills and ability to collaborate successfully with both local and remote team members.
Valued/Preferred Skills
Experience using Git or an equivalent SCM.
Experience with applications and microservices hosted in AWS/Azure/GCP.
Experience with containerisation and orchestration technologies like Docker/Kubernetes.
Experience with GitOps and package manager tools like ArgoCD, Helm, etc.
Experience with observability and application monitoring tools like Datadog.
Experience with Python and the PyTest framework.
Knowledge of the big data ecosystem, data warehouses, ETL, etc.
Experience with relational databases, Unix/Linux commands, and shell scripting.
Experience working in an Agile/Scrum-driven development environment.
Find yourself checking a lot of these boxes but doubting whether you should apply? At Alteryx, we support a growth mindset for our associates through all stages of their careers. If you meet some of the requirements and you share our values, we encourage you to apply. As part of our ongoing commitment to a diverse, equitable, and inclusive workplace, we're invested in building teams with a wide variety of backgrounds, identities, and experiences.
This position involves access to software/technology that is subject to U.S. export controls. Any job offer made will be contingent upon the applicant's capacity to serve in compliance with U.S. export controls.
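Since the role centres on API test automation with PyTest, here is a minimal, hypothetical sketch of a REST API test module using pytest and requests; the base URL, endpoints, and response fields are invented for illustration only.

```python
import pytest
import requests

BASE_URL = "https://api.example.com"  # hypothetical service under test

@pytest.fixture(scope="session")
def http():
    session = requests.Session()
    session.headers.update({"Accept": "application/json"})
    yield session
    session.close()

def test_health_endpoint_returns_ok(http):
    resp = http.get(f"{BASE_URL}/health", timeout=10)
    assert resp.status_code == 200
    assert resp.json().get("status") == "ok"

@pytest.mark.parametrize("page_size", [1, 10, 50])
def test_list_endpoint_respects_page_size(http, page_size):
    resp = http.get(f"{BASE_URL}/items", params={"limit": page_size}, timeout=10)
    assert resp.status_code == 200
    assert len(resp.json()["items"]) <= page_size
```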
Posted 4 weeks ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About the Company: They balance innovation with an open, friendly culture and the backing of a long-established parent company known for its ethical reputation. We guide customers from what's now to what's next by unlocking the value of their data and applications to solve their digital challenges, achieving outcomes that benefit both business and society.
· Job Title: GCP DevOps Engineer
· Location: Anywhere in India (Hybrid)
· Experience: 8+ years
· Job Type: Contract to hire
· Notice Period: Immediate joiners
Mandatory Skills:
Job Description – GCP DevOps Engineer
Primary Skills:
• Kubernetes (GKE, EKS, AKS)
• Logging and monitoring (Grafana, Splunk, Datadog)
• Networking (Service Mesh, Istio)
• Serverless architecture (GCP Cloud Functions, AWS Lambda)
Good to have:
• Monitoring tools (Grafana, Prometheus, etc.)
• Networking (VPC, DNS, Load Balancing)
Responsibilities:
• Design, develop, and maintain a scalable and highly available cloud infrastructure
• Automate and streamline operations and processes
• Monitor and troubleshoot system issues (a minimal pod health check sketch follows this listing)
• Create and maintain documentation
• Develop and maintain tools to automate operational tasks
• Collaborate with software engineers to develop and deploy software applications
• Develop and manage automated deployment pipelines
• Utilize Continuous Integration and Continuous Delivery (CI/CD) tools and practices
• Provision and maintain cloud-based databases
• Optimize resources to reduce costs
• Analyse and optimize system performance
• Work with the development team to ensure code quality and security
• Ensure compliance with security and other industry standards
• Keep up with the latest technologies and industry trends
• Proficient in scripting languages such as Python, Bash, PowerShell, etc.
• Experience with configuration management tools such as Chef, Puppet, and Ansible
• Experience with CI/CD tools such as Jenkins, Travis CI, and CircleCI
• Experience with container-based technologies such as Docker, Kubernetes, and ECS
• Experience with version control systems such as Git
• Understanding of network protocols and technologies
• Ability to prioritize tasks and work independently
• Strong problem-solving and communication skills
• Should be able to implement and maintain a highly available, scalable, and secure cloud infrastructure
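To illustrate the monitoring and troubleshooting duties listed above, here is a small Python sketch using the official kubernetes client; it assumes a kubeconfig on the local machine already points at the target cluster (GKE or otherwise) and simply reports pods that are not in the Running or Succeeded phase.

```python
from kubernetes import client, config

def list_unhealthy_pods(namespace=None):
    """Return (namespace, name, phase) for pods outside the Running/Succeeded phases."""
    config.load_kube_config()  # assumes a local kubeconfig with access to the cluster
    v1 = client.CoreV1Api()
    pods = v1.list_namespaced_pod(namespace) if namespace else v1.list_pod_for_all_namespaces()
    return [
        (pod.metadata.namespace, pod.metadata.name, pod.status.phase)
        for pod in pods.items
        if pod.status.phase not in ("Running", "Succeeded")
    ]

if __name__ == "__main__":
    for ns, name, phase in list_unhealthy_pods():
        print(f"{ns}/{name}: {phase}")
```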
Posted 4 weeks ago
0 years
0 Lacs
India
Remote
Software Support Engineer – AI Applications (WFH)
Experience: 4 to 9 years
Location: Remote/Bengaluru
Mode of Engagement: Full time / Part time
No. of Positions: 2
Educational Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related field
Industry: IT/Software/ITES (B2B/SaaS)
Notice Period: Immediate
What We Are Looking For:
4–9 years of experience in supporting and scaling AI-based or SaaS applications, with strong analytical and debugging skills.
Proven ability to contribute to support strategy, process automation, and capability building across AI platforms.
Strong collaboration, scripting (Python/Bash), and communication skills to drive cross-functional issue resolution and improvements.
Responsibilities:
Own the technical support lifecycle for AI applications, including troubleshooting, escalation, resolution, and documentation.
Drive root cause analysis (RCA) and systemic improvements to reduce repetitive issues and enhance product robustness (a small log triage sketch follows this listing).
Collaborate with engineering, data science, and DevOps teams to support infrastructure scaling and capability enhancement.
Identify support trends and advocate for architectural or process improvements.
Contribute to the development of self-serve support systems, knowledge bases, and internal tools to reduce manual dependency.
Provide strategic insights into support load, performance issues, and user behavior to guide AI application investments.
Mentor junior support engineers and contribute to the development of best practices for AI system support.
Lead or participate in cross-functional initiatives around AI application monitoring, performance optimization, and incident management.
Qualifications:
Bachelor's degree in Computer Science, IT, or a related discipline.
4 to 9 years of experience in software support with a focus on AI/ML or complex SaaS applications.
Proven ability to manage high-severity incidents and work closely with development/product stakeholders.
Familiarity with AI/ML workflows, APIs, and model hosting platforms (e.g., AWS SageMaker, Azure ML, Hugging Face).
Strong scripting/debugging experience using Python, Bash, and SQL.
Proficient in using ticketing and monitoring systems like Jira, Datadog, Zendesk, Freshdesk, or equivalent.
Strong documentation, stakeholder communication, and problem-resolution capabilities.
Ability to operate in a fast-paced environment while driving continuous support improvements.
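As a hedged illustration of the scripting and RCA work described above, here is a small stdlib-only Python sketch that tallies recurring error signatures from an application log; the log format matched by the regular expression is an assumption and would need adjusting to the real system.

```python
import re
import sys
from collections import Counter

# Assumes log lines contain text like "ERROR com.example.InferenceTimeoutError ..."
ERROR_PATTERN = re.compile(r"ERROR\s+(?P<signature>[\w\.]+(?:Exception|Error))")

def summarize_errors(log_path):
    """Count occurrences of each error signature to spot recurring issues for RCA."""
    counts = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            match = ERROR_PATTERN.search(line)
            if match:
                counts[match.group("signature")] += 1
    return counts

if __name__ == "__main__":
    for signature, count in summarize_errors(sys.argv[1]).most_common(10):
        print(f"{count:6d}  {signature}")
```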
Posted 4 weeks ago
6 years
0 Lacs
India
On-site
Job Description
Designation: RoR Engineer
Years of experience: 6 years
The RoR Engineer is responsible for maintaining all of the applications: the primary back-end application API, the order admin tool, the eCommerce application based on Solidus, and various supporting services used by our fulfilment partners and by the web and mobile customer-facing applications.
Additional Knowledge (will need to learn on the job):
Ember.js (UI framework)
Active Admin (Rails admin engine)
Solidus (eCommerce platform)
GraphQL
gRPC
PostgreSQL
Apache Kafka
Redis
AWS
Jenkins
Kubernetes
Roles & Responsibilities:
Monitoring the #escalated-support and #consumer-eng Slack channels and addressing any issues that require technical assistance.
Monitoring logs (via Rollbar/Datadog) and resolving any errors.
Monitoring Sidekiq's job morgue and addressing any dead jobs.
Maintaining libraries in all applications with security updates.
Understanding security requirements and scope.
Posted 4 weeks ago
5 - 8 years
8 - 12 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
We are seeking a highly skilled Senior DevOps Engineer with over 5 years of hands-on experience in cloud infrastructure and deployment. The ideal candidate will work closely with cross-functional teams to ensure robust observability using Datadog and manage infrastructure as code with Terraform. This role is critical to enhancing platform reliability and performance within an AWS environment.
Project Duration: 6 months (initial); it may extend depending on performance or project needs.
Work Hours: IST time zone with a minimum 2-hour overlap with the USA team (likely in the evening IST).
Mandatory Skills: DevOps on AWS, Datadog, Terraform, and general observability experience.
Requirements and Qualifications:
Minimum 5 years of experience in a DevOps role.
Strong hands-on experience with AWS services.
Datadog experience is mandatory: setup, configuration, and dashboards (a monitor-as-code sketch follows this listing).
Proven expertise with Terraform for infrastructure automation.
Prior involvement in implementing or scaling observability solutions.
Experience in high-availability environments and production deployments.
Strong problem-solving and debugging skills.
Ability to collaborate effectively with engineering, QA, and product teams.
Roles and Responsibilities:
Lead the infrastructure efforts for the Marketplace 2.0 launch.
Design, implement, and manage observability solutions using Datadog.
Build and maintain scalable infrastructure on AWS using Terraform.
Set up alerting and monitoring dashboards to ensure system health and performance.
Work with engineering teams to integrate observability best practices.
Proactively identify and resolve infrastructure and deployment issues.
Contribute to DevOps strategy and improve CI/CD pipelines and processes.
Document and enforce infrastructure standards and automation workflows.
Location: Remote, Delhi NCR, Bengaluru, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad
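Monitoring-as-code with Datadog could look something like the sketch below, which uses the datadogpy client to create a metric monitor; the query, service tag, thresholds, and notification handle are placeholders, and in practice such monitors would more likely be declared in Terraform, as the listing implies.

```python
from datadog import initialize, api

# In practice the keys would come from the environment or a secrets manager, not literals.
initialize(api_key="<DD_API_KEY>", app_key="<DD_APP_KEY>")

monitor = api.Monitor.create(
    type="metric alert",
    name="High CPU on marketplace hosts",  # hypothetical monitor for illustration
    query="avg(last_5m):avg:system.cpu.user{service:marketplace} > 90",
    message="CPU above 90% for 5 minutes. @slack-devops-alerts",
    tags=["team:devops", "managed-by:code"],
    options={"thresholds": {"critical": 90, "warning": 80}, "notify_no_data": False},
)
print(monitor.get("id"))
```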
Posted 4 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Software Engineer I
Bangalore, India
Toast is driven by building the restaurant platform that helps restaurants adapt, take control, and get back to what they do best: building the businesses they love. Now, more than ever, we Toasters are committed to our customers. We're taking steps to help restaurants navigate these unprecedented times with technology, resources, and community. Our focus is on building the restaurant platform that helps restaurants adapt, take control, and get back to what they do best: building the businesses they love. And because our technology is purpose-built for restaurants, they can trust that we'll deliver on their needs for today while investing in experiences that will power their restaurant of the future.
Are you bready* to make a change?
Toast is scaling rapidly, and the growth is fuelled by its strong Fintech foundation. We are looking for a Software Engineer to join our Fintech Payments team, where you would be leading a functional charter and would get to build, innovate, and upscale the next generation of our software solutions. You would be working with highly passionate people who thrive and live to learn and coach.
About this roll* (Responsibilities)
Design, build, deploy, and maintain highly resilient and scalable features across Toast's Fintech line of products.
Champion best practices for SDLC and CI/CD life cycles.
Write clean, maintainable code and help build automated tests to ensure software quality.
Participate in the rollout of new features, ensuring successful deployment with support from senior team members.
Collaborate with peer engineers to review code and provide feedback while learning from the process.
Contribute to team discussions on technical decisions, while gaining experience in making sound engineering choices.
Work closely with PM, UX, and QA teams to support feature development and gain insight into product requirements.
Do you have the right ingredients*? (Requirements)
1+ years of experience as a software engineer.
Familiarity with Object-Oriented Programming (OOP) concepts and languages such as Java, Kotlin, or similar (prior experience in other languages is welcomed and transferable).
Exposure to scripting languages like JavaScript and its frameworks (or a willingness to learn them).
Basic understanding of data structures and algorithms, with a focus on building problem-solving skills.
Initial experience or academic knowledge in building backend services and APIs.
Exposure to Agile/Scrum methodologies and an eagerness to collaborate within an iterative development process.
Familiarity with cloud platforms like AWS, or an interest in learning about cloud infrastructure.
A strong team player who collaborates effectively with peers, asks for help when needed, and is open to feedback.
Demonstrates humility, empathy, and a respectful attitude towards colleagues and team dynamics.
We believe that competent engineers can adapt promptly with little help, so even if you don't satisfy all the criteria but have a strong belief in your competencies, please do still apply!
Our Tech Stack
Toast's products run on a stack that ranges from guest- and restaurant-facing Android tablets to backend services in Java and Kotlin to internal, guest-facing, and restaurant-facing web apps. Our backend services follow a microservice architecture written using Java and Dropwizard, with services communicating in an event-driven fashion using Apache Pulsar. We use Apache Camel for implementing enterprise integration patterns. We use AWS extensively, ranging from S3 to DynamoDB to Lambda. We have our own platform for dealing with user management, service elevations, and robust load balancing. Toast stores data in a set of sharded Postgres databases and DynamoDB, and utilizes Apache Spark for large-scale data workloads including query and batch processing. The front end is built primarily using React and ES6. The main Toast POS application is an Android application written in Java and Kotlin. For data between tablets and our cloud platform, we operate RabbitMQ clusters as well as direct tablet communication to the back end. We use Datadog for system and application metrics and monitoring. We use Splunk for log aggregation.
Diversity, Equity, and Inclusion is Baked into our Recipe for Success
At Toast, our employees are our secret ingredient: when they thrive, we thrive. The restaurant industry is one of the most diverse, and we embrace that diversity with authenticity, inclusivity, respect, and humility. By embedding these principles into our culture and design, we create equitable opportunities for all and raise the bar in delivering exceptional experiences.
We Thrive Together
We embrace a hybrid work model that fosters in-person collaboration while valuing individual needs. Our goal is to build a strong culture of connection as we work together to empower the restaurant community. To learn more about how we work globally and regionally, check out: https://careers.toasttab.com/locations-toast.
Apply today!
Toast is committed to creating an accessible and inclusive hiring process. As part of this commitment, we strive to provide reasonable accommodations for persons with disabilities to enable them to access the hiring process. If you need an accommodation to access the job application or interview process, please contact candidateaccommodations@toasttab.com.
For roles in the United States: it is unlawful in Massachusetts to require or administer a lie detector test as a condition of employment or continued employment. An employer who violates this law shall be subject to criminal penalties and civil liability.
Posted 4 weeks ago
4 - 9 years
9 - 14 Lacs
Bengaluru
Work from Office
Application Manager
Bangalore, Karnataka, India
We invent the new to help the world move forward. Combining powerful analytics and deeper insights with bigger ideas and innovative solutions, we free up our clients' potential, thereby fulfilling our own. Take it seriously. Make it fun. Know it matters.
Application Managers oversee technical teams within a Delivery Team and help to manage day-to-day tasks to ensure high levels of productivity, accuracy, and work priority. The Application Manager is responsible for the technical solution delivery and for its maintenance in production.
DISCOVER your opportunity
What will your essential responsibilities include?
Technically lead and manage Business Analysts and Developers, including assignment of work.
Assist the Delivery Lead in managing SI Partners by helping to provide partners with day-to-day direction on prioritization and decisions.
Perform deliverable reviews and manage measurement of deliverable quality.
Assist in maintaining application standards such as app certification, vendor management, release coordination, and applying security standards.
Act as liaison between the SI Partner team and stakeholders.
Ensure technical team alignment with business expectations and the delivery roadmap.
Liaise and consult with the Architecture team to ensure design alignment with AXA XL's architecture strategy.
Provide technical SME assistance for the insurance billing and payment solutions (e.g., Guidewire, Majesco, SAP).
Estimate work requests at various levels.
Partner with Release Management to coordinate release activities.
Work with the Operational Change Management team to ensure training materials and release notes are being delivered.
Monitor and execute release and deployment activities.
Ensure full compliance with AXA standards for the business's products (incl. Security & Data Privacy).
Apply solid experience working in an Agile environment.
Assist in coordinating, and participate in, Agile ceremonies as required.
Monitor Agile ceremonies and activities to ensure compliance with Digital Factory standards.
You will report to the Delivery Lead, Claims.
SHARE your talent
We're looking for someone who has these abilities and skills:
Required Skills and Abilities:
Relevant years of hands-on work experience with complex applications.
Relevant years of experience working in an Agile environment.
Proven experience in the Microsoft technical suite: .NET (Core, Standard), SQL Server, C#, VB.NET, ASP.NET.
Deep understanding of enterprise integration patterns (API, middleware and/or data migration, secured data flow).
Cloud-based experience with Azure.
DevOps practices including CI/CD pipelines, infrastructure as code, and containerization.
Proficient in the use of Jira, Confluence, Bitbucket, TeamCity, and Datadog.
Timely and accurate completion of deliverables in a manner that is auditable, testable, and maintainable.
Implementation consistent with solution design and business specifications.
Ensure the technical integrity of changes made to systems.
Adherence to development governance and SDLC standards.
Team leadership abilities required, including experience leading and mentoring development professionals.
Must be able to set priorities and multi-task.
Prior work experience with Commercial Lines of Insurance.
Desired Skills and Abilities:
Proficiency with multiple application delivery models including Agile, iterative, and waterfall.
Broad understanding of application development and support technologies.
Prior work experience in an insurance or technology field preferred.
Prior experience working with multiple vendor partners.
Adaptable to new/different strategies, programs, technologies, practices, cultures, etc. Comfortable with change, able to easily make transitions.
Bachelor's degree in the field of computer science, information systems, or a related field preferred.
Posted 4 weeks ago
7 - 12 years
0 Lacs
Pune, Maharashtra, India
On-site
Years of experience: 7-12 years
Job Location: Pune
The Senior Consultant is responsible for leading and delivering ADDM (Application Discovery and Dependency Mapping) project workstreams. They are the subject matter expert (SME) responsible for managing complex technical issues, collaborating with developers and engineers, and implementing long-term solutions within Device42's ecosystem. The SME works closely with other technology SMEs to build and validate accurate application affinity groups and associated business applications within the ADDM platform. The SME should possess in-depth knowledge of Device42's features, including ADDM, and be able to create and maintain ADDM configurations to meet client needs.
Core Responsibilities:
Technical Expertise: Demonstrates deep understanding and expertise in Device42's ADDM platform (but not limited to it), including its features, capabilities, and limitations. Configures and maintains automated discovery jobs and associated scheduling.
Problem Solving: Identifies, analyzes, and resolves complex technical issues related to ADDM platforms, often collaborating with other SMEs, client resources, and vendors to remediate discovery job errors.
Solution Implementation: Develops and implements long-term solutions to address recurring issues and improve the overall performance and stability of the ADDM platform. Participates in business development efforts for ADDM opportunities, including assisting with or leading POCs and demos, project scoping, and proposal development.
Configuration and Maintenance: Creates, configures, and maintains ADDM platform configurations, automated discovery jobs, and associated scheduling to meet specific client requirements.
Client Interaction: Works directly with clients to understand their needs, gather requirements, and ensure accurate development and validation of affinity groups and business applications within the ADDM platform.
Reporting and Documentation: Generates standard and custom reports to support project deliverables and maintains detailed documentation of configurations and solutions.
Knowledge Sharing: Serves as a subject matter expert, providing guidance and support to other team members and stakeholders, potentially participating in training and knowledge-sharing initiatives.
Project Leadership: May lead ADDM workstreams, including project planning, status reporting, coordination of client and internal resources, and adherence to project timelines.
Preferred skills:
Experience with other platforms such as Flexera Foundation/Cloudscape, ServiceNow Discovery/MID, AWS Migration Evaluator, Microsoft Azure Migrate, and/or Google StratoZone is highly preferred.
Experience with other systems of record such as ServiceNow CMDB, LeanIX, Confluence, Jira, SolarWinds, Datadog, Dynatrace, etc. is also important in the delivery of ADDM services.
Knowledge of ITSM and ITOM processes, including CMDB.
Understanding of authentication technologies such as Active Directory, LDAP, TACACS, CyberArk, etc.
Basic understanding of Linux/Unix shell commands.
Understanding of networking topics including IP addressing, TCP/UDP ports, TCP/IP protocols, load balancers, firewalls, NAT, etc.
Ability to troubleshoot a network connection using ping, traceroute, netstat, and other commands (a small port-check sketch follows this listing).
Understanding of common application architectures and components (database, middleware, web, etc.).
Basic understanding of IaaS/PaaS cloud computing; experience and/or familiarity with AWS, Azure, or Google Cloud is a plus.
Ability to manipulate and analyze large data sets in Microsoft Excel and Microsoft Power BI for data visualizations.
Ability to communicate technical concepts to business leaders using verbal communication and visual aids such as PowerPoint or Power BI.
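The network troubleshooting skills mentioned above (ping, traceroute, netstat) can be complemented by a quick TCP reachability check when validating that a discovery job can reach its targets; the hostnames and ports in this Python sketch are hypothetical.

```python
import socket

def check_tcp_port(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Hypothetical targets a discovery job might need to reach.
    targets = [("db01.example.internal", 5432), ("app01.example.internal", 443)]
    for host, port in targets:
        status = "reachable" if check_tcp_port(host, port) else "unreachable"
        print(f"{host}:{port} is {status}")
```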
Posted 4 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
SystemsPlus is hiring a Principal SRE. Experience: 10 to 15 years. Location: Pune (Hybrid).
The client's Direct-to-Consumer Engineering team is responsible for creating, maintaining, and providing customer service for its branded eCommerce websites. We seek talented individuals who fit into our team-oriented atmosphere and are proud to offer an environment with the comfort of a true work/life balance.
The Principal Site Reliability Engineer will play a lead role in the production environment by monitoring availability and taking a holistic view of system health. They will build software and systems to manage platform infrastructure and applications; improve reliability, quality, and time-to-market of our suite of software solutions; and measure and optimize system performance, all with an eye toward pushing our capabilities forward, getting ahead of customer needs, and innovating to continually improve.
Responsibilities
· Ensure availability, latency, performance, and efficiency of our global eCommerce sites
· Drive change management and incident management
· Promote best practices and innovative observability to guide product delivery teams in achieving operational excellence for new product deliveries
· Drive operational excellence and evangelize best practices in observability
· Develop unified observability dashboards and implement end-to-end (E2E) observability requirements
· Design innovative observability solutions for internal and external stakeholders
· Contribute to observability instrumentation standards and create repeatable patterns for engineering teams
· Define and implement E2E observability requirements and lead teams to support E2E best practices
· Collaborate with cross-functional teams to achieve objectives and drive high reliability into systems
· Build proprietary tools to mitigate weaknesses in incident management or software delivery
· Implement SRE best practices to increase system reliability and performance
· Automate processes for improved collaborative response and prepare teams for incidents
· Maintain error budgets, meet SLOs, and support uptime and availability of critical platform components (see the error-budget sketch after this listing)
· Automate technology stacks to improve operating costs while responding to traffic spikes
· Location: Pune – client office; in-person attendance is mandatory on Tuesday, Wednesday, and Thursday each week
· Work timings: first 3 months in EST for onboarding ramp-up, then IST working hours (8 hours) with a possible 1-hour evening overlap with the US team in EST (10am to 7pm)
Required Skills and Experience:
· Bachelor's degree in Computer Science, Information Science, Engineering, or a related field.
· 10+ years of experience in code management, deployment processes, procedures, and tools in a DevOps or SRE role.
· Experience with monitoring tools (preferred: Dynatrace, Splunk, Datadog, Grafana, and New Relic).
· Proficiency in state-of-the-art observability trends, tools, products, and technologies.
· Ability to identify organization-wide gaps in the SRE practice and implement solutions that contribute to organizational transformation.
· Experience driving cross-organization adoption of new technologies or initiatives.
· Ability to influence senior management in selecting the right strategy, processes, and structures to transform the organization into a modern SRE team.
· Proactive in identifying performance bottlenecks and anomalous system behavior, and in addressing root causes of service issues.
· Passionate about technology, with a strong sense of curiosity and a desire to improve processes, automate everything, and continuously learn.
· Successful experience supporting a cloud production environment (strong preference for Azure).
· Competency in one or more programming languages for automation (Python strongly preferred).
· Knowledge of cloud deployment tools and methodologies (ideally Ansible, but Terraform, Azure DevOps, etc. are also considered).
· Deep understanding of Kubernetes and Docker architecture and associated tools.
· Experience with at least one configuration management solution (e.g., Chef, Ansible, AWS CodeDeploy).
· Proficiency with repository and pipeline-related tools (e.g., GitLab, Jenkins, Bamboo, Travis, CircleCI).
· Experience with implementing and using various application and infrastructure monitoring tools.
· Strong troubleshooting skills.
· Ability to take ownership and deliver solutions autonomously.
Interested candidates may send their CV to vandana.jha@systems-plus.com
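For the error-budget and SLO items above, the underlying arithmetic is simple enough to sketch: a 99.9% availability SLO over a 30-day window allows roughly 43.2 minutes of downtime, and the remaining budget is the unspent fraction of that allowance.

```python
def error_budget_minutes(slo_target: float, window_days: int = 30) -> float:
    """Minutes of allowed downtime in the window for a given availability SLO."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1.0 - slo_target)

def budget_remaining(slo_target: float, downtime_minutes: float, window_days: int = 30) -> float:
    """Fraction of the error budget still unspent (negative means the budget is blown)."""
    budget = error_budget_minutes(slo_target, window_days)
    return (budget - downtime_minutes) / budget

if __name__ == "__main__":
    print(round(error_budget_minutes(0.999), 1))    # 43.2 minutes allowed over 30 days
    print(round(budget_remaining(0.999, 10.0), 3))  # 0.769 of the budget left after 10 minutes down
```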
Posted 4 weeks ago
0 years
0 Lacs
New Delhi, Delhi, India
On-site
At AlgoSec, what you do matters!
Over 2,200 of the world's leading organizations trust AlgoSec to help secure their most critical workloads across public cloud, private cloud, containers, and on-premises networks.
Join our global team, securing application connectivity, anywhere.
AlgoSec is seeking a Site Reliability Engineer for the SRE team in India.
Reporting to: Head of SRE
Location: Gurgaon, India
Direct Employment
Responsibilities
Ensure the reliability, scalability, and performance of our company's production environment, including a complex architecture with multiple servers, deployments, and various cloud technologies.
Collaborate with cross-functional teams, work independently, and prioritize effectively in a fast-paced environment.
Oversee and enhance monitoring capabilities for the production environment and ensure optimal performance and functionality across the technology stack.
Demonstrate flexibility to support our 24/7 operations and willingness to participate in on-call rotations to ensure timely incident response and resolution.
Address and resolve unexpected service issues while also creating and implementing tools and automation measures to proactively mitigate the likelihood of future problems.
Requirements
Minimum 5 years of experience in an SRE/DevOps position for SaaS-based products.
Experience in managing mission-critical production environments.
Experience with version control tools like Git, Bitbucket, etc.
Experience in establishing CI/CD procedures with Jenkins.
Working knowledge of databases.
Experience in effectively managing AWS infrastructure, demonstrating proficiency across multiple AWS Cloud services including networking, EC2, VPC, EKS, ELB/NLB, API Gateway, Cognito, and more.
Experience with monitoring tools like Datadog, ELK, Prometheus, Grafana, etc.
Experience in understanding and managing Linux infrastructure.
Experience in Bash or Python.
Experience with IaC tools like CloudFormation / CDK / Terraform.
Experience in Kubernetes and container management.
Excellent written and verbal communication skills in English, allowing for effective and articulate correspondence.
Strong teamwork, a positive demeanor, and a high level of integrity.
Exceptional organizational abilities, thorough attention to detail, and a high level of commitment to tasks at hand.
Sharp intellect, adeptness at picking up new information quickly, and strong self-motivation.
Advantages
Additional cloud services knowledge (Azure, GCP, etc.)
Understanding of Java, Maven, and NodeJS-based applications.
Experience in serverless architecture.
AlgoSec is an Equal Opportunity Employer (EEO), committed to creating a friendly, inclusive environment that is a pleasure to work in, and where there is an unbiased acceptance of others. AlgoSec believes that diversity and an inclusive company culture are key drivers of creativity, innovation, and performance. Furthermore, a diverse workforce and the maintenance of an atmosphere that welcomes versatile perspectives will enhance our ability to fulfill our vision.
Posted 4 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
OUR STORY
Quince was started to challenge the existing idea that nice things should cost a lot. Our mission was simple: create an item of equal or greater quality than the leading luxury brands and sell it at a much lower price.
OUR VALUES
Customer First. Customer satisfaction is our highest priority.
High Quality. True quality is a combination of premium materials and high production standards that everyone can feel good about.
Essential Design. We don't chase trends, and we don't sell everything. We're expert curators who find the very best and bring it to you at the lowest prices.
Always a Better Deal. Through innovation and real price transparency we want to offer the best deal to both our customers and our factory partners.
Environmentally and Socially Conscious. We're committed to sustainable materials and sustainable production methods. That means a cleaner environment and fair wages for factory workers.
OUR TEAM AND SUCCESS
Quince is a retail and technology company co-founded by a team that has extensive experience in retail, technology, and building early-stage companies. You'll work with a team of world-class talent from Stanford GSB, Google, D.E. Shaw, Stitch Fix, Urban Outfitters, Wayfair, McKinsey, Nike, etc.
Responsibilities
Technical Leadership
Architect, design, and implement scalable and reusable test automation frameworks for UI, API, and performance testing.
Drive shift-left testing by integrating automated tests early in the SDLC.
Optimize existing automation frameworks for faster execution, stability, and reliability.
Ensure comprehensive test coverage across functional, integration, regression, performance, and security testing layers.
Establish best practices in test automation using modern tools and frameworks.
Review and enhance CI/CD pipelines to include automated testing, test reporting, and quality gates.
Provide technical mentorship to SDET-1 and SDET-2 team members, guiding them on automation, testing best practices, and debugging complex issues.
Collaboration & Stakeholder Engagement
Work closely with developers, DevOps, and product managers to define test plans, strategies, and quality metrics.
Collaborate with development teams to implement unit and integration tests.
Drive defect triage and resolution processes, ensuring timely identification and fixes.
Partner with DevOps to enhance test execution in CI/CD pipelines.
Advocate for quality-first development practices across engineering teams.
Automation & Tooling
Develop robust test scripts in Java, Python, or JavaScript using frameworks like Selenium, Cypress, Appium, or Playwright.
Implement API automation testing using tools such as RestAssured, Postman, or Karate.
Lead efforts in performance testing using JMeter, Gatling, or Locust (a minimal Locust sketch follows this listing).
Ensure security testing is embedded within test pipelines.
Maintain test execution reports, dashboards, and key quality metrics using tools like Allure, TestRail, and Datadog.
Process & Best Practices
Define and enforce test-driven development (TDD) and behavior-driven development (BDD) methodologies.
Improve test data management strategies for stable and repeatable test execution.
Introduce mocking and service virtualization where applicable.
Conduct code reviews for automation scripts and provide constructive feedback.
Lead test strategy discussions for microservices and distributed systems testing.
Requirements
7-10 years of experience in software development, testing, and automation.
Strong proficiency in one or more programming languages (Java, Python, JavaScript).
Experience with test automation frameworks like Selenium, Appium, Cypress, TestNG, JUnit, and Playwright.
Solid knowledge of CI/CD tools (Jenkins, GitHub Actions, GitLab CI, Bamboo, or CircleCI).
Hands-on experience with containerized environments (Docker, Kubernetes).
Experience in testing microservices and RESTful APIs.
Good understanding of AWS, GCP, or Azure for cloud-based testing.
Exposure to performance and security testing methodologies.
Strong analytical and problem-solving skills.
Excellent communication and stakeholder management skills.
Quince provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran or military status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws.
Security Advisory: Beware of Frauds
At Quince, we're dedicated to recruiting top talent who share our drive for innovation. To safeguard candidates, Quince emphasizes legitimate recruitment practices. Initial communication is primarily via official Quince email addresses and LinkedIn; beware of deviations. Personal data and sensitive information will not be solicited during the application phase. Interviews are conducted via phone, in person, or through the approved platforms Google Meet or Zoom, never via messaging apps or other calling services. Offers are merit-based, communicated verbally, and followed up in writing. If personal information is requested to initiate the hiring process, rest assured it will be through secure and protected means.
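Because the role includes performance testing with tools such as Locust, here is a minimal, hypothetical Locust user class; the endpoints, task weights, and host are illustrative only.

```python
from locust import HttpUser, task, between

class StorefrontUser(HttpUser):
    """Simulated shopper hitting a hypothetical storefront's public API."""
    wait_time = between(1, 3)  # seconds of think time between tasks

    @task(3)
    def browse_listing(self):
        self.client.get("/api/products", params={"limit": 20}, name="/api/products")

    @task(1)
    def view_product(self):
        self.client.get("/api/products/sku-123", name="/api/products/:id")
```

Such a file would typically be run with a command like locust -f locustfile.py --host https://staging.example.com, with user count and spawn rate chosen in the web UI or on the command line.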
Posted 4 weeks ago
0.0 - 5.0 years
0 Lacs
Musheerabad, Hyderabad, Telangana
On-site
As the Senior DevOps Engineer focused on Observability, you will set observability standards, lead automation efforts, and mentor engineers, ensuring all monitoring and Datadog configuration changes are implemented as Infrastructure-as-Code (IaC). You will lead the design and management of a code-driven Datadog observability platform, providing end-to-end visibility into Java applications, Kubernetes workloads, and containerized infrastructure. This role emphasizes cost-effective observability at scale, requiring deep expertise in Datadog monitoring, logging, tracing, and optimization techniques. You'll collaborate closely with SRE, DevOps, and Software Engineering teams to standardize monitoring and logging practices and to deliver scalable, reliable, and cost-efficient observability solutions.
This is a hands-on engineering role focused on observability-as-code. All monitoring, logging, alerting, and Datadog configurations are defined and managed through Terraform, APIs, and CI/CD workflows, not manual configuration in the Datadog UI.
PRIMARY RESPONSIBILITIES:
Own and define observability standards for Java applications, Kubernetes workloads, and cloud infrastructure
Configure and manage the Datadog platform using Terraform and Infrastructure-as-Code (IaC) best practices
Drive adoption of structured JSON logging, distributed tracing, and custom metrics across Java and Python services (a structured-logging sketch follows this listing)
Optimize Datadog usage through cost governance, log filtering, sampling strategies, and automated reporting
Collaborate closely with Java developers and platform engineers to standardize instrumentation and alerting
Troubleshoot and resolve issues with missing or misconfigured logs, metrics, and traces, working with developers to ensure proper instrumentation and data flow into Datadog
Participate in incident response efforts, using Datadog insights for actionable alerting, root cause analysis (RCA), and reliability improvements
Serve as the primary point of contact for Datadog-related requests, supporting internal teams with onboarding, integration, and usage questions
Continuously audit and tune monitors for alert quality, reducing false positives and improving actionable signal detection
Maintain clear internal documentation on Datadog usage, standards, integrations, and IaC workflows
Evaluate and propose improvements to the observability stack, including new Datadog features, OpenTelemetry adoption, and future architecture changes
Mentor engineers and develop internal training programs on Datadog, observability-as-code, and modern log pipeline architecture
QUALIFICATIONS:
Bachelor's degree in Computer Science, Engineering, Mathematics, Physics, or a related technical field
5+ years of experience in DevOps, Site Reliability Engineering, or related roles with a strong focus on observability and infrastructure as code
Hands-on experience managing and scaling Datadog programmatically using code-based workflows (e.g. Terraform, APIs, CI/CD)
Deep expertise in Datadog including APM, logs, metrics, tracing, dashboards, and audit trails
Proven experience integrating Datadog observability into CI/CD pipelines (e.g. GitLab CI, AWS CodePipeline, GitHub Actions)
Solid understanding of AWS services and best practices for monitoring services on Kubernetes infrastructure
Strong background in Java application development is preferred
Job Types: Full-time, Permanent, Contractual / Temporary
Contract length: 12 months
Pay: ₹700,000.00 - ₹1,500,000.00 per year
Benefits: Paid sick time
Schedule: Monday to Friday, night shift (US shift)
Ability to commute/relocate: Musheerabad, Hyderabad, Telangana: Reliably commute or plan to relocate before starting work (Preferred)
Education: Bachelor's (Preferred)
Experience: DevOps: 5 years (Required)
Language: English (Required)
Location: Musheerabad, Hyderabad, Telangana (Preferred)
Shift availability: Night Shift (Required)
Work Location: In person
Expected Start Date: 01/06/2025
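To ground the structured JSON logging item above, here is a stdlib-only Python sketch of a JSON log formatter; the service name is a hypothetical tag, and in a Datadog setup the agent or a log shipper would be configured separately to collect these lines.

```python
import json
import logging
import sys
from datetime import datetime, timezone

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per log line so a log pipeline can parse fields reliably."""
    def format(self, record):
        payload = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
            "service": "orders-api",  # hypothetical service tag for illustration
        }
        if record.exc_info:
            payload["error.stack"] = self.formatException(record.exc_info)
        return json.dumps(payload)

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logging.basicConfig(level=logging.INFO, handlers=[handler])

logging.getLogger("orders").info("order created")
```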
Posted 4 weeks ago
Datadog, a popular monitoring and analytics platform, has been gaining traction in the tech industry in India. With the increasing demand for professionals skilled in Datadog, job opportunities are on the rise. In this article, we will explore the Datadog job market in India and provide valuable insights for job seekers looking to pursue a career in this field.
Major tech hubs such as Bengaluru, Hyderabad, Pune, Mumbai, and the Delhi NCR region, which account for most of the listings above, are known for their thriving tech industries and are actively hiring for Datadog roles.
The average salary range for Datadog professionals in India varies based on experience levels. Entry-level positions can expect a salary ranging from INR 4-6 lakhs per annum, while experienced professionals can earn upwards of INR 15 lakhs per annum.
A typical career path in Datadog may include roles such as Datadog Administrator, Datadog Developer, Datadog Consultant, and Datadog Architect. Progression usually follows a path from Junior Datadog Developer to Senior Datadog Developer, eventually leading to roles like Datadog Tech Lead or Datadog Manager.
In addition to proficiency in Datadog, professionals in this field are often expected to have skills in monitoring and analytics tools, cloud computing (AWS, Azure, GCP), scripting languages (Python, Bash), and knowledge of IT infrastructure.
With the increasing demand for Datadog professionals in India, now is a great time to explore job opportunities in this field. By honing your skills, preparing for interviews, and showcasing your expertise, you can confidently apply for Datadog roles and advance your career in the tech industry. Good luck!