
2943 Datadog Jobs - Page 23

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the employer's job portal.

3.0 - 5.0 years

0 Lacs

Kanayannur, Kerala, India

Remote

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity
We are looking for a skilled Cloud DevOps Engineer with expertise in both AWS and Azure platforms. This role is responsible for end-to-end DevOps support, infrastructure automation, CI/CD pipeline troubleshooting, and incident resolution across cloud environments. The role will handle escalations, lead root cause analysis, and collaborate with engineering and infrastructure teams to deliver high-availability services. You will also contribute to enhancing runbooks and SOPs, and to mentoring junior engineers.

Your Key Responsibilities
- Act as a primary escalation point for DevOps-related and infrastructure-related incidents across AWS and Azure.
- Provide troubleshooting support for CI/CD pipeline issues, infrastructure provisioning, and automation failures.
- Support containerized application environments using Kubernetes (EKS/AKS), Docker, and Helm.
- Create and refine SOPs, automation scripts, and runbooks for efficient issue handling.
- Perform deep-dive analysis and RCA for recurring issues and implement long-term solutions.
- Handle access management, IAM policies, VNet/VPC setup, security group configurations, and load balancers.
- Monitor and analyze logs using AWS CloudWatch, Azure Monitor, and other tools to ensure system health.
- Collaborate with engineering, cloud platform, and security teams to maintain stable and secure environments.
- Mentor junior team members and contribute to continuous process improvements.

Skills And Attributes For Success
- Hands-on experience with CI/CD tools like GitHub Actions, Azure DevOps Pipelines, and AWS CodePipeline.
- Expertise in Infrastructure as Code (IaC) using Terraform; good understanding of CloudFormation and ARM Templates.
- Familiarity with scripting languages such as Bash and Python.
- Deep understanding of AWS (EC2, S3, IAM, EKS) and Azure (VMs, Blob Storage, AKS, AAD).
- Container orchestration and management using Kubernetes, Helm, and Docker.
- Experience with configuration management and automation tools such as Ansible.
- Strong understanding of cloud security best practices, IAM policies, and compliance standards.
- Experience with ITSM tools like ServiceNow for incident and change management.
- Strong documentation and communication skills.

To qualify for the role, you must have
- 3 to 5 years of experience in DevOps, cloud infrastructure operations, and automation.
- Hands-on expertise in AWS and Azure environments.
- Proficiency in Kubernetes, Terraform, CI/CD tooling, and automation scripting.
- Experience in a 24x7 rotational support model.
- Relevant certifications in AWS and Azure (e.g., AWS DevOps Engineer, Azure Administrator Associate).

Technologies and Tools
Must haves:
- Cloud Platforms: AWS, Azure
- CI/CD & Deployment: GitHub Actions, Azure DevOps Pipelines, AWS CodePipeline
- Infrastructure as Code: Terraform
- Containerization: Kubernetes (EKS/AKS), Docker, Helm
- Logging & Monitoring: AWS CloudWatch, Azure Monitor
- Configuration & Automation: Ansible, Bash
- Incident & ITSM: ServiceNow or equivalent
- Certification: relevant AWS and Azure certifications

Good to have:
- Cloud Infrastructure: CloudFormation, ARM Templates
- Security: IAM Policies, Role-Based Access Control (RBAC), Security Hub
- Networking: VPC, Subnets, Load Balancers, Security Groups (AWS/Azure)
- Scripting: Python/Bash
- Observability: OpenTelemetry, Datadog, Splunk
- Compliance: AWS Well-Architected Framework, Azure Security Center

What We Look For
- Enthusiastic learners with a passion for cloud technologies and DevOps practices.
- Problem solvers with a proactive approach to troubleshooting and optimization.
- Team players who can collaborate effectively in a remote or hybrid work environment.
- Detail-oriented professionals with strong documentation skills.

What We Offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.
- Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next.
- Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way.
- Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs.
- Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
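The log monitoring and automation scripting this role calls for often starts with small triage scripts over exported log data. A minimal sketch in Python (the log format and service names are illustrative assumptions, not tied to CloudWatch or Azure Monitor APIs):

```python
# Hypothetical sketch: summarize exported log lines to spot failing services,
# the kind of small automation script a DevOps support role writes.
# The "LEVEL service message" format and service names are assumptions.
from collections import Counter

def error_summary(log_lines):
    """Count ERROR entries per service from lines like 'LEVEL service message'."""
    counts = Counter()
    for line in log_lines:
        parts = line.split(maxsplit=2)
        if len(parts) >= 2 and parts[0] == "ERROR":
            counts[parts[1]] += 1
    return dict(counts)

logs = [
    "INFO api-gateway request served",
    "ERROR payments upstream timeout",
    "ERROR payments connection refused",
    "ERROR auth token expired",
]
print(error_summary(logs))  # {'payments': 2, 'auth': 1}
```

In practice a script like this would pull lines from a CloudWatch Logs export or an agent's output rather than a hard-coded list.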

Posted 2 weeks ago

Apply

3.0 - 5.0 years

0 Lacs

Trivandrum, Kerala, India

Remote

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity
We are looking for a skilled Cloud DevOps Engineer with expertise in both AWS and Azure platforms. This role is responsible for end-to-end DevOps support, infrastructure automation, CI/CD pipeline troubleshooting, and incident resolution across cloud environments. The role will handle escalations, lead root cause analysis, and collaborate with engineering and infrastructure teams to deliver high-availability services. You will also contribute to enhancing runbooks and SOPs, and to mentoring junior engineers.

Your Key Responsibilities
- Act as a primary escalation point for DevOps-related and infrastructure-related incidents across AWS and Azure.
- Provide troubleshooting support for CI/CD pipeline issues, infrastructure provisioning, and automation failures.
- Support containerized application environments using Kubernetes (EKS/AKS), Docker, and Helm.
- Create and refine SOPs, automation scripts, and runbooks for efficient issue handling.
- Perform deep-dive analysis and RCA for recurring issues and implement long-term solutions.
- Handle access management, IAM policies, VNet/VPC setup, security group configurations, and load balancers.
- Monitor and analyze logs using AWS CloudWatch, Azure Monitor, and other tools to ensure system health.
- Collaborate with engineering, cloud platform, and security teams to maintain stable and secure environments.
- Mentor junior team members and contribute to continuous process improvements.

Skills And Attributes For Success
- Hands-on experience with CI/CD tools like GitHub Actions, Azure DevOps Pipelines, and AWS CodePipeline.
- Expertise in Infrastructure as Code (IaC) using Terraform; good understanding of CloudFormation and ARM Templates.
- Familiarity with scripting languages such as Bash and Python.
- Deep understanding of AWS (EC2, S3, IAM, EKS) and Azure (VMs, Blob Storage, AKS, AAD).
- Container orchestration and management using Kubernetes, Helm, and Docker.
- Experience with configuration management and automation tools such as Ansible.
- Strong understanding of cloud security best practices, IAM policies, and compliance standards.
- Experience with ITSM tools like ServiceNow for incident and change management.
- Strong documentation and communication skills.

To qualify for the role, you must have
- 3 to 5 years of experience in DevOps, cloud infrastructure operations, and automation.
- Hands-on expertise in AWS and Azure environments.
- Proficiency in Kubernetes, Terraform, CI/CD tooling, and automation scripting.
- Experience in a 24x7 rotational support model.
- Relevant certifications in AWS and Azure (e.g., AWS DevOps Engineer, Azure Administrator Associate).

Technologies and Tools
Must haves:
- Cloud Platforms: AWS, Azure
- CI/CD & Deployment: GitHub Actions, Azure DevOps Pipelines, AWS CodePipeline
- Infrastructure as Code: Terraform
- Containerization: Kubernetes (EKS/AKS), Docker, Helm
- Logging & Monitoring: AWS CloudWatch, Azure Monitor
- Configuration & Automation: Ansible, Bash
- Incident & ITSM: ServiceNow or equivalent
- Certification: relevant AWS and Azure certifications

Good to have:
- Cloud Infrastructure: CloudFormation, ARM Templates
- Security: IAM Policies, Role-Based Access Control (RBAC), Security Hub
- Networking: VPC, Subnets, Load Balancers, Security Groups (AWS/Azure)
- Scripting: Python/Bash
- Observability: OpenTelemetry, Datadog, Splunk
- Compliance: AWS Well-Architected Framework, Azure Security Center

What We Look For
- Enthusiastic learners with a passion for cloud technologies and DevOps practices.
- Problem solvers with a proactive approach to troubleshooting and optimization.
- Team players who can collaborate effectively in a remote or hybrid work environment.
- Detail-oriented professionals with strong documentation skills.

What We Offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.
- Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next.
- Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way.
- Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs.
- Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 2 weeks ago

Apply

5.0 - 8.0 years

7 - 10 Lacs

Bengaluru

Work from Office

Role Purpose
The purpose of this role is to provide significant technical expertise in architecture planning and design of the concerned tower (platform, database, middleware, backup etc.) as well as managing its day-to-day operations.

Job Summary:
We are looking for a PostgreSQL DBA to join our team. The ideal candidate will have 8 years of hands-on experience in managing and maintaining PostgreSQL databases. You will be responsible for ensuring optimal performance, availability, and security of our database systems while working closely with development and operations teams to provide seamless support for ongoing projects.

Key Responsibilities:
- Manage and maintain PostgreSQL databases, ensuring their high availability and performance.
- Perform regular database tuning and performance optimization.
- Monitor and manage database security, including backups, disaster recovery plans, and user roles.
- Troubleshoot and resolve database issues, including performance bottlenecks, errors, and downtime.
- Conduct regular maintenance activities such as indexing, vacuuming, and upgrades to newer PostgreSQL versions.
- Automate routine database tasks using scripts and tools.
- Design and implement database backup and recovery solutions.
- Work closely with developers to optimize SQL queries and ensure efficient schema design.
- Ensure data integrity and implement appropriate data retention policies.
- Collaborate with the infrastructure team to configure, manage, and monitor PostgreSQL in cloud environments (AWS, GCP, or Azure).
- Provide on-call support for critical database operations and incidents.

Requirements:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 7+ years of experience as a PostgreSQL DBA in a production environment.
- Strong knowledge of PostgreSQL architecture, replication, and clustering.
- Proficiency in performance tuning, query optimization, and index management.
- Hands-on experience with backup, recovery, and database security practices.
- Familiarity with database monitoring tools (e.g., pgAdmin, Nagios, Datadog).
- Experience in scripting languages (e.g., Bash, Python) for automation tasks.
- Knowledge of cloud environments such as AWS, Google Cloud, or Azure.
- Strong troubleshooting and problem-solving skills.
- Excellent communication and collaboration skills.

Preferred Skills:
- Experience with database migration and upgrades.
- Familiarity with NoSQL databases or other relational database systems.

Mandatory Skills: PostgreSQL Database Admin
Experience: 5-8 Years
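The routine maintenance this role automates (vacuuming, reindexing) is commonly driven by a script that emits per-table statements for psql or psycopg2 to run. A minimal sketch; the table names are illustrative assumptions:

```python
# Hypothetical sketch: generate routine PostgreSQL maintenance statements
# (vacuum/analyze and online reindex) for a list of tables. A real DBA script
# would execute these via psql or psycopg2; table names here are illustrative.
def maintenance_statements(tables, reindex=False):
    stmts = []
    for t in tables:
        stmts.append(f"VACUUM (ANALYZE, VERBOSE) {t};")
        if reindex:
            # CONCURRENTLY avoids blocking writes (PostgreSQL 12+)
            stmts.append(f"REINDEX TABLE CONCURRENTLY {t};")
    return stmts

for stmt in maintenance_statements(["public.orders", "public.users"], reindex=True):
    print(stmt)
```

A cron job would typically feed this list to `psql -c` one statement at a time, logging failures per table instead of aborting the whole run.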

Posted 2 weeks ago

Apply

2.0 - 5.0 years

4 - 9 Lacs

Noida, Delhi / NCR

Hybrid

Experience with cloud infrastructure, virtualization (VMware), and hybrid environments. Web server configuration, deployment, and troubleshooting. Familiarity with monitoring tools (Datadog, Grafana, Splunk) and alert management.
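At its core, the alert management this posting mentions is comparing metrics against thresholds. A minimal, tool-agnostic sketch (the thresholds and metric names are assumptions; Datadog and Grafana expose far richer query languages for this):

```python
# Hypothetical threshold-alerting sketch, independent of any specific
# monitoring tool. Metric names and limits are illustrative assumptions.
THRESHOLDS = {"cpu_percent": 90.0, "error_rate": 0.05}

def evaluate_alerts(metrics):
    """Return (metric, value, limit) triples for every breached threshold."""
    return [
        (name, metrics[name], limit)
        for name, limit in THRESHOLDS.items()
        if metrics.get(name, 0) > limit
    ]

print(evaluate_alerts({"cpu_percent": 97.2, "error_rate": 0.01}))
# -> [('cpu_percent', 97.2, 90.0)]
```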

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Title: Backend Engineer
Location: Gurugram
Experience: 4-9 Years
CTC: Up to ₹35 LPA

Overview:
We are hiring for our esteemed client, a Series-A funded deep-tech company developing a revolutionary app-based operating system for Computer Vision. The team is driving innovation in real-time video analytics and large-scale data processing, delivering robust and scalable backend systems that support AI-powered solutions globally.

Key Responsibilities:
- Design and build scalable, fault-tolerant backend systems using Golang or Node.js.
- Architect and implement microservices-based systems ensuring high performance and availability.
- Work across multiple databases: SQL, NoSQL, PostgreSQL, MongoDB, ClickHouse, etc.
- Integrate service communication via message queues like RabbitMQ and Kafka.
- Develop core backend services including APIs (REST/GraphQL), caching (Redis), deployment automation, and CI/CD workflows.
- Optimize system performance through intelligent caching, load balancing, and observability tools (New Relic, Datadog).
- Collaborate with cross-functional teams to deliver production-ready features and ensure secure, reliable systems.

Required Skills & Qualifications:
- 4+ years of backend development experience.
- Strong expertise in Node.js, Golang, and/or C++.
- Experience in migrating monolithic systems to a microservices architecture.
- Proficiency in distributed systems, message queues (Kafka, RabbitMQ), and caching layers (Redis, Memcached).
- Solid understanding of SQL and NoSQL databases.
- Familiarity with APM tools (New Relic, Datadog) and CI/CD workflows.
- Prior experience in fast-paced, product-based startup environments is a plus.
- Bonus: Knowledge of DevOps tooling and on-premise setups.
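The caching layers named above (Redis, Memcached) follow a get-or-compute-with-TTL pattern. A minimal in-process sketch of that pattern in Python (a production service would use a Redis client rather than a local dict; key names and TTLs are illustrative):

```python
# Hypothetical in-process sketch of the TTL caching pattern that Redis or
# Memcached provide in production. Keys and TTLs are illustrative assumptions.
import time

class TTLCache:
    def __init__(self):
        self._store = {}  # key -> (value, expiry_timestamp)

    def get_or_set(self, key, compute, ttl=60.0):
        entry = self._store.get(key)
        now = time.monotonic()
        if entry and entry[1] > now:
            return entry[0]          # cache hit: still fresh
        value = compute()            # cache miss or expired: recompute
        self._store[key] = (value, now + ttl)
        return value

cache = TTLCache()
calls = []
fetch = lambda: calls.append(1) or "user-42-profile"
assert cache.get_or_set("user:42", fetch) == "user-42-profile"
assert cache.get_or_set("user:42", fetch) == "user-42-profile"  # served from cache
assert len(calls) == 1  # compute ran only once
```

The same shape scales out when the dict is replaced by a shared store, which is why Redis `SET key value EX ttl` plus a read-through wrapper is such a common backend idiom.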

Posted 2 weeks ago

Apply

5.0 - 10.0 years

20 - 30 Lacs

Pune, Bengaluru

Hybrid

AWS Site Reliability Engineer (5 to 12 Years)

Required Skills & Experience

Cloud Services (AWS) – hands-on experience with the following AWS services:
- EC2 (Elastic Compute Cloud)
- EKS (Elastic Kubernetes Service)
- SES (Simple Email Service)
- SQS (Simple Queue Service)
- SNS (Simple Notification Service)
- S3 (Simple Storage Service)
- DynamoDB
- RDS / Aurora (Relational Database Service)
- OpenSearch (formerly Elasticsearch Service)
- ElastiCache
- Security Groups
- CloudWatch
High-level knowledge of AWS networking concepts, including VPCs and Subnets.

Tooling – hands-on experience with:
- Datadog (Monitoring and Observability)
- GitLab (CI/CD, Version Control)
- GitHub (Version Control)
- PagerDuty (On-call Management)
- BlazeMeter (Performance Testing)
- K9s (Kubernetes CLI)

Technical Skills
- High proficiency in Terraform for Infrastructure as Code (IaC).
- Strong scripting abilities in Bash and Python.
- Familiarity with the Go programming language.
- Expertise in using the AWS CLI.
- Proficiency with kubectl for Kubernetes cluster management.
- Experience with Helm for Kubernetes package management.
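SRE scripting against AWS APIs routinely wraps calls in retries with exponential backoff, since throttling errors are expected under load. A minimal sketch, with a simulated flaky call standing in for a real boto3 or AWS CLI request:

```python
# Hypothetical retry-with-exponential-backoff sketch. flaky_call simulates a
# throttled AWS API request; a real script would call boto3 or the AWS CLI.
import time

def with_backoff(fn, max_attempts=4, base_delay=0.01):
    for attempt in range(max_attempts):
        try:
            return fn()
        except RuntimeError:
            if attempt == max_attempts - 1:
                raise                                  # out of attempts
            time.sleep(base_delay * (2 ** attempt))    # 0.01s, 0.02s, 0.04s, ...

attempts = {"n": 0}
def flaky_call():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("ThrottlingException")  # simulated AWS throttle
    return "ok"

print(with_backoff(flaky_call))  # ok
```

Production versions usually add jitter to the sleep so that many clients retrying at once don't synchronize; AWS SDKs ship this behavior built in.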

Posted 2 weeks ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

About VOIS
VOIS (Vodafone Intelligent Solutions) is a strategic arm of Vodafone Group Plc, creating value and enhancing quality and efficiency across 28 countries, and operating from 7 locations: Albania, Egypt, Hungary, India, Romania, Spain and the UK. Over 29,000 highly skilled individuals are dedicated to being Vodafone Group’s partner of choice for talent, technology, and transformation. We deliver the best services across IT, Business Intelligence Services, Customer Operations, Business Operations, HR, Finance, Supply Chain, HR Operations, and many more. Established in 2006, VOIS has evolved into a global, multi-functional organisation, a Centre of Excellence for Intelligent Solutions focused on adding value and delivering business outcomes for Vodafone.

About VOIS India
In 2009, VOIS started operating in India and now has established global delivery centres in Pune, Bangalore and Ahmedabad. With more than 14,500 employees, VOIS India supports global markets and group functions of Vodafone, and delivers best-in-class customer experience through multi-functional services in the areas of Information Technology, Networks, Business Intelligence and Analytics, Digital Business Solutions (Robotics & AI), Commercial Operations (Consumer & Business), Intelligent Operations, Finance Operations, Supply Chain Operations, HR Operations and more.

Core Competencies
- Excellent knowledge of EKS, Kubernetes and related AWS components.
- Kubernetes networking.
- Kubernetes DevOps, including deployment of EKS clusters using IaC (Terraform) and CI/CD pipelines.
- EKS secret management, autoscaling and lifecycle management.
- EKS security using AWS native services.
- Excellent understanding of AWS cloud services such as VPC, EC2, ECS, S3, EBS, ELB, Elastic IPs, Security Groups, etc.
- AWS component deployment using Terraform.
- Application onboarding on Kubernetes using Argo CD.
- AWS CodePipeline, CodeBuild, CodeCommit.
- HashiCorp stack, HashiCorp Packer.
- Bitbucket and Git.
- Profound cloud technology, network, security and platform expertise (AWS, Google Cloud or Azure).
- Good documentation and communication skills.
- Good understanding of ELK, CloudWatch, Datadog.

Roles & Responsibilities
- Manage project-driven integration and day-to-day administration of cloud solutions.
- Develop prototypes, and design and build modules and solutions for cloud platforms in iterative agile cycles; develop, maintain, and optimize the business outcome.
- Conduct peer reviews and maintain coding standards.
- Drive automation using CI/CD with Jenkins or Argo CD.
- Drive cloud solution automation and integration activity for the cloud provider (AWS) and tenant (project) workloads.
- Build and deploy AWS cloud infrastructure using CloudFormation and Terraform scripts.
- Use Ansible and Python to perform routine tasks such as user management and security hardening.
- Provide professional technical consultancy to migrate and transform existing on-premises applications to the public cloud, and support all cloud-related programmes and existing environments.
- Design and deploy direct connect networking between AWS and the datacentre.
- Train and develop AWS expertise within the organisation.
- Proven troubleshooting skills to resolve issues related to cloud network, storage and performance management.

VOIS Equal Opportunity Employer Commitment
VOIS is proud to be an Equal Employment Opportunity Employer. We celebrate differences and we welcome and value diverse people and insights. We believe that being authentically human and inclusive powers our employees’ growth and enables them to create a positive impact on themselves and society. We do not discriminate based on age, colour, gender (including pregnancy, childbirth, or related medical conditions), gender identity, gender expression, national origin, race, religion, sexual orientation, status as an individual with a disability, or other applicable legally protected characteristics.

As a result of living and breathing our commitment, our employees have helped us get certified as a Great Place to Work in India for four years running. We have also been highlighted among the Top 5 Best Workplaces for Diversity, Equity, and Inclusion, Top 10 Best Workplaces for Women, Top 25 Best Workplaces in IT & IT-BPM, and 14th Overall Best Workplaces in India by the Great Place to Work Institute in 2023. These achievements position us among a select group of trustworthy and high-performing companies which put their employees at the heart of everything they do. By joining us, you are part of our commitment. We look forward to welcoming you into our family, which represents a variety of cultures, backgrounds, perspectives, and skills! Apply now, and we’ll be in touch!
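Application onboarding on EKS via Helm or Argo CD, as the posting above describes, ultimately renders Kubernetes manifests. A minimal sketch of building a Deployment spec as a Python dict, the object a template engine would serialize to YAML (the app name, image, and replica count are illustrative assumptions):

```python
# Hypothetical sketch: render a minimal Kubernetes Deployment manifest as a
# dict, the kind of object Helm or Argo CD templates into YAML before applying
# it to an EKS cluster. All names and images are illustrative.
def deployment_manifest(name, image, replicas=2):
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},  # must match pod template labels
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

m = deployment_manifest("payments-api", "example.registry/payments:1.4.2", replicas=3)
print(m["kind"], m["spec"]["replicas"])  # Deployment 3
```

Keeping the selector and pod-template labels derived from one variable, as here, avoids the classic mismatch error the API server rejects.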

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Delhi, India

Remote

Overview
WELCOME TO SITA. We're the team that keeps airports moving, airlines flying smoothly, and borders open. Our tech and communication innovations are the secret behind the success of the world’s air travel industry. You'll find us at 95% of international hubs. We partner closely with over 2,500 transportation and government clients, each with their own unique needs and challenges. Our goal is to find fresh solutions and cutting-edge tech to make their operations run like clockwork. Want to be a part of something big? Are you ready to love your job? The adventure begins right here, with you, at SITA.

About The Role & Team
We are looking for a Monitoring and DevOps Tools Specialist to join our E&T Integrated System Support Engineering (ISSE) team, implementing and maintaining end-to-end monitoring solutions and administering and supporting DevOps tools to ensure system reliability, performance, and operational efficiency. You will be based in our Gurugram office and gain international exposure, interacting with people across the company in Geneva, London, Barcelona, Montreal, Paris, Dubai and Singapore.

Wait. You might wonder: “What do we do at SITA FOR AIRCRAFT?” We make flight operations, air traffic management and aircraft maintenance more sustainable and efficient. How? By enabling collaboration between people and organizations in the air transport industry through:
- Communication – connecting aircraft and people around the world
- Data & Platform – turning aircraft data into valuable insight for the entire industry
- Applications – empowering the industry with user-friendly tools that make flight operations more sustainable and efficient

What You’ll Do
You will develop, step by step and over the months, the following types of activities:
- Participate in the planning and implementation of new tools and their lifecycle maintenance.
- Help build a DevOps and reliability culture across the organization.
- Learn and apply IT best practices, security policies, and compliance standards relevant to systems monitoring and operations.
- Apply automation and software to any tasks or parts of the system that would benefit from it or are performed manually.
- Collaborate with cross-functional teams to ensure successful deployment and integration of DevOps tools.

Qualifications

ABOUT YOUR SKILLS
- Good understanding of tools such as Git, Jira, Confluence, and Jenkins/Bamboo/GitLab.
- Understanding of Linux operating systems and scripting languages such as Bash and Python.
- Theoretical or hands-on experience in system administration activities.
- Willing to explore the latest DevOps tools, proof-of-concept implementation and technical documentation.
- Participate in DevOps tools upgrades, installation, and lifecycle maintenance.
- Familiarity with Infrastructure as Code tools like Terraform is a plus.
- Comfortable using cloud-related technologies such as Azure/AWS/GCP (preferably Azure).
- Familiar with basic administration and usage of monitoring tools such as New Relic, Nagios, Prometheus, Grafana, and SolarWinds, with a willingness to learn and grow expertise.
- Good knowledge of managing ServiceNow tickets for incident, problem, and change management, with an understanding of SLA-driven resolution processes.
- Basic programming and scripting skills are required.
- Familiarity with configuration management tools like Ansible/Puppet.
- A graduate in Computer Science or a related discipline with 5+ years of relevant experience in software and/or system administration.
- Attention to detail, working in an environment where precision and accuracy are required.
- Problem-solving skills – the ability to follow problems through to resolution.
- Good organizational skills and the ability to multi-task.

NICE-TO-HAVE
- Familiarity with CI/CD pipeline automation and deployment frameworks such as Docker and Kubernetes.
- Exposure to other monitoring tools such as Elastic, Logstash, Kibana, Dynatrace, Datadog, and observability methodologies.
- Any past experience in the aviation domain is desirable.

What We Offer
We’re all about diversity. We operate in 200 countries and speak 60 different languages and cultures. We’re really proud of our inclusive environment. Our offices are comfortable and fun places to work, and we make sure you get to work from home too. Find out what it's like to join our team and take a step closer to your best life ever.
🏡 Flex Week: Work from home up to 2 days/week (depending on your team’s needs).
⏰ Flex Day: Make your workday suit your life and plans.
🌎 Flex Location: Take up to 30 days a year to work from any location in the world.
🌿 Employee Wellbeing: We’ve got you covered with our Employee Assistance Program (EAP), for you and your dependents, 24/7, 365 days/year. We also offer Champion Health – a personalized platform that supports a range of wellbeing needs.
🚀 Professional Development: Level up your skills with our training platforms, including LinkedIn Learning!
🙌🏽 Competitive Benefits: Competitive benefits that make sense with both your local market and employment status.

SITA is an Equal Opportunity Employer. We value a diverse workforce. In support of our Employment Equity Program, we encourage women, aboriginal people, members of visible minorities, and/or persons with disabilities to apply and self-identify in the application process.

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

Company Overview Docusign brings agreements to life. Over 1.5 million customers and more than a billion people in over 180 countries use Docusign solutions to accelerate the process of doing business and simplify people’s lives. With intelligent agreement management, Docusign unleashes business-critical data that is trapped inside of documents. Until now, these were disconnected from business systems of record, costing businesses time, money, and opportunity. Using Docusign’s Intelligent Agreement Management platform, companies can create, commit, and manage agreements with solutions created by the #1 company in e-signature and contract lifecycle management (CLM). What you'll do Docusign is seeking a talented and results oriented Data Engineer to focus on delivering trusted data to the business. As a member of the Global Data Analytics (GDA) Team, the Data Engineer leverages a variety of technologies to design, develop and deliver new features in addition to loading, transforming and preparing data sets of all shapes and sizes for teams around the world. During a typical day, the Engineer will spend time developing new features to analyze data, develop solutions and load tested data sets into the Snowflake Enterprise Data Warehouse. The ideal candidate will demonstrate a positive “can do” attitude, a passion for learning and growing, and the drive to work hard and get the job done in a timely fashion. This individual contributor position provides plenty of room to grow -- a mix of challenging assignments, a chance to work with a world-class team, and the opportunity to use innovative technologies such as AWS, Snowflake, dbt, Airflow and Matillion. This is an individual contributor role reporting to the Manager, Data Engineering. 
Responsibility Design, develop and maintain scalable and efficient data pipelines Analyze and Develop data quality and validation procedures Work with stakeholders to understand the data requirements and provide solutions Troubleshoot and resolve data issues on time Learn and leverage available AI tools for increased developer productivity Collaborate with cross-functional teams to ingest data from various sources Continuously evaluate and improve data architecture and processes Own, monitor, and improve solutions to ensure SLAs are met Develop and maintain documentation for Data infrastructure and processes Executes projects using Agile Scrum methodologies and be a team player Job Designation Hybrid: Employee divides their time between in-office and remote work. Access to an office location is required. (Frequency: Minimum 2 days per week; may vary by team but will be weekly in-office expectation) Positions at Docusign are assigned a job designation of either In Office, Hybrid or Remote and are specific to the role/job. Preferred job designations are not guaranteed when changing positions within Docusign. Docusign reserves the right to change a position's job designation depending on business needs and as permitted by local law. 
What you bring Basic Bachelor’s Degree in Computer Science, Data Analytics, Information Systems, etc Experience developing data pipelines in one of the following languages: Python or Java 5+ years dimensional and relational data modeling experience Preferred 5+ years in data warehouse engineering (OLAP) Snowflake, Teradata etc 5+ years with transactional databases (OLTP) Oracle, SQL Server, MySQL 5+ years with commercial ETL tools - DBT, Matillion etc 5+ years delivering ETL solutions from source systems, databases, APIs, flat-files, JSON Experience developing Entity Relationship Diagrams with Erwin, SQLDBM, or equivalent Experience working with job scheduling and monitoring systems (Airflow, Datadog, AWS SNS) Familiarity with Gen AI tools like Git Copilot and dbt copilot. Good understanding of Gen AI Application frameworks. Knowledge on any agentic platforms Experience building BI Dashboards with tools like Tableau Experience in the financial domain, sales and marketing, accounts payable, accounts receivable, invoicing Experience managing work assignments using tools like Jira and Confluence Experience with Scrum/Agile methodologies Ability to work independently and as part of a team Excellent analytical and problem solving and communication skills Excellent SQL and database management skills Life at Docusign Working here Docusign is committed to building trust and making the world more agreeable for our employees, customers and the communities in which we live and work. You can count on us to listen, be honest, and try our best to do what’s right, every day. At Docusign, everything is equal. We each have a responsibility to ensure every team member has an equal opportunity to succeed, to be heard, to exchange ideas openly, to build lasting relationships, and to do the work of their life. Best of all, you will be able to feel deep pride in the work you do, because your contribution helps us make the world better than we found it. 
And for that, you’ll be loved by us, our customers, and the world in which we live. Accommodation Docusign is committed to providing reasonable accommodations for qualified individuals with disabilities in our job application procedures. If you need such an accommodation, or a religious accommodation, during the application process, please contact us at accommodations@docusign.com. If you experience any issues, concerns, or technical difficulties during the application process please get in touch with our Talent organization at taops@docusign.com for assistance. Applicant and Candidate Privacy Notice
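The data-pipeline work described above hinges on dependency ordering of the kind Airflow (listed as a preferred tool) provides. As a minimal, hedged sketch, task names are illustrative, not Docusign's, the same dependency-ordering idea can be shown with Python's stdlib graphlib:

```python
from graphlib import TopologicalSorter

# Illustrative DAG: each task maps to the set of tasks it depends on.
dag = {
    "extract_orders": set(),
    "extract_customers": set(),
    "transform_join": {"extract_orders", "extract_customers"},
    "load_warehouse": {"transform_join"},
    "data_quality_checks": {"load_warehouse"},
}

# static_order() yields tasks so that every task appears after its dependencies.
order = list(TopologicalSorter(dag).static_order())
print(order)
```

A scheduler like Airflow adds retries, backfills and monitoring on top, but this topological order is the core contract a pipeline DAG encodes.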

Posted 2 weeks ago

Apply

7.0 years

12 - 18 Lacs

Jaipur, Rajasthan, India

On-site

About The Role We are seeking a highly skilled Lead Software Developer with a strong background in SaaS product development and expertise in designing scalable, secure, and resilient systems. You will lead the technical architecture of our cloud-native platforms, manage cross-functional engineering efforts, and implement best practices across backend, frontend, and infrastructure layers. This is a leadership role based on-site in Jaipur and ideal for someone who enjoys solving complex system challenges and building high-impact software. Key Responsibilities Lead the design and implementation of scalable SaaS application architecture Architect and develop solutions using Laravel, NestJS, Node.js, and AWS Build and manage microservices and serverless infrastructure for high availability and maintainability Implement and manage asynchronous and parallel request handling for optimized system performance Design and scale load handling and queue management systems using BullMQ and Redis Develop and integrate real-time data workflows using webhooks and external APIs Configure and manage Node.js clusters for performance and fault tolerance Collaborate with frontend developers working on React and Vue.js to deliver seamless UI/backend integration Ensure code quality, observability, and maintainability across services Monitor, debug, and resolve issues in production using logs and performance tracking tools Drive CI/CD implementation, infrastructure automation, and DevOps practices Required Skills & Qualifications 7+ years of experience in full-stack or backend software development Proven experience as a Lead Developer on SaaS products Proficient in Laravel, NestJS, and Node.js Deep understanding of BullMQ, Redis, and message/queue-based architectures Experience with asynchronous processing, parallel request handling, and load distribution Proficient in microservices and serverless architecture patterns Strong command of AWS (EC2, Lambda, RDS, S3, API Gateway, IAM, 
CloudWatch) Hands-on experience with webhooks, job queues, background tasks, and rate-limiting strategies Familiarity with Node.js cluster mode and scaling strategies Solid experience in debugging, tracing, and performance monitoring Frontend integration experience with React or Vue.js (please specify your exposure) Strong leadership, collaboration, and mentoring abilities Preferred Skills (Nice To Have) Experience with Docker, Kubernetes, and CI/CD pipelines Exposure to event-driven architecture and multi-tenant SaaS platforms Familiarity with observability tools such as Grafana, Datadog, or New Relic Skills: saas,typescript,bullmq,mvc,frontend integration with react or vue.js,aws,kubernetes,redis,serverless architecture,asynchronous processing,graphql,docker,ci/cd,rabbitmq,nestjs,php,javascript,mysql,microservices,node.js,laravel
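The posting's queueing stack is Node.js with BullMQ, but the rate-limiting strategies it asks for are language-agnostic. A common choice is a token bucket, sketched here in Python with illustrative parameters:

```python
import time

class TokenBucket:
    """Admit roughly `rate` requests per second, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)   # start full, so an initial burst passes
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, never exceeding capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)
burst = [bucket.allow() for _ in range(12)]
print(burst[:10])
```

A bucket of capacity 10 absorbs a burst of 10 calls, then admits roughly `rate` calls per second; BullMQ exposes a similar idea through its queue rate-limiter settings.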

Posted 2 weeks ago

Apply


3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

About Us Yubi stands for ubiquitous. But Yubi will also stand for transparency, collaboration, and the power of possibility. From being a disruptor in India’s debt market to marching towards global corporate markets from one product to one holistic product suite with seven products Yubi is the place to unleash potential. Freedom, not fear. Avenues, not roadblocks. Opportunity, not obstacles. About Yubi Yubi, formerly known as CredAvenue, is re-defining global debt markets by freeing the flow of finance between borrowers, lenders, and investors. We are the world's possibility platform for the discovery, investment, fulfillment, and collection of any debt solution. At Yubi, opportunities are plenty and we equip you with tools to seize it. In March 2022, we became India's fastest fintech and most impactful startup to join the unicorn club with a Series B fundraising round of $137 million. In 2020, we began our journey with a vision of transforming and deepening the global institutional debt market through technology. Our two-sided debt marketplace helps institutional and HNI investors find the widest network of corporate borrowers and debt products on one side and helps corporates to discover investors and access debt capital efficiently on the other side. Switching between platforms is easy, which means investors can lend, invest and trade bonds - all in one place. All of our platforms shake up the traditional debt ecosystem and offer new ways of digital finance. Yubi Credit Marketplace - With the largest selection of lenders on one platform, our credit marketplace helps enterprises partner with lenders of their choice for any and all capital requirements. 
Yubi Invest - Fixed income securities platform for wealth managers & financial advisors to channel client investments in fixed income Financial Services Platform - Designed for financial institutions to manage co-lending partnerships & asset based securitization Spocto - Debt recovery & risk mitigation platform Corpository - Dedicated SaaS solutions platform powered by Decision-grade data, Analytics, Pattern Identifications, Early Warning Signals and Predictions to Lenders, Investors and Business Enterprises So far, we have on-boarded over 17000+ enterprises, 6200+ investors & lenders and have facilitated debt volumes of over INR 1,40,000 crore. Backed by marquee investors like Insight Partners, B Capital Group, Dragoneer, Sequoia Capital, LightSpeed and Lightrock, we are the only-of-its-kind debt platform globally, revolutionizing the segment. At Yubi, People are at the core of the business and our most valuable assets. Yubi is constantly growing, with 1000+ like-minded individuals today, who are changing the way people perceive debt. We are a fun bunch who are highly motivated and driven to create a purposeful impact. Come, join the club to be a part of our epic growth story. About The Role As a Senior DevOps Engineer, you will be part of a highly talented DevOps team who manages the entire infrastructure for Yubi. You will work with development teams to understand their requirements, optimize them to reduce costs, create scripts for creating and configuring them, maintain and monitor the infrastructure. As a financial services firm, security is of utmost concern to our firm and you will ensure that all data handled by the entire platform, key configurations, passwords etc. are secure from leaks. You will ensure that the platform is scaled to meet our user needs and optimally performing at all times and our users get a world class experience using our software products. 
You will ensure that data, source code and configurations are adequately backed up to prevent loss of data. You will be well versed in tools to automate all such DevOps tasks. Responsibilities Troubleshoot web and backend applications and issues. Good understanding of multi-tier applications. Knowledge of AWS security, application security, and security best practices. SCA analysis; analyzing security reports, SonarQube profiles and quality gates. Able to draft solutions to improve security based on reporting. Lead, drive and implement highly scalable, highly available and complex solutions. Up to date with the latest DevOps tools and ecosystem. Excellent written and verbal communication. Requirements Bachelor’s/Master’s degree in Computer Science or equivalent work experience. 3-6 years of working experience as a DevOps engineer. AWS Cloud expertise is a must and primary; Azure/GCP cloud knowledge is a plus. Extensive knowledge of and experience with major AWS services. Advanced AWS networking setup: routing, VPN, cross-account networking, use of proxies. Experience with AWS multi-account infrastructure. Infrastructure as code using CloudFormation or Terraform. Containerization: Docker/Kubernetes/ECS/Fargate. Configuration management tools such as Chef/Ansible/Salt. CI/CD: Jenkins/CodePipeline/CodeDeploy. Basic expertise in scripting languages such as shell, Python or Node.js. Adept at Continuous Integration/Continuous Deployment. Experience working with source code repos like GitLab, GitHub or Bitbucket. Monitoring tools: CloudWatch agent, Prometheus, Grafana, New Relic, Dynatrace, Datadog, OpenAPM, etc. ELK knowledge is a plus. Knowledge of ChatOps. Adept at using various operating systems: Windows, macOS and Linux. Expertise with command-line tools such as the AWS CLI and Git, and with programming against AWS APIs. Experience with both SQL (RDS Postgres/MySQL) and NoSQL databases (MongoDB), data warehousing (Redshift), and data lakes. 
Knowledge and experience in instrumentation, metrics and monitoring concepts. Benefits YUBI is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, or age.
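Since the role stresses AWS security and keeping configurations safe, here is a hedged sketch of a least-privilege IAM policy document built with the stdlib; the bucket and prefix names are placeholders, not real infrastructure, and in practice such a document would be attached via CloudFormation or Terraform as the posting describes:

```python
import json

# Illustrative least-privilege policy: read-only access to a single S3 prefix.
# Bucket name and prefix are hypothetical examples.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadReportsPrefixOnly",
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-bucket/reports/*",
        }
    ],
}

print(json.dumps(policy, indent=2))
```

Scoping `Action` and `Resource` this narrowly is the core of least privilege: the principal can read objects under one prefix and nothing else.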

Posted 2 weeks ago

Apply


6.0 - 11.0 years

5 - 10 Lacs

Hyderabad

Remote

Title: Senior Engineer Exp- 6+ yrs Job mode- Remote Job Type- C2C Budget- 5 to 10 LPA Job Description- Key Responsibilities: Work effectively as part of a project team alongside the Product Owner, Scrum Master and other team members. Prioritise a deep understanding of the importance and principles of engineering excellence and demonstrate this knowledge in your work. Write clean code in line with the team's set standards. Look for ways to improve your team’s coding standards. Own, scope and deliver well-defined deliverables or stories. Communicate and update your progress regularly at stand-ups or similar agile events and ceremonies. Deliver and maintain software products conforming to the agreed specifications and enterprise quality standards & guardrails. Support, monitor, and maintain production-grade systems, including utilising observability tooling and issue remediation. Collaborate closely and cooperatively with your technical and non-technical teams to work towards the best solution that maximises value to the customer. Contribute to a culture of code quality and implement automated, unit and integration testing as part of the software development lifecycle. Apply good security processes such as threat modelling to the code you develop. Grow your knowledge of architecture, modern engineering principles and design patterns. Implement your team’s approach to delivering high-quality, tested code. Maintain and improve CI/CD pipelines. Play a role in code reviews and actively review pull requests from other team members. Produce software technical specifications and other documentation as required for development solutions. Maintain good working relationships with colleagues, vendors and customers of the department. Skills and Experience: Experience (required) in building API products and API management, e.g. Apigee. 
Including API versioning, documentation, and developer onboarding experience. Experience (required) in AWS serverless solution design & event-driven integration patterns. Experience (required) of working in the development of AWS cloud-native solutions. Experience of working with DevOps tools such as Jenkins, Bamboo, Git, or similar, for deployment purposes. Experience of various database paradigms including SQL & NoSQL. Solid understanding of security protocols and standards. Experience (required) with backend/compute languages delivering business value, such as TypeScript. Experience in automated testing principles. Deep understanding of the importance and principles of engineering excellence and demonstrating this knowledge in your work. Experience of feature or function design and delivery as part of an agile software development team (Scrum, Kanban, XP, etc.). Experience of working with Product Owners, customers, end-users, or stakeholders in the delivery of software, solutions, or products. Skills and Experience (desirable): Experience in integration design, development & delivery. Experience in Infrastructure as Code (AWS CDK ideally, Terraform, etc.). Experience in supporting, monitoring, and maintaining production-grade systems: investigation via observability tooling, e.g. Splunk, Datadog, AWS tooling.
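The serverless and event-driven items above revolve around small per-event handlers. A minimal sketch of an AWS Lambda-style handler follows; the `handler(event, context)` signature is Lambda's Python convention, while the payload and the API-Gateway-proxy-shaped response use illustrative field names, and the whole thing runs locally without AWS:

```python
import json

def handler(event, context):
    """Lambda-style entry point: validate an order event, return a proxy-style response."""
    try:
        body = json.loads(event.get("body") or "{}")
        order_id = body["order_id"]
    except (json.JSONDecodeError, KeyError, TypeError):
        return {"statusCode": 400, "body": json.dumps({"error": "order_id required"})}
    # Real code would publish to a queue or EventBridge here; this sketch just echoes.
    return {"statusCode": 200, "body": json.dumps({"accepted": order_id})}

# Local invocation with a fake event (context is unused, so None suffices here).
print(handler({"body": json.dumps({"order_id": "A-123"})}, None))
```

Keeping the handler a pure function of its event is what makes serverless code unit-testable before any deployment tooling gets involved.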

Posted 2 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Must have expertise in Windows Server, primarily IIS server administration. Required AWS cloud services include EC2, RDS, CloudWatch, CloudTrail, Elastic Load Balancing, Athena, S3, Route 53, Storage Gateway, and CodeCommit. Must manage several AWS accounts and multiple environments. Must have knowledge of tools such as Jenkins, Datadog, and Jira. Knowledge of PowerShell scripting is an added benefit. We need a technical lead/senior developer (T2) level for this position. Should be ready to work in shifts, as we have 2 shifts on a daily basis; this will be on a rota basis, however there are no night shifts. This is a support enhancement project. Since this is a support project, there will be no leaves, except for Pongal, Diwali, Christmas and New Year. Should be able to handle the work independently, as this is only a 3-member team. Should be able to liaise with the client and get the work done on a daily basis. Should have good communication & articulation skills. Candidates in the Chennai location are preferred, as the team sits together; if not, India; last option would be CMB. Please refer below for technical skills.

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Title: DevOps/MLOps Expert Location: Gurugram (On-Site) Employment Type: Full-Time Experience: 6 + years Qualification: B.Tech CSE About the Role We are seeking a highly skilled DevOps/MLOps Expert to join our rapidly growing AI-based startup building and deploying cutting-edge enterprise AI/ML solutions. This is a critical role that will shape our infrastructure, deployment pipelines, and scale our ML operations to serve large-scale enterprise clients. As our DevOps/MLOps Expert, you will be responsible for bridging the gap between our AI/ML development teams and production systems, ensuring seamless deployment, monitoring, and scaling of our ML-powered enterprise applications. You’ll work at the intersection of DevOps, Machine Learning, and Data Engineering in a fast-paced startup environment with enterprise-grade requirements. Key Responsibilities MLOps & Model Deployment • Design, implement, and maintain end-to-end ML pipelines from model development to production deployment • Build automated CI/CD pipelines specifically for ML models using tools like MLflow, Kubeflow, and custom solutions • Implement model versioning, experiment tracking, and model registry systems • Monitor model performance, detect drift, and implement automated retraining pipelines • Manage feature stores and data pipelines for real-time and batch inference • Build scalable ML infrastructure for high-volume data processing and analytics Enterprise Cloud Infrastructure & DevOps • Architect and manage cloud-native infrastructure with focus on scalability, security, and compliance • Implement Infrastructure as Code (IaC) using Terraform, CloudFormation, or Pulumi • Design and maintain Kubernetes clusters for containerized ML workloads • Build and optimize Docker containers for ML applications and microservices • Implement comprehensive monitoring, logging, and alerting systems • Manage secrets, security, and enterprise compliance requirements Data Engineering & Real-time Processing • Build 
and maintain large-scale data pipelines using Apache Airflow, Prefect, or similar tools • Implement real-time data processing and streaming architectures • Design data storage solutions for structured and unstructured data at scale • Implement data validation, quality checks, and lineage tracking • Manage data security, privacy, and enterprise compliance requirements • Optimize data processing for performance and cost efficiency Enterprise Platform Operations • Ensure high availability (99.9%+) and performance of enterprise-grade platforms • Implement auto-scaling solutions for variable ML workloads • Manage multi-tenant architecture and data isolation • Optimize resource utilization and cost management across environments • Implement disaster recovery and backup strategies • Build 24x7 monitoring and alerting systems for mission-critical applications Required Qualifications Experience & Education • 4-8 years of experience in DevOps/MLOps with at least 2+ years focused on enterprise ML systems • Bachelor’s/Master’s degree in Computer Science, Engineering, or related technical field • Proven experience with enterprise-grade platforms or large-scale SaaS applications • Experience with high-compliance environments and enterprise security requirements • Strong background in data-intensive applications and real-time processing systems Technical Skills Core MLOps Technologies • ML Frameworks: TensorFlow, PyTorch, Scikit-learn, Keras, XGBoost • MLOps Tools: MLflow, Kubeflow, Metaflow, DVC, Weights & Biases • Model Serving: TensorFlow Serving, PyTorch TorchServe, Seldon Core, KFServing • Experiment Tracking: MLflow, Neptune.ai, Weights & Biases, Comet DevOps & Cloud Technologies • Cloud Platforms: AWS, Azure, or GCP with relevant certifications • Containerization: Docker, Kubernetes (CKA/CKAD preferred) • CI/CD: Jenkins, GitLab CI, GitHub Actions, CircleCI • IaC: Terraform, CloudFormation, Pulumi, Ansible • Monitoring: Prometheus, Grafana, ELK Stack, Datadog, New Relic 
Programming & Scripting • Python (advanced) - primary language for ML operations and automation • Bash/Shell scripting for automation and system administration • YAML/JSON for configuration management and APIs • SQL for data operations and analytics • Basic understanding of Go or Java (advantage) Data Technologies • Data Pipeline Tools: Apache Airflow, Prefect, Dagster, Apache NiFi • Streaming & Real-time: Apache Kafka, Apache Spark, Apache Flink, Redis • Databases: PostgreSQL, MongoDB, Elasticsearch, ClickHouse • Data Warehousing: Snowflake, BigQuery, Redshift, Databricks • Data Versioning: DVC, LakeFS, Pachyderm Preferred Qualifications Advanced Technical Skills • Enterprise Security: Experience with enterprise security frameworks, compliance (SOC2, ISO27001) • High-scale Processing: Experience with petabyte-scale data processing and real-time analytics • Performance Optimization: Advanced system optimization, distributed computing, caching strategies • API Development: REST/GraphQL APIs, microservices architecture, API gateways Enterprise & Domain Experience • Previous experience with enterprise clients or B2B SaaS platforms • Experience with compliance-heavy industries (finance, healthcare, government) • Understanding of data privacy regulations (GDPR, SOX, HIPAA) • Experience with multi-tenant enterprise architectures Leadership & Collaboration • Experience mentoring junior engineers and technical team leadership • Strong collaboration with data science teams, product managers, and enterprise clients • Experience with agile methodologies and enterprise project management • Understanding of business metrics, SLAs, and enterprise ROI Growth Opportunities • Career Path: Clear progression to Lead DevOps Engineer or Head of Infrastructure • Technical Growth: Work with cutting-edge enterprise AI/ML technologies • Leadership: Opportunity to build and lead the DevOps/Infrastructure team • Industry Exposure: Work with Government & MNCs enterprise clients and 
cutting-edge technology stacks Success Metrics & KPIs Technical KPIs • System Uptime: Maintain 99.9%+ availability for enterprise clients • Deployment Frequency: Enable daily deployments with zero downtime • Performance: Ensure optimal response times and system performance • Cost Optimization: Achieve 20-30% annual infrastructure cost reduction • Security: Zero security incidents and full compliance adherence Business Impact • Time to Market: Reduce deployment cycles and improve development velocity • Client Satisfaction: Maintain 95%+ enterprise client satisfaction scores • Team Productivity: Improve engineering team efficiency by 40%+ • Scalability: Support rapid client base growth without infrastructure constraints Why Join Us Be part of a forward-thinking, innovation-driven company with a strong engineering culture. Influence high-impact architectural decisions that shape mission-critical systems. Work with cutting-edge technologies and a passionate team of professionals. Competitive compensation, flexible working environment, and continuous learning opportunities. How to Apply Please submit your resume and a cover letter outlining your relevant experience and how you can contribute to Aaizel Tech Labs’ success. Send your application to hr@aaizeltech.com , bhavik@aaizeltech.com or anju@aaizeltech.com.
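Of the MLOps responsibilities listed, drift detection is the most algorithmic. One common check, though by no means the only one, is the Population Stability Index over binned feature proportions; a minimal stdlib sketch follows, where the usual >0.2 alert threshold is an industry rule of thumb rather than a fixed standard:

```python
import math

def psi(expected: list, actual: list) -> float:
    """Population Stability Index between two binned distributions.

    Inputs are per-bin proportions that each sum to ~1. A small epsilon
    guards against log(0) for empty bins.
    """
    eps = 1e-6
    total = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)
        total += (a - e) * math.log(a / e)
    return total

baseline = [0.25, 0.25, 0.25, 0.25]   # training-time bin proportions
identical = [0.25, 0.25, 0.25, 0.25]
shifted = [0.10, 0.20, 0.30, 0.40]    # serving-time proportions

print(round(psi(baseline, identical), 4))  # exactly 0.0: no drift
print(round(psi(baseline, shifted), 4))
```

In an automated pipeline this comparison would run per feature on a schedule, with values above the alert threshold triggering the retraining workflow the posting describes.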

Posted 2 weeks ago

Apply

4.0 - 5.0 years

6 - 7 Lacs

Cochin

On-site

Job Title: Senior QA Engineer (Automation & Functional Testing)
Location: Kochi
Experience: 4-5 Years

Job Summary
We are looking for a Senior QA Engineer to lead the quality assurance efforts for enterprise-scale applications. This role involves test planning, execution, automation, and process improvements to ensure high-quality deliverables in an Agile environment.

Responsibilities
1) Lead test strategy, planning, and execution across multiple sprints and releases
2) Build and manage a robust regression and automation suite across CI/CD pipelines
3) Create and maintain clear QA documentation, user flows, and coverage reports
4) Actively participate in backlog grooming, sprint planning, and design discussions
5) Coordinate bug triage with PMs, designers, and developers
6) Define and track quality KPIs (bug escape rate, test ROI, post-prod defects)
7) Mentor junior QAs and evangelize best practices across teams
8) Drive continuous improvement initiatives (e.g., flaky test triage, data mocks, usability testing)
9) Act as the QA voice in ensuring that customer experience and edge cases are not missed

Required Skills
1) 4 to 5 years in QA or test engineering, preferably in fast-paced environments
2) Strong foundation in functional, regression, API, UI/UX, and exploratory testing
3) Hands-on with test automation tools like Cypress, Playwright, Appium, or similar
4) Experience writing test plans and cases tied to business or sprint goals
5) Excellent documentation habits and attention to detail
6) Ability to prioritize based on risk and release urgency
7) Comfortable pushing back on timelines when quality is at risk
8) Exposure to mobile/web test infrastructure and backend validations
9) Proactive communicator with cross-functional stakeholders

Good-to-Have Skills:
1) Experience with tools like TestRail, Zephyr, BrowserStack, Jira, Postman
2) Familiarity with monitoring tools (e.g., Sentry, Datadog) for post-release validation
3) Experience testing GraphQL APIs and microservices-based architectures
4) Background in usability testing or product instrumentation for feedback loops
5) Exposure to load, performance, or security testing frameworks

Success in this Role Looks Like:
1) No critical bugs escaping to production
2) QA confidence reports and checklists that guide decision-making
3) Documentation that lives and breathes with the product
4) Collaboration with PMs and designers to flag usability gaps early
5) Tight alignment with sprint and quarterly release goals
6) Mentorship and delegation within the QA team

Job Types: Full-time, Permanent
Pay: ₹50,000.00 - ₹60,000.00 per month
Work Location: In person
Application Deadline: 03/08/2025
Expected Start Date: 06/08/2025
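The API and backend-validation skills this role calls for can be sketched in a few lines. Below is a hedged, self-contained Python example of a contract-style regression check; the `validate_order_payload` helper and the order fields are hypothetical illustrations, not part of any posting above. Real suites would typically run such checks via pytest against responses from a test environment.

```python
# Hypothetical payload contract for a regression suite; field names
# and allowed statuses are illustrative assumptions, not a real API.
REQUIRED_FIELDS = {"order_id": str, "status": str, "total": (int, float)}

def validate_order_payload(payload: dict) -> list[str]:
    """Return a list of human-readable violations (empty list == pass)."""
    errors = []
    for field, expected in REQUIRED_FIELDS.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected):
            errors.append(f"wrong type for {field}")
    if payload.get("status") not in {"pending", "paid", "shipped", None}:
        errors.append(f"unexpected status: {payload.get('status')}")
    return errors

# A malformed payload surfaces every violation at once, which makes
# triage faster than failing on the first bad field.
bad = {"order_id": 42, "status": "paid"}
print(validate_order_payload(bad))  # → ['wrong type for order_id', 'missing field: total']
```

Checks like this slot naturally into a CI/CD regression stage, since they need no browser and fail with actionable messages.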

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Title: Associate Vice President - SRE, Digital Business
Location: Mumbai, Bengaluru

About Us
Sony LIV is a leading OTT platform revolutionizing the way audiences consume entertainment. With millions of users across the globe, our mission is to deliver seamless, high-quality, and reliable streaming experiences. We are looking for a Principal Site Reliability Engineer (SRE) to join our team and take ownership of ensuring the availability, scalability, and performance of our critical systems.

Job Summary
As a Principal SRE, you will be responsible for designing, building, and maintaining reliable and scalable infrastructure to support our OTT platform. You bring a developer's mindset coupled with extensive SRE experience and a passion for reliability and performance. You'll ensure smooth system operations, take ownership of application and infrastructure reliability, and have a strong support mindset to tackle critical incidents, even during off-hours. We're seeking a candidate with 8+ years of experience, a deep understanding of observability, and the ability to lead reliability initiatives across systems and teams.

Key Responsibilities
- Full System Ownership: Take complete responsibility for the availability, reliability, and performance of systems, including both application and infrastructure layers.
- Development & SRE Mindset: Leverage your experience as a developer and SRE to build tools, automation, and systems to improve system reliability and operational efficiency.
- Incident Management: Respond to and resolve critical system issues promptly, including being available for on-call support and handling emergencies during non-business hours, including late nights.
- Infrastructure Management: Design, deploy, and manage infrastructure solutions using containers (Docker/Kubernetes), networks, and CDNs to ensure scalability and performance.
- Observability: Drive best practices in observability, including metrics, logging, and tracing, to enhance system monitoring and proactive issue resolution. Implement and maintain observability tools like Prometheus, Grafana, the ELK stack, or Datadog.
- Reliability and Performance: Proactively identify areas for improvement in system reliability, performance, and scalability, and define strategies and best practices to address them.
- Collaboration and Communication: Work closely with cross-functional teams, including development, QA, and support, to align goals and improve operational excellence. Communicate effectively across teams and stakeholders.
- CI/CD and Automation: Build and enhance CI/CD pipelines to improve deployment reliability and efficiency. Automate repetitive tasks and processes wherever possible.
- Continuous Improvement: Stay up to date with the latest technologies and best practices in DevOps, SRE, and cloud computing. Apply them to improve existing systems and processes.

Required Skills and Experience
- Experience: 10+ years of experience in software development, DevOps, and SRE roles.
- Development Experience: Strong experience as a software developer with expertise in building scalable, distributed systems.
- SRE/DevOps Experience: Hands-on experience managing production systems, ensuring uptime, and improving system reliability.
- Technical Proficiency: Strong experience with containers (Docker, Kubernetes). In-depth understanding of networking concepts and CDNs (e.g., Akamai, CloudFront). Proficiency in infrastructure-as-code (IaC) tools like Terraform or CloudFormation. Expertise in cloud platforms such as AWS, GCP, or Azure.
- Observability Expertise: Proven experience in implementing and maintaining robust observability solutions, including monitoring, alerting, metrics, and tracing.
- Incident Handling: Proven ability to handle critical incidents, perform root cause analysis, and implement permanent fixes.
- Automation: Strong scripting/programming skills in Python, Go, or similar languages.
- Reliability Focus: Demonstrated passion for system reliability, scalability, and performance optimization.
- Soft Skills: Excellent communication, collaboration, and leadership skills. Ability to explain technical details to non-technical stakeholders.
- On-Call Readiness: Willingness to participate in a 24x7 on-call rotation and support critical systems during off-hours.

Preferred Qualifications
- Experience in OTT or video streaming platforms.
- Understanding of video delivery workflows, encoding, and adaptive bitrate streaming technologies.
- Experience working with hybrid or multi-cloud environments (on-premise and multiple clouds).
- Certifications in cloud platforms (AWS Certified Solutions Architect, Google Professional Cloud Architect, etc.).

Why join us?
Sony Pictures Networks is home to some of India’s leading entertainment channels such as SET, SAB, MAX, PAL, PIX, Sony BBC Earth, Yay!, Sony Marathi, Sony SIX, Sony TEN, Sony TEN1, SONY Ten2, SONY TEN3, SONY TEN4, to name a few! Our foray into the OTT space with one of the most promising streaming platforms, Sony LIV, brings us one step closer to being a progressive digitally-led content powerhouse. Our independent production venture, Studio Next, has already made its mark with original content and IPs for TV and Digital Media. But our quest to Go Beyond doesn’t end there. Neither does our search to find people who can take us there. We focus on creating an inclusive and equitable workplace where we celebrate diversity with our Bring Your Own Self Philosophy and are recognised as a Great Place to Work.
- Great Place to Work Institute: ranked as one of the Great Places to Work for 5 years running
- Included in the Hall of Fame as part of the Working Mother & Avtar Best Companies for Women in India study; ranked amongst the 100 Best Companies for Women in India
- ET Human Capital Awards 2021: winner across multiple categories
- Brandon Hall Group HCM Excellence Award: Outstanding Learning Practices
The biggest award, of course, is the thrill our employees feel when they can Tell Stories Beyond the Ordinary.
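The SLO-driven reliability work described in the SRE posting above comes down to error-budget math. A minimal Python sketch follows, with illustrative numbers (a 99.9% SLO over 30 days is a common example, not the platform's stated target):

```python
# Back-of-the-envelope error-budget math behind an availability SLO.
# All figures are illustrative assumptions for this sketch.

def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Allowed downtime in minutes for an SLO over a rolling window."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1.0 - slo)

def budget_remaining(slo: float, observed_downtime_min: float,
                     window_days: int = 30) -> float:
    """Fraction of the error budget still unspent (negative = blown)."""
    budget = error_budget_minutes(slo, window_days)
    return 1.0 - observed_downtime_min / budget

# A 99.9% SLO allows about 43.2 minutes of downtime per 30 days.
print(round(error_budget_minutes(0.999), 1))  # → 43.2
print(budget_remaining(0.999, 10.0))  # fraction left after 10 min of downtime
```

The remaining-budget fraction is what alerting and release-gating decisions ("freeze deploys when the budget is nearly spent") are typically keyed off.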

Posted 2 weeks ago

Apply

5.0 - 8.0 years

0 Lacs

Kochi, Kerala, India

On-site

5-8 years of experience in the eCommerce and/or OMS domain.
Should have good end-to-end knowledge of the various Commerce subsystems, including Storefront, Core Commerce back end, Post-Purchase processing, OMS, Store/Warehouse Management processes, and Supply Chain and Logistics processes. Proficiency in a minimum of two areas apart from eCommerce is mandatory.
Extensive backend development knowledge with core Java/J2EE and Microservice-based, event-driven architecture on a cloud-based architecture (preferably AWS).
Should be cognizant of key integrations undertaken in eCommerce and associated downstream subsystems, including but not limited to Search frameworks, Payment gateways, Product Lifecycle Management systems, Loyalty platforms, Recommendation engines, Promotion frameworks, etc.
Good knowledge of integrations with downstream eCommerce systems like OMS, Store Systems, ERP, etc. (any additional knowledge of the OMS and Store domains will be an added advantage).
Experience in Service-Oriented Architecture: developing, securely exposing, and consuming RESTful Web Services and integrating headless applications.
Should be able to understand the system end to end, maintain the application, and troubleshoot issues.
Good understanding of Data Structures and Entity models.
Should understand building, deploying, and maintaining server-based as well as serverless applications on the cloud, preferably AWS.
Expertise in integrating synchronously and asynchronously with third-party web services.
Good to have concrete knowledge of AWS Lambda functions, API Gateway, AWS CloudWatch, SQS, SNS, EventBridge, Kinesis, Secrets Manager, S3 storage, server architectural models, etc.
Knowledge of any major eCommerce/OMS platform will be an added advantage.
Must have working knowledge of Production Application Support.
Good knowledge of Agile methodology, CI/CD pipelines, code repos, and branching strategies, preferably with GitHub or Bitbucket.
Good knowledge of observability tools like New Relic, Datadog, Grafana, Splunk, etc., including configuring new reports, proactive alert settings, monitoring KPIs, etc.
Should have a fairly good understanding of L3 support processes, roles, and responsibilities.
Should work closely with counterparts in L1/L2 teams to monitor, analyze, and expedite issue resolutions, reduce repetitive work, automate SOPs, or proactively find avenues to do so.
Should be flexible to work hours overlapping with part of the onsite (PST) day in order to hand off/transition work to the onsite L3 counterpart.
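The third-party web-service integration skill the posting above asks for usually pairs with a retry-and-backoff policy on the caller's side. Here is a minimal sketch of the delay schedule in Python; the function name and defaults are illustrative, and the actual HTTP call is deliberately left out to keep the example offline:

```python
# Exponential backoff with "full jitter": each retry waits a random
# amount up to an exponentially growing (and capped) ceiling, which
# spreads retries out and avoids thundering-herd spikes.
import random

def backoff_schedule(retries: int, base: float = 0.5, cap: float = 30.0):
    """Yield one jittered delay (seconds) per retry attempt."""
    for attempt in range(retries):
        ceiling = min(cap, base * (2 ** attempt))
        yield random.uniform(0, ceiling)

# e.g. 5 attempts: delays drawn from ceilings 0.5s, 1s, 2s, 4s, 8s
delays = list(backoff_schedule(5))
print(len(delays))  # → 5
```

In practice each yielded delay would be passed to `time.sleep()` between attempts at the third-party call, with a final failure raised once the schedule is exhausted.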

Posted 2 weeks ago

Apply

5.0 - 8.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

5-8 years of experience in the eCommerce and/or OMS domain.
Should have good end-to-end knowledge of the various Commerce subsystems, including Storefront, Core Commerce back end, Post-Purchase processing, OMS, Store/Warehouse Management processes, and Supply Chain and Logistics processes. Proficiency in a minimum of two areas apart from eCommerce is mandatory.
Extensive backend development knowledge with core Java/J2EE and Microservice-based, event-driven architecture on a cloud-based architecture (preferably AWS).
Should be cognizant of key integrations undertaken in eCommerce and associated downstream subsystems, including but not limited to Search frameworks, Payment gateways, Product Lifecycle Management systems, Loyalty platforms, Recommendation engines, Promotion frameworks, etc.
Good knowledge of integrations with downstream eCommerce systems like OMS, Store Systems, ERP, etc. (any additional knowledge of the OMS and Store domains will be an added advantage).
Experience in Service-Oriented Architecture: developing, securely exposing, and consuming RESTful Web Services and integrating headless applications.
Should be able to understand the system end to end, maintain the application, and troubleshoot issues.
Good understanding of Data Structures and Entity models.
Should understand building, deploying, and maintaining server-based as well as serverless applications on the cloud, preferably AWS.
Expertise in integrating synchronously and asynchronously with third-party web services.
Good to have concrete knowledge of AWS Lambda functions, API Gateway, AWS CloudWatch, SQS, SNS, EventBridge, Kinesis, Secrets Manager, S3 storage, server architectural models, etc.
Knowledge of any major eCommerce/OMS platform will be an added advantage.
Must have working knowledge of Production Application Support.
Good knowledge of Agile methodology, CI/CD pipelines, code repos, and branching strategies, preferably with GitHub or Bitbucket.
Good knowledge of observability tools like New Relic, Datadog, Grafana, Splunk, etc., including configuring new reports, proactive alert settings, monitoring KPIs, etc.
Should have a fairly good understanding of L3 support processes, roles, and responsibilities.
Should work closely with counterparts in L1/L2 teams to monitor, analyze, and expedite issue resolutions, reduce repetitive work, automate SOPs, or proactively find avenues to do so.
Should be flexible to work hours overlapping with part of the onsite (PST) day in order to hand off/transition work to the onsite L3 counterpart.
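The "proactive alert settings" mentioned in the posting above usually mean a threshold sustained over several evaluation points, so a single noisy sample doesn't page anyone. The semantics are shown below as plain Python; this is an illustration of the concept, not a real Datadog or New Relic API:

```python
# Threshold alert with a sustained-duration condition: fire only when
# the last `points` samples of a metric all exceed the threshold.

def breaches(series: list[float], threshold: float, points: int) -> bool:
    """True when the most recent `points` samples all exceed `threshold`."""
    recent = series[-points:]
    return len(recent) == points and all(v > threshold for v in recent)

# Hypothetical per-minute error rates; alert on 3 sustained breaches of 1.0.
error_rate = [0.2, 0.4, 1.2, 1.5, 1.8]
print(breaches(error_rate, 1.0, 3))  # → True
```

Monitoring tools express the same idea declaratively (e.g. "above threshold for the last N minutes"); writing it out makes the trade-off explicit: a larger `points` window cuts false pages but delays detection.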

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Mohali district, India

On-site

Job Title: DevOps/MLOps Expert
Location: Mohali (On-Site)
Employment Type: Full-Time
Experience: 6+ years
Qualification: B.Tech CSE

About the Role
We are seeking a highly skilled DevOps/MLOps Expert to join our rapidly growing AI-based startup building and deploying cutting-edge enterprise AI/ML solutions. This is a critical role that will shape our infrastructure and deployment pipelines and scale our ML operations to serve large-scale enterprise clients. As our DevOps/MLOps Expert, you will be responsible for bridging the gap between our AI/ML development teams and production systems, ensuring seamless deployment, monitoring, and scaling of our ML-powered enterprise applications. You’ll work at the intersection of DevOps, Machine Learning, and Data Engineering in a fast-paced startup environment with enterprise-grade requirements.

Key Responsibilities

MLOps & Model Deployment
• Design, implement, and maintain end-to-end ML pipelines from model development to production deployment
• Build automated CI/CD pipelines specifically for ML models using tools like MLflow, Kubeflow, and custom solutions
• Implement model versioning, experiment tracking, and model registry systems
• Monitor model performance, detect drift, and implement automated retraining pipelines
• Manage feature stores and data pipelines for real-time and batch inference
• Build scalable ML infrastructure for high-volume data processing and analytics

Enterprise Cloud Infrastructure & DevOps
• Architect and manage cloud-native infrastructure with a focus on scalability, security, and compliance
• Implement Infrastructure as Code (IaC) using Terraform, CloudFormation, or Pulumi
• Design and maintain Kubernetes clusters for containerized ML workloads
• Build and optimize Docker containers for ML applications and microservices
• Implement comprehensive monitoring, logging, and alerting systems
• Manage secrets, security, and enterprise compliance requirements

Data Engineering & Real-time Processing
• Build and maintain large-scale data pipelines using Apache Airflow, Prefect, or similar tools
• Implement real-time data processing and streaming architectures
• Design data storage solutions for structured and unstructured data at scale
• Implement data validation, quality checks, and lineage tracking
• Manage data security, privacy, and enterprise compliance requirements
• Optimize data processing for performance and cost efficiency

Enterprise Platform Operations
• Ensure high availability (99.9%+) and performance of enterprise-grade platforms
• Implement auto-scaling solutions for variable ML workloads
• Manage multi-tenant architecture and data isolation
• Optimize resource utilization and cost management across environments
• Implement disaster recovery and backup strategies
• Build 24x7 monitoring and alerting systems for mission-critical applications

Required Qualifications

Experience & Education
• 4-8 years of experience in DevOps/MLOps with at least 2+ years focused on enterprise ML systems
• Bachelor’s/Master’s degree in Computer Science, Engineering, or a related technical field
• Proven experience with enterprise-grade platforms or large-scale SaaS applications
• Experience with high-compliance environments and enterprise security requirements
• Strong background in data-intensive applications and real-time processing systems

Technical Skills

Core MLOps Technologies
• ML Frameworks: TensorFlow, PyTorch, Scikit-learn, Keras, XGBoost
• MLOps Tools: MLflow, Kubeflow, Metaflow, DVC, Weights & Biases
• Model Serving: TensorFlow Serving, PyTorch TorchServe, Seldon Core, KFServing
• Experiment Tracking: MLflow, Neptune.ai, Weights & Biases, Comet

DevOps & Cloud Technologies
• Cloud Platforms: AWS, Azure, or GCP with relevant certifications
• Containerization: Docker, Kubernetes (CKA/CKAD preferred)
• CI/CD: Jenkins, GitLab CI, GitHub Actions, CircleCI
• IaC: Terraform, CloudFormation, Pulumi, Ansible
• Monitoring: Prometheus, Grafana, ELK Stack, Datadog, New Relic

Programming & Scripting
• Python (advanced): primary language for ML operations and automation
• Bash/Shell scripting for automation and system administration
• YAML/JSON for configuration management and APIs
• SQL for data operations and analytics
• Basic understanding of Go or Java (advantage)

Data Technologies
• Data Pipeline Tools: Apache Airflow, Prefect, Dagster, Apache NiFi
• Streaming & Real-time: Apache Kafka, Apache Spark, Apache Flink, Redis
• Databases: PostgreSQL, MongoDB, Elasticsearch, ClickHouse
• Data Warehousing: Snowflake, BigQuery, Redshift, Databricks
• Data Versioning: DVC, LakeFS, Pachyderm

Preferred Qualifications

Advanced Technical Skills
• Enterprise Security: Experience with enterprise security frameworks and compliance (SOC2, ISO27001)
• High-scale Processing: Experience with petabyte-scale data processing and real-time analytics
• Performance Optimization: Advanced system optimization, distributed computing, caching strategies
• API Development: REST/GraphQL APIs, microservices architecture, API gateways

Enterprise & Domain Experience
• Previous experience with enterprise clients or B2B SaaS platforms
• Experience with compliance-heavy industries (finance, healthcare, government)
• Understanding of data privacy regulations (GDPR, SOX, HIPAA)
• Experience with multi-tenant enterprise architectures

Leadership & Collaboration
• Experience mentoring junior engineers and technical team leadership
• Strong collaboration with data science teams, product managers, and enterprise clients
• Experience with agile methodologies and enterprise project management
• Understanding of business metrics, SLAs, and enterprise ROI

Growth Opportunities
• Career Path: Clear progression to Lead DevOps Engineer or Head of Infrastructure
• Technical Growth: Work with cutting-edge enterprise AI/ML technologies
• Leadership: Opportunity to build and lead the DevOps/Infrastructure team
• Industry Exposure: Work with Government and MNC enterprise clients and cutting-edge technology stacks

Success Metrics & KPIs

Technical KPIs
• System Uptime: Maintain 99.9%+ availability for enterprise clients
• Deployment Frequency: Enable daily deployments with zero downtime
• Performance: Ensure optimal response times and system performance
• Cost Optimization: Achieve 20-30% annual infrastructure cost reduction
• Security: Zero security incidents and full compliance adherence

Business Impact
• Time to Market: Reduce deployment cycles and improve development velocity
• Client Satisfaction: Maintain 95%+ enterprise client satisfaction scores
• Team Productivity: Improve engineering team efficiency by 40%+
• Scalability: Support rapid client-base growth without infrastructure constraints

Why Join Us
Be part of a forward-thinking, innovation-driven company with a strong engineering culture. Influence high-impact architectural decisions that shape mission-critical systems. Work with cutting-edge technologies and a passionate team of professionals. Competitive compensation, flexible working environment, and continuous learning opportunities.

How to Apply
Please submit your resume and a cover letter outlining your relevant experience and how you can contribute to Aaizel Tech Labs’ success. Send your application to hr@aaizeltech.com, bhavik@aaizeltech.com, or anju@aaizeltech.com.
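The model-drift monitoring responsibility in the MLOps posting above is often implemented with a distribution-shift statistic such as the Population Stability Index (PSI). Below is a minimal, dependency-free sketch; production setups would typically use a drift library and feature-store data rather than raw lists, and the thresholds quoted are a common rule of thumb, not a standard:

```python
# Population Stability Index between a baseline ("expected") score
# distribution and a live ("actual") one. Common rule of thumb:
# < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 major drift.
import math

def psi(expected: list[float], actual: list[float], bins: int = 4) -> float:
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def bin_fractions(xs: list[float]) -> list[float]:
        counts = [0] * bins
        for x in xs:
            counts[sum(x > e for e in edges)] += 1
        # tiny epsilon keeps log() finite when a bin is empty
        return [(c + 1e-6) / (len(xs) + bins * 1e-6) for c in counts]

    e, a = bin_fractions(expected), bin_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
print(round(psi(baseline, baseline), 6))  # → 0.0 (identical distributions)
```

In an automated retraining pipeline, a PSI above the chosen threshold would raise an alert or trigger a retraining job.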

Posted 2 weeks ago

Apply

4.0 years

18 - 30 Lacs

India

Remote

Job Title: Senior Golang Backend Developer
Company Type: IT Services Company
Employment Type: Full-Time
Location: Ahmedabad / Rajkot (Preferred) or 100% Remote (Open)
Experience Required: 4+ Years (Minimum 3.5 years of hands-on experience with Golang)

About The Role
We are hiring a Senior Golang Backend Developer for a leading service-based tech company based in Ahmedabad. If you're a passionate backend engineer who thrives on building scalable APIs, working on microservices architecture, and deploying applications using serverless frameworks on AWS, this role is for you! This is a full-time opportunity, and while we prefer candidates who can work from Ahmedabad or Rajkot, we're also open to 100% remote working for the right candidate.

Key Responsibilities
- Design, build, and maintain RESTful APIs and backend services using Golang
- Develop scalable solutions using Microservices Architecture
- Optimize system performance, reliability, and maintainability
- Work with AWS Cloud Services (Lambda, SQS, SNS, S3, DynamoDB, etc.) and implement Serverless Architecture
- Ensure clean, maintainable code through best practices and code reviews
- Collaborate with cross-functional teams for smooth integration and architecture decisions
- Monitor, troubleshoot, and improve application performance using observability tools
- Implement CI/CD pipelines and participate in Agile development practices

Required Skills & Experience
- 4+ years of total backend development experience
- 3.5+ years of strong, hands-on experience with Golang
- Proficient in designing and developing RESTful APIs
- Solid understanding and implementation experience of Microservices Architecture
- Proficient in AWS cloud services, especially Lambda, SQS, SNS, S3, and DynamoDB
- Experience with Serverless Architecture
- Familiarity with Docker, Kubernetes, GitHub Actions/GitLab CI
- Understanding of concurrent programming and performance optimization
- Experience with observability and monitoring tools (e.g., Datadog, Prometheus, New Relic, OpenTelemetry)
- Strong communication skills and ability to work in Agile teams
- Fluency in English communication is a must

Nice to Have
- Experience with Domain-Driven Design (DDD)
- Familiarity with automated testing frameworks (TDD/BDD)
- Prior experience working in distributed remote teams

Why You Should Apply
- Opportunity to work with modern tools and cloud-native technologies
- Flexibility to work remotely or from Ahmedabad/Rajkot
- Supportive, collaborative, and inclusive team culture
- Competitive salary with opportunities for growth and upskilling

Skills: Amazon SQS, cloud development, RESTful APIs, observability tools, cloud services, GitLab CI, GitHub Actions, Golang, Agile development, behavior-driven development (BDD), S3, AWS Lambda, microservices, Docker, Go (Golang), AWS, microservices architecture, serverless architecture, Amazon Web Services (AWS), Kubernetes, domain-driven design (DDD), backend development, cloud
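The concurrent-programming requirement in the Golang posting above is usually met in Go with goroutines and a channel-fed worker pool. The same pattern is sketched here in Python (the example language used throughout this page) with a thread pool draining a shared queue; the squaring step is a stand-in for real work such as an API call:

```python
# Worker-pool pattern: fan a fixed set of jobs out to N workers and
# collect results in arbitrary completion order.
import queue
import threading

def process_jobs(jobs, workers: int = 4) -> list[int]:
    q: queue.Queue = queue.Queue()
    for j in jobs:
        q.put(j)

    results: list[int] = []
    lock = threading.Lock()

    def worker() -> None:
        while True:
            try:
                j = q.get_nowait()
            except queue.Empty:
                return  # queue drained, worker exits
            r = j * j  # stand-in for real work (e.g. a remote call)
            with lock:  # results list is shared, so guard appends
                results.append(r)

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

print(sorted(process_jobs(range(5))))  # → [0, 1, 4, 9, 16]
```

In Go the queue would be a channel and the lock usually disappears (results flow back over a second channel), but the shape of the solution, bounded workers pulling from a shared source, is the same.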

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

We are seeking a Senior Software Engineer to join the global team. You have an uncanny knack for problem solving and a sharp product mindset. Our engineers are involved in all aspects of the software design process and create high-performing, scalable, and secure products, working alongside Product, Design, Quality, Platform, Security, and your own team to design right-size solutions for our Products. With deep knowledge of your domain and programming language, you not only understand how choices made in software design and architecture affect areas like performance, memory management, maintainability, and testability; you promote this knowledge to others within your team. You are hands-on and keep up to date with the latest technologies.

Role Specifics
Work Location: Hyderabad, Telangana, IN
Reporting Structure: Reports to Director, Engineering

What You Will Own
Collaborate with engineers, product managers, designers, and other stakeholders to meet customers’ needs
Contribute to code reviews and automated testing
Participate in daily stand-up and regular refinement and planning meetings as part of a team-based, agile/scrum environment
Work autonomously and pair up with team members when appropriate
What You'll Be Doing Contribute to the ongoing development and maintenance of our PHP platform. Actively participate in the strategic migration to Node.js, TypeScript, and React, designing and developing robust, scalable systems using modern practices. Write clean, maintainable code following SOLID principles and appropriate design patterns Implement comprehensive testing strategies and participate in thorough code reviews Debug and triage production issues with system integrations Optimize code and infrastructure for performance, scalability, and security Collaborate with team members in an agile/scrum environment to deliver high-quality solutions Required Qualifications 3+ years of commercial experience building production-grade software applications Strong proficiency in PHP (Laravel/Symfony), Node.js, TypeScript, and React. Demonstrated mastery of SOLID principles, design patterns, and clean code practices Expert-level debugging and triaging skills across complex distributed systems Experience implementing and maintaining comprehensive test suites (unit, integration, E2E) Proven ability to optimize applications for performance, scalability, and security Experience with database design and optimization (MySQL, MongoDB) Proficiency with version control workflows preferably git and collaborative development processes Excellent problem-solving skills with analytical and systematic approach Preferred Qualifications Experience with NestJS and NextJS frameworks. 
Knowledge of containerization (Docker) and Kubernetes orchestration Familiarity with AWS or Azure cloud services and infrastructure Experience implementing CI/CD pipelines and DevOps automation Understanding of observability tools (Datadog APM, logging, monitoring) Experience integrating with third-party APIs and enterprise systems Familiarity with infrastructure as code (Terraform, CloudFormation) Experience with system performance profiling and optimization techniques Background in implementing scalable microservices architectures Even if you don't have all of the Preferred Qualifications listed above, but feel you have what it takes to succeed in the role, we would love to hear from you!

Posted 2 weeks ago

Apply

3.0 - 9.0 years

0 Lacs

Bengaluru, Karnataka

On-site

It’s not just about your career or job title… It’s about who you are and the impact you will make on the world. Because whether it’s for each other or our customers, we put People First. When our people come together, we Expand the Possible and continuously look for ways to improve what we create and how we do it. If you are constantly striving to grow, you’re in good company. We are revolutionizing the way the world moves for future generations, and we want someone who is ready to move with us.

Who are we?
Wabtec Corporation is a leading global provider of equipment, systems, digital solutions, and value-added services for freight and transit rail as well as the mining, marine, and industrial markets. Drawing on nearly four centuries of collective experience across Wabtec, GE Transportation, and Faiveley Transport, the company has grown to become One Wabtec, with unmatched digital expertise, technological innovation, and world-class manufacturing and services, enabling the digital-rail-and-transit ecosystems. Wabtec is focused on performance that drives progress and unlocks our customers’ potential by delivering innovative and lasting transportation solutions that move and improve the world. We are lifelong learners obsessed with making things better to drive exceptional results. Wabtec has approximately 27K employees in facilities throughout the world. Visit our website to learn more!
Job Title: Sr Software Engineer Business Unit: WabtecOne, CTO Company: Wabtec Corporation Location: Bangalore, Karnataka, India Job Overview: The Software Engineer will be responsible for developing technological solutions in the WabtecOne space. As part of this role, the Software Engineer would be working closely with the Functional and Technical leads across various applications, COE, and Infrastructure teams to deliver the right business outcomes. Basic Qualifications: Master's/Bachelor of Engineering in Computer Science or Information Science or Electronics or Electrical & Electronics with 6 to 9 years of relevant software industry experience in ASP.Net, Angular, Design Patterns & Microservices. Minimum 3 years of working experience on cloud-based applications Essential Job Functions/Responsibilities: Design, develop, test, deploy, and maintain cloud-based software solutions. Continuously improve software using Scrum and Agile development methodologies, including daily standups and iteration planning meetings. Skilled in analyzing complex problems, documenting problem statements, and estimating efforts. Able to take ownership of small- and medium-sized tasks, delivering results while mentoring and supporting team members. Capable of evaluating the implications of technology decisions. Proficient in writing high-quality code that meets standards and delivers the intended functionality using the selected technology. Ensure code quality through best practices, unit testing, and code quality automation. Demonstrates initiative in exploring alternative technologies and approaches to solving problems. Advocate for transparency by proactively sharing design choices with relevant audiences, ensuring appropriate detail and timeliness. Exhibits expert understanding of functional and nonfunctional requirements and their prioritization within the backlog. Work independently while closely collaborating with architects and technical leaders. 
Possess strong organizational skills to support the business in developing global products that span various technologies and deployment models. Adaptable to various technologies, working with teams to deliver cutting-edge solutions.

Technical Expertise:
Preference will be given to candidates with 5 or more years of experience developing web-based solutions involving programming languages and web application platforms, specifically Angular with ASP.NET Core.
Preference will be given to candidates with experience developing applications with both relational and NoSQL databases such as PostgreSQL, Elasticsearch, and Cassandra.
Preference will be given to candidates with experience developing and supporting applications in an Amazon Web Services (AWS) cloud environment.
Preference will be given to candidates with experience with Docker, Kubernetes, and monitoring tools such as Datadog to handle deployment and operation of applications.
Preference will be given to candidates with experience developing authentication solutions using OAuth 2.0.
3+ years building responsive web applications for desktop and mobile devices.
3+ years with HTML, CSS, TypeScript, responsive design and JavaScript 3+ years integrating web applications with back-end services 3+ years working with Git or similar source code control Desired Skills: Passionate about software development and relevant technologies Innovative problem solver that exhibits a positive, energetic attitude Effective oral and written communication skills; ability to articulate clearly and concisely Fluent in written and spoken English Self-motivated with proven ability to work independently Team player that works well with others Works efficiently and productively with global teams and team members Results-oriented with a clear focus on quality High degree of attention to detail Our Commitment to Embrace Diversity: Wabtec is a global company that invests not just in our products, but also our people by embracing diversity and inclusion. We care about our relationships with our employees and take pride in celebrating the variety of experiences, expertise, and backgrounds that bring us together. At Wabtec, we aspire to create a place where we all belong and where diversity is welcomed and appreciated. To fulfill that commitment, we rely on a culture of leadership, diversity, and inclusion. We aim to employ the world’s brightest minds to help us create a limitless source of ideas and opportunities. We have created a space where everyone is given the opportunity to contribute based on their individual experiences and perspectives and recognize that these differences and diverse perspectives make us better. We believe in hiring talented people of varied backgrounds, experiences, and styles… People like you! Wabtec Corporation is committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or expression, or protected Veteran status. 
If you have a disability or special need that requires accommodation, please let us know.

Who are we?
Wabtec Corporation is a leading global provider of equipment, systems, digital solutions, and value-added services for freight and transit rail as well as the mining, marine, and industrial markets. Drawing on nearly four centuries of collective experience across Wabtec, GE Transportation, and Faiveley Transport, the company has grown to become One Wabtec, with unmatched digital expertise, technological innovation, and world-class manufacturing and services, enabling the digital-rail-and-transit ecosystems. Wabtec is focused on performance that drives progress and unlocks our customers’ potential by delivering innovative and lasting transportation solutions that move and improve the world. We are lifelong learners obsessed with making things better to drive exceptional results. Wabtec has approximately 27K employees in facilities throughout the world. Visit our website to learn more! http://www.WabtecCorp.com
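The listing above asks for experience building authentication solutions with OAuth 2.0, which in a web application usually means the authorization-code flow. As a minimal sketch (all endpoint, client, and scope values below are hypothetical placeholders, not tied to any real provider), the first step of that flow is constructing the authorization request URL:

```python
from urllib.parse import urlencode

def build_authorization_url(auth_endpoint: str, client_id: str,
                            redirect_uri: str, scope: str, state: str) -> str:
    """Build the user-facing authorization URL for the OAuth 2.0
    authorization-code grant (RFC 6749, section 4.1.1)."""
    params = {
        "response_type": "code",   # request an authorization code
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,
        "state": state,            # CSRF-protection value, echoed back by the server
    }
    return f"{auth_endpoint}?{urlencode(params)}"

# Hypothetical example values:
url = build_authorization_url(
    "https://auth.example.com/authorize",
    client_id="my-client",
    redirect_uri="https://app.example.com/callback",
    scope="openid profile",
    state="xyz123",
)
```

The browser is redirected to this URL; the server later returns a one-time code to the `redirect_uri`, which the back end exchanges for tokens.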

Posted 2 weeks ago

Apply

0 years

0 Lacs

Itanagar, Arunachal Pradesh, India

Remote

Data Engineer

We are looking for an experienced Data Engineer with strong architect-level experience in Azure-based solutions. The candidate should have a solid understanding of API management, Azure serverless components, and monitoring tools like Datadog. The ideal engineer will work on remote assignments, supporting enterprise data flow, transformation, and integration tasks.

Key Responsibilities:
Design and develop scalable data integration pipelines using Azure services such as Logic Apps and Azure Functions.
Define and implement API specifications using Swagger or similar tools.
Architect and optimize API Management (APIM) solutions in Azure.
Collaborate with cross-functional teams to ensure seamless integration of data services.
Work on infrastructure monitoring and performance tuning using Datadog.
Maintain security configurations, including Network Security Groups (NSGs), ensuring secure data transmission.
Provide documentation and technical specifications for all implementations.
Participate in code reviews, testing, and deployment activities for data engineering solutions.

Skills Required:
Cloud Platform: Microsoft Azure
Integration: Azure Logic Apps, Azure Functions
API Management: Azure API Management (APIM), Swagger / OpenAPI
Monitoring: Datadog (architect-level expertise required)
Security: Network Security Groups (NSG)
Documentation: API specifications, design docs

(ref:hirist.tech)
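For the Datadog monitoring duties this listing describes, a common integration point is submitting custom pipeline metrics over HTTP. As a minimal sketch, assuming the payload shape of Datadog's v2 metrics series API (the metric name and tags are illustrative, and the shape should be verified against current Datadog documentation before use), the request body can be assembled like this:

```python
import json
import time

def build_series_payload(metric: str, value: float, tags: list) -> dict:
    """Assemble a request body for submitting one gauge point to a
    Datadog-style metrics intake endpoint (shape assumed from the
    v2 /api/v2/series API)."""
    return {
        "series": [{
            "metric": metric,
            "type": 3,  # 3 = gauge in the v2 API
            "points": [{"timestamp": int(time.time()), "value": value}],
            "tags": tags,
        }]
    }

# Illustrative metric and tag values for a data-ingest pipeline:
payload = build_series_payload(
    "pipeline.rows_processed", 1250.0,
    tags=["env:dev", "pipeline:ingest"],
)
body = json.dumps(payload)  # would be POSTed with a DD-API-KEY header
```

Keeping the payload construction in a small pure function like this makes it easy to unit-test the metric shape without touching the network.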

Posted 2 weeks ago

Apply