Jobs
Interviews

2895 Datadog Jobs

Set up a job alert
JobPe aggregates results for easy access, but you apply directly on the job portal.

2.0 years

5 - 8 Lacs

Thiruvananthapuram

On-site

You are passionate about quality and how customers experience the products you test. You have the ability to create, maintain, and execute test plans in order to verify requirements. As a Quality Engineer at Equifax, you will be a catalyst in both the development and the testing of high-priority initiatives. You will develop and test new products to support technology operations while maintaining exemplary standards. As a collaborative member of the team, you will deliver QA services (code quality, testing services, performance engineering, development collaboration, and continuous integration). You will conduct quality control tests to ensure full compliance with specified standards and end-user requirements. You will execute tests using established plans and scripts, document problems in an issues log, and retest to ensure problems are resolved. You will create test files to thoroughly test program logic and verify system flow. You will identify, recommend, and implement changes to enhance the effectiveness of QA strategies.

What you will do
- With limited oversight, use your experience and knowledge of testing and testability to influence better software design; promote proper engineering practice, bug-prevention strategies, testability, accessibility, privacy, and other advanced quality concepts across solutions
- Develop test strategies, automate tests using test frameworks, and write moderately complex code/scripts to test solutions, products, and systems
- Monitor product development and usage at all levels with an eye for product quality
- Create test harnesses and infrastructure as necessary
- Demonstrate an understanding of test methodologies, writing test plans, creating test cases, and debugging

What experience you need
- Bachelor's degree in a STEM major or equivalent experience
- 2-5 years of software testing experience
- Able to create automated tests based on functional and non-functional requirements
- Self-starter who identifies and responds to priority shifts with minimal supervision
- Software build management tools like Maven or Gradle
- Software testing tools like Cucumber, Selenium
- Software testing, performance, and quality engineering techniques and strategies
- Testing technologies: JIRA, Confluence, Office products
- Cloud technology: GCP, AWS, or Azure
- Cloud certification strongly preferred

What could set you apart
- Experience with cloud-based testing environments (AWS, GCP)
- Hands-on experience working in Agile environments
- Knowledge of API testing tools (Bruno, Swagger) and of SOAP API testing using SoapUI
- Certification in ISTQB or similar, or Google Cloud certification
- Experience with cutting-edge tools and technologies: familiarity with the latest tools and technologies such as AI, machine learning, and cloud computing
- Expertise with cross-device testing strategies and automation via device clouds
- Experience monitoring and developing resources
- Excellent coding and analytical skills
- Experience with performance engineering and profiling (e.g., Java JVM, databases) and tools such as LoadRunner, JMeter, Gatling
- Exposure to application performance monitoring tools like Grafana and Datadog
- Ability to create good acceptance and integration test automation scripts and integrate with continuous integration (Jenkins) and code coverage tools (Sonar) to ensure 80% or higher code coverage
- Experience working in a TDD/BDD environment using technologies such as JUnit, Rest Assured, Appium, Gauge/Cucumber frameworks, and APIs (REST/SOAP)
- Understanding of continuous delivery concepts and tools including Jenkins, and vulnerability tools such as Sonar, Fortify, etc.
- Experience with LambdaTest for cross-browser testing
- A good understanding of Git version control, including branching strategies, merging, and conflict resolution
- Be viewed as a lead across the team, engaging and energizing teams to achieve aggressive goals
- Ensure enforcement of testing policies, standards, and guidelines to drive a consistent testing framework across the business
- Demonstrate an understanding of test methodologies, writing test plans/test strategies, creating test cases, defect reporting, and debugging
- Define test cases and create scripts based on assessment and understanding of product specifications and the test plan
- Automate defined test cases and test suites per project and plan
- Develop test automation using automation frameworks
- Conduct rigorous testing to validate product functionality per the test plan, and record testing results and defects in the test management tool, JIRA
- Create defects as a result of test execution with correct severity and priority
- Conduct functional and non-functional testing, analyzing performance metrics and identifying bottlenecks to optimize system performance
- Collaborate with peers, Product Owners, and the Test Lead to understand product functionality and specifications to create effective test cases and test automation
- Collaborate with development teams to integrate automated tests into the CI/CD pipeline
- Participate in security testing activities to identify and mitigate vulnerabilities
- Maintain thorough and accurate quality reports/metrics and dashboards to ensure visibility of product quality, builds, and environments
- Ensure communications are thorough and accurate for all work documentation, including status updates
- Review all requirements/acceptance criteria to assure completeness and coverage
- Be actively involved in root cause analysis and problem-solving activities to prevent defects and improve product quality; propose and implement process improvements to enhance the overall quality assurance process; work with team leads to track and determine prioritization of defect fixes
- BS or MS degree in Computer Science or Business, or equivalent job experience, required
- 4+ years of software testing and automation experience
- Expertise in programming languages like core Java, Python, or JavaScript
- Able to create automated tests based on functional and non-functional requirements
- Ability to write, debug, and troubleshoot code in Java, Spring Boot, TypeScript/JavaScript, HTML, CSS
- Understanding of SQL and experience working with databases like MySQL, PostgreSQL, or Oracle
- Good understanding of software development methodologies (preferably Agile) and testing methodologies
- Proficiency in working with test automation frameworks created for web and API automation using Selenium, Appium, TestNG, Rest Assured, Karate, Gauge, Cucumber, Bruno
- Experience with performance testing tools: JMeter, Gatling
- Knowledge of security testing concepts
- Strong analytical and problem-solving skills
- Excellent written and verbal communication skills
- Ability to lead and motivate teams
- Self-starter who identifies and responds to priority shifts with minimal supervision
- Software build management tools like Maven or Gradle
- Testing technologies: JIRA, Confluence, Office products
- Knowledge of test management tool: Zephyr
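The QA role above centers on creating automated tests from functional requirements. As an illustration only (the posting's stack is Rest Assured/Karate in Java; the endpoint shape and field names here are hypothetical), a response-contract check of this kind can be sketched in plain Python:

```python
# Sketch of an automated API acceptance check for a hypothetical
# GET /users/{id} endpoint: verify status code, required fields, and types.
def check_user_response(status_code, payload):
    """Return a list of contract violations; empty list means the check passed."""
    errors = []
    if status_code != 200:
        errors.append(f"expected 200, got {status_code}")
    for field, ftype in (("id", int), ("email", str), ("active", bool)):
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], ftype):
            errors.append(f"{field}: expected {ftype.__name__}")
    return errors

# Run against a canned response, as a CI test would
sample = {"id": 42, "email": "a@example.com", "active": True}
assert check_user_response(200, sample) == []
assert check_user_response(404, {}) != []
```

In a real suite the canned payload would be replaced by an HTTP call, and each violation list would be reported through the test runner.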

Posted 9 hours ago

Apply

5.0 - 12.0 years

4 - 8 Lacs

Chennai

On-site

Position Type: Full time
Type of Hire: Experienced (relevant combo of work and education)
Education Desired: Bachelor of Computer Engineering
Travel Percentage: 0%

Site Reliability Engineer

Are you curious, motivated, and forward-thinking? At FIS you’ll have the opportunity to work on some of the most challenging and relevant issues in financial services and technology. Our talented people empower us, and we believe in being part of a team that is open, collaborative, entrepreneurial, passionate and above all fun.

Work Location: Chennai - Ambattur, Hybrid (two days in-office, three days virtual)

What you will be doing:
The Site Reliability Engineer will play a critical role in driving innovation and growth for the Banking Solutions, Payments and Capital Markets business. In this role, the candidate will have the opportunity to make a lasting impact on the company's transformation journey, drive customer-centric innovation and automation, and position the organization as a leader in the competitive banking, payments and investment landscape. Specifically, the Site Reliability Engineer will be responsible for the following:
- Design and maintain monitoring solutions for infrastructure, application performance, and user experience
- Implement automation tools to streamline tasks, scale infrastructure, and ensure seamless deployments
- Ensure application reliability, availability, and performance, minimizing downtime and optimizing response times
- Lead incident response, including identification, triage, resolution, and post-incident analysis
- Conduct capacity planning, performance tuning, and resource optimization
- Collaborate with security teams to implement best practices and ensure compliance
- Manage deployment pipelines and configuration management for consistent and reliable app deployments
- Develop and test disaster recovery plans and backup strategies
- Collaborate with development, QA, DevOps, and product teams to align on reliability goals and incident response processes
- Participate in on-call rotations and provide 24/7 support for critical incidents

What you bring:
- 5 to 12 years of proficiency in development technologies, architectures, and platforms (web, API)
- Experience with cloud platforms (AWS, Azure, Google Cloud) and IaC tools
- Knowledge of monitoring tools (Prometheus, Grafana, Datadog) and logging frameworks (Splunk, ELK Stack)
- Experience in incident management and post-mortem reviews
- Strong troubleshooting skills for complex technical issues
- Proficiency in scripting languages (Python, Bash) and automation tools (Terraform, Ansible)
- Experience with CI/CD pipelines (Jenkins, GitLab CI/CD, Azure DevOps)
- Ownership approach to engineering and product outcomes
- Excellent interpersonal communication, negotiation, and influencing skills

What we offer you:
- A work environment built on collaboration, flexibility and respect
- Competitive salary and an attractive range of benefits designed to help support your lifestyle and wellbeing
- Varied and challenging work to help you grow your technical skillset

Privacy Statement
FIS is committed to protecting the privacy and security of all personal information that we process in order to provide services to our clients. For specific information on how FIS protects personal information online, please see the Online Privacy Notice.

Sourcing Model
Recruitment at FIS works primarily on a direct sourcing model; a relatively small portion of our hiring is through recruitment agencies. FIS does not accept resumes from recruitment agencies which are not on the preferred supplier list and is not responsible for any related fees for resumes submitted to job postings, our employees, or any other part of our company. #pridepass
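The SRE responsibilities above include monitoring and automation scripting in Python. A minimal sketch of a common building block, a health probe retried with exponential backoff (the probe function here is a simulated stand-in, not any particular service check):

```python
import time

def check_with_backoff(probe, attempts=4, base_delay=0.01):
    """Retry a health probe with exponential backoff; True on first success."""
    for attempt in range(attempts):
        if probe():
            return True
        time.sleep(base_delay * (2 ** attempt))  # 0.01s, 0.02s, 0.04s, ...
    return False

# Simulated flaky service: fails twice, then recovers
calls = {"n": 0}
def flaky_probe():
    calls["n"] += 1
    return calls["n"] >= 3

assert check_with_backoff(flaky_probe) is True
assert calls["n"] == 3  # succeeded on the third attempt
```

Backoff prevents a recovering service from being hammered by retries; in production the probe would be an HTTP or TCP check and failures past the last attempt would page on-call.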

Posted 9 hours ago

Apply

5.0 years

5 - 7 Lacs

Noida

On-site

We are looking for candidates with 5+ years of experience in the IT industry and strong AWS skills. Additionally, candidates should have solid knowledge of:
- SQL
- Splunk
- APM tools (AppDynamics and Datadog preferred)
- DevOps tools

We are primarily hiring for a Cloud Support team, with a preference for candidates who have developer-level skills. The team will start with L1 support, gradually moving to L2, and eventually taking on L3 work in the long term.

Key Responsibilities:
- Build and maintain CI/CD pipelines to ensure fast, safe, and reliable code deployment
- AWS infrastructure administration and management (VPC, EC2, S3, ELB, EBS, Route 53, ASM, etc.)
- Kubernetes cluster management, including creating new kops clusters and building/deploying Secrets, ConfigMaps, and Docker-based containerized microservices
- Monitoring and alerting on infrastructure resources and application availability using APM and other monitoring tools
- Monitor system performance and availability, ensuring reliability, scalability, and security
- Automate routine tasks and optimize processes to improve efficiency
- Manage containers and orchestration tools (e.g., Docker, Kubernetes)
- Troubleshoot and resolve issues in development, test, and production environments
- Ensure security best practices across the infrastructure and during application deployment
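AWS administration work of the kind listed above is commonly automated with boto3. As a sketch, here is a tag-compliance check written against the shape of a boto3 `describe_instances()` response body; the canned data stands in for a live AWS call, and the required tag names are illustrative assumptions:

```python
def untagged_instances(reservations, required=("Owner", "Environment")):
    """Return IDs of instances missing any required tag, given the
    'Reservations' list from a boto3 EC2 describe_instances() response."""
    missing = []
    for res in reservations:
        for inst in res.get("Instances", []):
            tags = {t["Key"] for t in inst.get("Tags", [])}
            if not set(required) <= tags:
                missing.append(inst["InstanceId"])
    return missing

# Canned response data standing in for ec2.describe_instances()["Reservations"]
sample = [{"Instances": [
    {"InstanceId": "i-abc123", "Tags": [{"Key": "Owner", "Value": "ops"}]},
    {"InstanceId": "i-def456", "Tags": [{"Key": "Owner", "Value": "ops"},
                                        {"Key": "Environment", "Value": "prod"}]},
]}]
assert untagged_instances(sample) == ["i-abc123"]
```

In practice this would run on a schedule and feed an alerting tool, flagging resources that escape cost-allocation or environment tagging policies.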

Posted 9 hours ago

Apply

6.0 years

0 Lacs

India

Remote

Spreetail propels brands to increase their ecommerce market share across the globe while improving their operational costs. Learn how we are building one of the fastest-growing ecommerce companies in history: www.spreetail.com

Our Software & Technology teams build scalable, reliable, and cutting-edge software to propel Spreetail to being a top ecommerce company. We are seeking motivated individuals who are passionate about learning new technologies and building software to build a monster ecommerce company. If you are looking for an environment that provides creative freedom, work-life balance, and meaningful relationships, keep scrolling down.

This position is remote in the country of India and will work Monday-Friday during India hours.

How You Will Achieve Success
- Backend Development: Expert-level Python development, with a deep understanding of building high-performance applications, preferably with frameworks like FastAPI.
- Modern Data Engineering Stack: Advanced, hands-on experience with DBT, Snowflake, S3, and AWS Glue. You know how to build, test, and optimize complex data transformations.
- Distributed Systems & Infrastructure: Deep knowledge of Kafka, Redis, and PostgreSQL. Your experience is not just in using them, but in optimizing and scaling them to handle millions of records daily.
- Performance Optimization: Proven experience scaling systems, diagnosing bottlenecks, and implementing solutions (e.g., caching, query optimization, consumer scaling) for high-volume data environments.
- API Design & Data-Intensive Applications: A strong background in building robust, high-throughput APIs designed for consuming large, processed datasets.
- Technical Leadership: Experience mentoring other engineers on projects and leading technical efforts across teams.
What Experiences Will Help You In This Role
- 5–6 years of engineering experience, including developing and operating large-scale web scraping systems using tools like Selenium, Playwright, Beautiful Soup, or Puppeteer for data extraction at scale
- Build and maintain high-performance backend services using Python/FastAPI, integrating with Kafka or RabbitMQ, and exposing clean, reliable APIs
- Leverage deep experience with PostgreSQL, including query optimization and index tuning, to support high-volume read/write operations
- Design and optimize scalable data pipelines using DBT and AWS Glue (or similar frameworks) for large-scale data ingestion, transformation, and storage across cloud infrastructure
- Ensure end-to-end observability and system health through proactive monitoring, alerting, and debugging using tools like Datadog and centralized logging
- Lead architectural reviews and mentor peers, collaborating cross-functionally with analysts, PMs, and DevOps to deliver reliable, scalable platforms

This is a remote position and requires candidates to have an available work-from-home setup.

Desktop/Laptop System Requirements
- 4th generation or higher, at least Intel i3 or equivalent processor; at least 4GB RAM; Windows 10 and above or macOS operating system
- You are required to provide your own dual monitors
- A strong and stable internet connection (DSL, cable, or fiber wired internet service with a 10 Mbps plan or higher for the primary connection)
- PC headset
- A high-definition (HD) external or integrated webcam with at least 720p resolution

Please be aware of scammers. Spreetail will only contact you through Lever or the spreetail.com domain. Spreetail will never ask candidates for money during the recruitment process. Please reach out to careers@spreetail.com directly if you have any concerns. Emails from @spreetailjobs.com are fraudulent.
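The posting above lists caching as one of the scaling levers for a hot read path. A minimal read-through cache sketch using Python's standard library (`fetch_product` and its data are simulated stand-ins for a PostgreSQL or Redis lookup):

```python
from functools import lru_cache

# Counter lets us observe how many times the "database" is actually hit
db_calls = {"n": 0}

@lru_cache(maxsize=1024)
def fetch_product(product_id: int) -> tuple:
    """Simulated expensive DB read; repeat reads are served from the cache."""
    db_calls["n"] += 1
    return (product_id, f"product-{product_id}")

fetch_product(1)
fetch_product(1)   # cache hit: no second DB call
fetch_product(2)

assert db_calls["n"] == 2
assert fetch_product.cache_info().hits == 1
```

An in-process LRU is the simplest tier; for a multi-instance service the same read-through pattern is typically moved to a shared cache like Redis, with an explicit invalidation or TTL strategy.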

Posted 9 hours ago

Apply

2.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

The Database Engineer will be actively involved in the evaluation, review, and management of databases. You will be part of a team that supports a range of applications and databases. You should be well versed in database administration, which includes installation, performance tuning, and troubleshooting. A strong candidate will be able to rapidly troubleshoot complex technical problems under pressure and implement solutions that are scalable, while managing multiple customer groups.

What You Will Do
- Support large-scale enterprise data solutions with a focus on high availability, low latency and scalability
- Provide documentation and automation capabilities for disaster recovery as part of application deployment
- Build infrastructure-as-code (IaC) patterns that meet security and engineering standards using one or more technologies (Terraform, scripting with cloud CLI, and programming with cloud SDK)
- Build CI/CD pipelines for build, test and deployment of application and cloud architecture patterns, using platform (Jenkins) and cloud-native toolchains
- Configure monitoring solutions and create dashboards (DPA, Datadog, BigPanda, Prometheus, Grafana, Log Analytics, ChaosSearch)

What Experience You Need
- BS degree in Computer Science or a related technical field involving coding (e.g., physics or mathematics), or equivalent job experience
- 2-5 years of experience in database administration, system administration, performance tuning and automation
- 1+ years of experience developing and/or administering software in public cloud
- Experience managing traditional databases like SQL Server/Oracle/Postgres/MySQL and providing 24x7 support
- Experience implementing and managing Infrastructure as Code (e.g. Terraform, Python, Chef) and source code repositories (GitHub)
- Demonstrable cross-functional knowledge of systems, storage, networking, security and databases
- Experience designing and building production data pipelines from data ingestion to consumption within a hybrid big data architecture, using cloud-native GCP, Java, Python, Scala, SQL, etc.
- Proficiency with continuous integration and continuous delivery tooling and practices
- Cloud certification strongly preferred

What Could Set You Apart
An ability to demonstrate successful performance of our Success Profile skills, including:
- Automation: Uses knowledge of best practices in coding to build pipelines for the build, test and deployment of processes/components; understands technology trends and uses that knowledge to identify factors that can be used to automate system/process deployments
- Data/Database Management: Uses knowledge of database operations and applies engineering skills to improve the resilience of products/services; designs, codes, verifies, tests, documents, and modifies programs/scripts and integrated software services; applies industry best standards and tools to achieve a well-engineered result
- Operational Excellence: Prioritizes and organizes own work; monitors and measures systems against key metrics to ensure availability of systems; identifies new ways of working to make processes run smoother and faster
- Technical Communication/Presentation: Explains technical information and its impacts to stakeholders and articulates the case for action; demonstrates strong written and verbal communication skills
- Troubleshooting: Applies a methodical approach to routine issue definition and resolution; monitors actions to investigate and resolve problems in systems, processes and services; determines problem fixes/remedies; assists with the implementation of agreed remedies and preventative measures; analyzes patterns and trends
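Performance tuning of the kind this role describes usually starts with reading query plans before and after adding an index. A self-contained sketch using SQLite as a stand-in for Postgres/MySQL (table and column names are illustrative; on Postgres the equivalent is `EXPLAIN ANALYZE`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, total REAL)")
con.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                [(i % 100, i * 1.5) for i in range(1000)])

def plan(sql):
    """Concatenate the 'detail' column of EXPLAIN QUERY PLAN output."""
    return " ".join(row[3] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

before = plan("SELECT * FROM orders WHERE customer_id = 7")
con.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")
after = plan("SELECT * FROM orders WHERE customer_id = 7")

assert "SCAN" in before          # full table scan without the index
assert "USING INDEX" in after    # index lookup once idx_orders_customer exists
```

The same discipline applies at scale: confirm the planner's choice changed, then measure latency, rather than assuming an index helped.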

Posted 11 hours ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

At Franklin Templeton, we’re driving our industry forward by developing new and innovative ways to help our clients achieve their investment goals. Our dynamic and diversified firm spans asset management, wealth management, and fintech, offering many ways to help investors make progress toward their goals. Our talented teams working around the globe bring expertise that’s both broad and unique. From our welcoming, inclusive, and flexible culture to our global and diverse business, we offer opportunities not only to help you reach your potential but also to contribute to our clients’ achievements. Come join us in delivering better outcomes for our clients around the world! What is the Senior Software Engineer responsible for? The FTT AI & Digital Transformation group is a newly established team within Franklin Templeton Technologies, the Technology function within Franklin Templeton Investments. The core mandate of this role is to bring innovative digital investment products and solutions to market leveraging a patented and innovative digital wealth tech/fintech product - Goals Optimization Engine (GOE) - built with several years of academic research in mathematical optimization, probability theory and AI techniques at its core. The mandate also extends to leveraging cutting edge AI such as Generative AI in addition to Reactive AI to create value within various business functions within Franklin Templeton such as Investment Solutions, Portfolio Management, Sales & Distribution, Marketing, HR functions among others in a responsible and appropriate manner. The possibilities are limitless here and this would be a fantastic opportunity for self-motivated and driven professionals to make significant contributions to the organization and to themselves. What are the ongoing responsibilities of Senior Software Engineer? 
The Senior Software Engineer provides expertise and experience in application development and production support activities to support business needs:
- Architect, build, and optimize back-end systems, APIs, and databases to support seamless front-end interactions
- Write clean, efficient, and maintainable code with strong documentation across the stack
- Integrate AI-assisted development workflows using tools like GitHub Copilot to accelerate delivery
- Collaborate closely with product managers, designers, and other developers to deliver high-quality features
- Engage in user acceptance testing (UAT) and support test execution with analysts and stakeholders
- Build and deploy back-end services in Python using frameworks like Django or Flask
- Ensure application security, performance, and scalability through robust testing and peer code reviews
- Build for scalability, observability, and resilience in a multi-tenant, white-label setup
- Debug and troubleshoot issues across the entire stack, from the database to the front-end
- Participate in sprint planning, backlog grooming, and release planning to deliver high-quality features on time
- Stay current with industry trends, tools, and best practices to continuously improve development processes
- Conduct peer code reviews, static code analysis, and performance tuning to maintain high development standards
- Adapt to ambiguity and rapidly evolving conditions, viewing changes as opportunities to introduce structure and order when appropriate
- Review the source code and designs of peers, incorporating advanced business domain knowledge
- Offer vocal involvement in design and implementation discussions; provide alternate views on software and product design characteristics to strengthen final decisions
- Participate in defining the technology roadmap

What ideal qualifications, skills & experience would help someone to be successful?

Education And Experience
- 8+ years of experience in software development
- A bachelor's degree in computer science, engineering, or related fields; candidates from Tier 1 or Tier 2 institutions in India (e.g., IITs, BITS Pilani, IIITs, NITs, etc.) are strongly preferred
- Strong understanding of RESTful API design and development
- Extensive experience building back-end services using Python (Django, Flask)
- Familiarity with message brokers and event-driven architecture (e.g., Kafka)
- Familiarity with Node.js and other back-end frameworks as a bonus
- Familiarity with Karpenter for dynamic Kubernetes cluster autoscaling and optimizing compute resource utilization
- Familiarity with Datadog or Kibana for application monitoring, alerting, and observability dashboards for diagnosing performance bottlenecks using telemetry data
- Experience working with cloud platforms (AWS, GCP, or Azure) and containerization tools (Docker, Kubernetes)
- Experience integrating observability tools into CI/CD pipelines and production environments
- Proficiency in databases, both relational (PostgreSQL, MySQL) and NoSQL (MongoDB)
- Proficiency in writing unit test cases
- Strong understanding of API development, authentication, and security protocols such as OAuth and JWT
- Hands-on experience with DevOps practices and CI/CD pipelines
- Strong proficiency in using AI tools such as GitHub Copilot
- Excellent analytical and problem-solving skills with a proactive, solution-oriented mindset
- Strong communication and collaboration abilities in team environments
- A passion for building user-centric, reliable, and scalable applications
- Bonus: experience with CMS-integrated backends or regulated industries (finance, healthcare, etc.)

Job Level: Individual Contributor
Work Shift Timings: 2:00 PM - 11:00 PM IST

Experience our welcoming culture and reach your professional and personal potential! Our culture is shaped by our diverse global workforce and strongly held core values. Regardless of your interests, lifestyle, or background, there’s a place for you at Franklin Templeton.
We provide employees with the tools, resources, and learning opportunities to help them excel in their career and personal life. By joining us, you will become part of a culture that focuses on employee well-being and provides multidimensional support for a positive and healthy lifestyle. We understand that benefits are at the core of employee well-being and may vary depending on individual needs. Whether you need support for maintaining your physical and mental health, saving for life’s adventures, taking care of your family members, or making a positive impact in your community, we aim to have them covered.

Highlights Of Our Benefits Include
- Professional development growth opportunities through in-house classes and over 150 web-based training courses
- An educational assistance program to financially help employees seeking continuing education
- Medical, life, and personal accident insurance benefits for employees; medical insurance also covers employees' dependents (spouses, children, and dependent parents)
- Life insurance for protection of employees' families
- Personal accident insurance for protection of employees and their families
- Personal loan assistance
- Employee Stock Investment Plan (ESIP)
- 12 weeks of paternity leave
- Onsite fitness center, recreation center, and cafeteria
- Transport facility
- Child day care facility for women employees
- Cricket grounds and gymnasium
- Library
- Health center with doctor availability
- HDFC ATM on the campus

Learn more about the wide range of benefits we offer at Franklin Templeton. Franklin Templeton is an Equal Opportunity Employer.
We are committed to providing equal employment opportunities to all applicants and existing employees, and we evaluate qualified applicants without regard to ancestry, age, color, disability, genetic information, gender, gender identity, or gender expression, marital status, medical condition, military or veteran status, national origin, race, religion, sex, sexual orientation, and any other basis protected by federal, state, or local law, ordinance, or regulation. Franklin Templeton is committed to fostering a diverse and inclusive environment. If you believe that you need an accommodation or adjustment to search for or apply for one of our positions, please send an email to accommodations@franklintempleton.com. In your email, please include the accommodation or adjustment you are requesting, the job title, and the job number you are applying for. It may take up to three business days to receive a response to your request. Please note that only accommodation requests will receive a response.

Posted 12 hours ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Company: Continental
Location: Bangalore
Experience: 3-6 yrs

Job Description

Overview of the role: We are looking for a Backend Developer who is extremely passionate about backend software development for our wide range of cloud-based web/mobile B2B and B2C products targeted towards a wide range of industries and applications. You will be based in our new ground-breaking innovation hub with a start-up atmosphere and get an opportunity to work and collaborate with internal and external technology specialists in areas of AI, IoT, and VR/AR. Do you want to be a part of this exciting journey and make a difference?

Key Responsibilities:
- Design, develop, and maintain scalable and reliable backend systems and APIs using modern technologies and best practices
- Own a module and work closely with the Tech Lead
- Identify, prioritize, and execute tasks in the software development life cycle
- Write clean, efficient, and maintainable code following best practices and coding standards
- Optimize backend systems for performance, scalability, and reliability
- Implement security best practices to protect sensitive data and ensure compliance with security standards
- Troubleshoot and debug issues and provide timely resolutions to ensure smooth operation of backend systems
- Work closely with DevOps and Infrastructure teams to deploy and monitor backend services

Qualifications

Technical Skills
- Strong experience in Node.js as the backend technology
- Experience in Java and Python is a plus
- Strong experience working with microservice architecture
- Good experience working with the AWS cloud platform
- Good experience with CI/CD tools and methodologies is a plus
- Experience using various relevant tools for unit testing, code quality, etc.
- Strong experience building web front ends using Angular as the front-end technology is a plus

Other Skills
- Experience with agile software development methodologies
- Excellent communication skills in English (spoken and written)
- Great team player with the ability to work in a highly international team
- Willingness to sometimes travel nationally and internationally to various Continental R&D centers and external development partner locations
- Willingness to learn new things

Experience
- Around 3 to 6 years of experience overall
- 3+ years of experience in backend development
- 3+ years with AWS services such as API Gateway, Lambda, DynamoDB, S3, etc.
- 2+ years in CI/CD topics
- 1+ years using tools like SonarQube, Datadog, AppDynamics, etc.
- 3+ years in Agile delivery
- AWS certification is a plus
- Experience working with tools like JIRA, Confluence, Git, Jenkins, etc.
- BE in engineering with a focus on computer science/software engineering, MCA with professional experience, or other relevant education streams with strong tech experience can be considered

Posted 13 hours ago

Apply

4.0 years

0 Lacs

India

Remote

Job Title: Monitoring & Observability Engineer – Datadog Specialist
Experience: 4+ Years
Location: [Specify Location or Remote]
Job Type: Full-Time

Job Summary: We are looking for a talented Observability Engineer with hands-on experience in Datadog to enhance our infrastructure and application monitoring capabilities. The ideal candidate will have a strong understanding of performance monitoring, alerting, and observability in cloud-native environments.

Key Responsibilities:
- Design, implement, and maintain observability solutions using Datadog for applications, infrastructure, and cloud services
- Set up dashboards, monitors, and alerts to proactively detect and resolve system issues
- Collaborate with DevOps, SRE, and application teams to define SLOs, SLIs, and KPIs for performance monitoring
- Integrate Datadog with services such as AWS, Kubernetes, CI/CD pipelines, and logging tools
- Conduct performance tuning and root cause analysis of production incidents
- Automate observability processes using infrastructure-as-code and scripting (e.g., Terraform, Python)
- Stay up to date with the latest features and best practices in Datadog and the observability space
Must-Have Skills: 4+ years of experience in monitoring/observability, with 2+ years hands-on experience in Datadog Strong experience with Datadog APM, infrastructure monitoring, custom metrics, and dashboards Familiarity with cloud platforms like AWS, GCP, or Azure Experience monitoring Kubernetes, containers, and microservices Good knowledge of log management, tracing, and alert tuning Proficient with scripting (Python, Shell) and IaC tools (Terraform preferred) Solid understanding of DevOps/SRE practices and incident management Nice-to-Have Skills: Datadog certifications (e.g., Datadog Certified Observability Engineer) Experience integrating Datadog with CI/CD tools, ticketing systems, and chatops Familiarity with other monitoring tools (e.g., Prometheus, Grafana, New Relic, Splunk) Knowledge of performance testing tools (e.g., JMeter, k6)
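Automating monitors with infrastructure-as-code, as the responsibilities above describe, comes down to generating monitor definitions. Below is a sketch that assembles a Datadog metric-monitor payload as a plain dict; the metric, thresholds, and notification handle are illustrative, and the overall shape mirrors what the Datadog monitors API and Terraform provider accept.

```python
def cpu_monitor(threshold, warning):
    """Build a Datadog metric-alert monitor definition (illustrative values)."""
    query = f"avg(last_5m):avg:system.cpu.user{{*}} > {threshold}"
    return {
        "name": "High CPU on any host",
        "type": "metric alert",
        "query": query,
        "message": "CPU above threshold for 5m. @slack-ops",  # handle is a placeholder
        "options": {
            "thresholds": {"critical": threshold, "warning": warning},
            "notify_no_data": True,
            "no_data_timeframe": 10,
        },
    }
```

Generating monitors this way (rather than clicking them together in the UI) is what makes alert coverage reviewable and repeatable across environments.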

Posted 13 hours ago

Apply

0 years

0 Lacs

India

Remote

We're Hiring: AWS DevOps Engineer Intern
Location: Remote
Duration: 6 months
Salary: Unpaid

We're looking for a motivated DevOps Intern to join our cloud infrastructure team. You'll gain hands-on experience with AWS services, CI/CD pipelines, Docker, Terraform, and more, supporting real-world deployments and automation tasks.

What You’ll Work On:
Deploying & managing AWS infrastructure (EC2, S3, IAM, etc.)
Building CI/CD pipelines (GitHub Actions, Jenkins, CodePipeline)
Writing automation scripts & Infrastructure as Code (Terraform/CloudFormation)
Containerization with Docker/Kubernetes
Monitoring with CloudWatch, Prometheus, etc.
Understanding of monitoring/logging tools (e.g., ELK, Datadog, CloudWatch)

What We’re Looking For:
Familiarity with AWS basics & Linux
Understanding of Git and DevOps concepts
Eagerness to learn cloud tools & best practices

A great opportunity to learn, build, and grow with our experienced DevOps team. Interested? Apply now or reach out at career@priyaqubit.com
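The IAM work mentioned above commonly means writing policy documents. Here is a sketch that generates a read-only S3 policy; the bucket name is a placeholder, and the statement shape follows the standard IAM policy grammar ("Version", "Statement", "Effect", "Action", "Resource").

```python
import json

def s3_read_policy(bucket):
    """Render a least-privilege, read-only IAM policy for one S3 bucket."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:ListBucket"],
                # ListBucket applies to the bucket ARN, GetObject to its objects
                "Resource": [
                    f"arn:aws:s3:::{bucket}",
                    f"arn:aws:s3:::{bucket}/*",
                ],
            }
        ],
    }
    return json.dumps(policy, indent=2)
```

A document like this can then be attached to a role or user via the console, CLI, or Terraform.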

Posted 13 hours ago

Apply

10.0 years

0 Lacs

Delhi, India

On-site

About the Role: We are seeking an Engineering Manager to join the SMC engineering organization. As a hands-on people leader, you will own the charter of many backend web applications that drive multi-million dollar revenue for SMC. Your design, architecture, and people management expertise will help us scale the technology that powers industry-defining mobile applications, catering to millions of trading enthusiasts globally. Our EMs and Sr. EMs work directly with product managers and business leaders with minimal hierarchical overhead to understand key business goals, design the technology strategy, and take accountability for moving key business metrics. They are also responsible for driving technical innovation and agile methodologies without losing sight of the big picture. The ideal candidate will have consistent growth in software engineering roles in consumer Internet or SaaS companies, with increasing ownership of software delivery and people management year on year.

Opportunities we offer:
Develop products that will disrupt the Fintech market in India and internationally.
Build, lead, and develop top technical talent in engineering.
Learn scalable software development practices and technologies from proven technology experts.

What We Look For:
10+ years of experience in software development and 4+ years in engineering management leading teams of 10 or more backend or full-stack engineers.
7+ years of experience developing consumer-facing or SaaS applications on Amazon Web Services, Microsoft Azure, or Google Cloud.
Several years of previous experience as a Technical Lead, Staff, or Principal Engineer developing web services or web applications in Node.js, Python, Go, React, Next.js, or Java.
Excellent knowledge of microservices architecture and distributed design patterns, and a proven track record of architecting highly scalable and fault-tolerant web applications catering to millions of end users.
Sound understanding of SQL databases like MySQL or PostgreSQL and NoSQL databases like Cassandra and MongoDB.
Extensive usage of message brokers, caching, and search technologies like Kafka/RabbitMQ, Redis/Memcached, or Elasticsearch.
Experience running containerized workloads on Kubernetes or OpenShift.
Strong understanding of computer science concepts, data structures, and algorithms.
Excellent communication skills and a strong inclination towards people growth, team development, and the growth mindset required to build high-performance engineering teams.

Bonus points for:
Experience working in a Fintech/start-up culture.
Sound knowledge of application security.
Extensive experience using observability, telemetry, and cloud security tools like the ELK stack, Datadog, Dynatrace, Prometheus, Snyk, etc.

Posted 14 hours ago

Apply

5.0 - 9.0 years

0 Lacs

thiruvananthapuram, kerala

On-site

You will be responsible for designing, developing, and maintaining server-side applications, APIs, and services using Python. Your key tasks will include optimizing applications for performance, scalability, and reliability, writing clean and efficient code following coding standards and best practices, and guiding junior developers to ensure quality and knowledge sharing. You will also be expected to implement unit and integration tests to maintain code robustness, set up and manage CI/CD pipelines using tools like Jenkins, GitLab CI, or CircleCI, and work with DevOps to deploy applications on cloud platforms, preferably Google Cloud Platform (GCP). In addition to the above, you should have experience in designing and developing cloud-native applications using APIs, containers, and Kubernetes. You will use GCP services to build scalable, reliable, and efficient applications, follow security best practices, manage access control, and ensure compliance on GCP. Collaboration with DevOps, frontend developers, and product managers for smooth integration and deployment will be crucial. Furthermore, you will design and manage SQL and NoSQL databases such as PostgreSQL, MySQL, and MongoDB, optimizing database queries, handling migrations, and ensuring data security and integrity. Improving the architecture and infrastructure of the codebase, following best practices for application and data security, and monitoring application performance will also be part of your responsibilities. Your role will involve building responsive and dynamic user interfaces using JavaScript and the Angular framework to deliver a seamless user experience across devices. You will develop, maintain, and optimize reusable Angular components to promote consistency, enhance UI performance, and reduce development time, working closely with UX/UI designers to translate designs into high-quality code. 
In terms of key responsibilities, you will design, develop, and maintain backend applications, APIs, and services using Python; write clean and scalable code following industry standards and best practices; optimize application performance; and ensure high availability and scalability. You will also review code, mentor junior developers, implement unit and integration tests, and collaborate with DevOps to deploy applications on cloud platforms. Your primary skills should include a Bachelor's or Master's degree in Computer Science, Information Technology, or a related field; 5-7 years of experience as a Python developer focusing on product development (BE+FE development); hands-on experience in AngularJS; and proven experience in designing and deploying scalable applications and microservices. Additionally, you should have expertise in Python (FastAPI, Flask/Django), API development (RESTful services), Google Cloud Platform (GCP), database management systems (PostgreSQL, MySQL, MongoDB), CI/CD pipelines (Jenkins, GitLab CI, CircleCI), frontend development (JavaScript, Angular), Git, unit and integration testing, and security principles, authentication, and data protection. Experience with monitoring tools (Prometheus, Grafana, Datadog), security and compliance standards (GDPR, PCI, SOC 2), DevOps collaboration, UX/UI collaboration for Angular components, asynchronous programming (asyncio, aiohttp), big data technologies like Spark or Hadoop, and machine learning libraries (TensorFlow, PyTorch) would be considered secondary skills.
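The unit testing this role calls for can be done with the standard library alone. A minimal illustration follows; the function under test is a hypothetical stand-in for real business logic.

```python
import unittest

def apply_discount(price, percent):
    """Return price reduced by percent, rounded to cents; reject bad input."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    def test_basic_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 120)
```

Saved as a module, this runs under `python -m unittest`; the same structure extends naturally to integration tests against a test database or API client.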

Posted 20 hours ago

Apply

3.0 - 7.0 years

0 Lacs

maharashtra

On-site

You will be joining StellarTech, an international, rapidly growing product IT company, as a Technical Project Manager (DevOps), where your strong leadership skills will play a key role. Your responsibilities will include managing and prioritizing the backlog of DevOps tasks to ensure efficient execution aligned with company objectives. You will oversee DevOps project planning, tracking, and delivery using Agile methodologies such as Scrum and Kanban. Working closely with engineering leadership, you will define and track Service Level Agreements (SLAs) for the centralized DevOps function. Collaboration across cross-functional teams, including Product, Platform, Data, and R&D, will be essential for smooth communication and effective coordination. In the realm of DevOps operations and planning, you will coordinate the execution of the DevOps roadmap, which encompasses infrastructure automation, CI/CD improvements, and cloud cost optimizations. You will establish incident management processes, including on-call rotations and tooling setup (such as PagerDuty), to ensure efficient workflows. Improving the DevOps team's efficiency in managing infrastructure as code and facilitating the integration and adoption of DevOps tools and best practices across the organization will be part of your duties. Technical incident management will be a crucial aspect of your role, where you will organize, implement, and manage the technical incident response process to minimize downtime and conduct effective root cause analysis. Owning incident response tooling setup and automation, along with defining and documenting post-mortem processes and best practices for incident handling, will be essential tasks. Monitoring and observability will also fall under your purview, where you will ensure the effective implementation of monitoring tools like Prometheus, Grafana, OpenTelemetry, Datadog, and Sentry or similar solutions.
Driving standardization of monitoring metrics and logging across teams to enhance system reliability will be a key focus area. To excel in this role, you should have proven experience in project management using Agile/Scrum methodologies and tools like Jira and Confluence. Strong organizational skills with attention to detail, effective cross-functional communication abilities, stakeholder management, and high stress resilience are essential. Technical expertise in AWS services, DevOps tooling, monitoring, and observability tools is also required. Preferred qualifications include previous experience as a DevOps Engineer or System Administrator, hands-on experience with incident response frameworks and automation, knowledge of cloud security best practices, scripting abilities in Bash, Python, or Go, experience in Linux system administration, familiarity with databases, and DBA and SQL experience. Working with StellarTech will offer you impactful work shaping the company's future, an innovative environment encouraging experimentation, flexibility in a remote or hybrid role, health benefits, AI solutions, a competitive salary, work-life balance with flexible paid time off, and a collaborative culture where you will work alongside driven professionals.
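Defining and tracking SLAs, as this posting describes, usually starts from error-budget arithmetic: an availability target over a window leaves a fixed budget of allowed downtime. A minimal sketch (the SLO targets and windows below are illustrative):

```python
def error_budget_minutes(slo_target, window_days=30):
    """Minutes of allowed downtime for a given SLO over the window.

    E.g., a 99.9% SLO over 30 days leaves 0.1% of 43,200 minutes.
    """
    total_minutes = window_days * 24 * 60
    return round((1 - slo_target) * total_minutes, 1)
```

Tracking how fast incidents consume this budget (the burn rate) is what turns an SLA from a document into an operational signal.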

Posted 22 hours ago

Apply

8.0 - 12.0 years

0 Lacs

karnataka

On-site

You are an experienced DevOps Engineer with over 8 years of experience, looking for an opportunity to join a team that values building and maintaining scalable AWS infrastructure, automating deployments, and ensuring high system reliability.

Your key responsibilities will include:
Designing and implementing scalable and reliable AWS infrastructure
Developing and maintaining CI/CD pipelines using Jenkins or GitHub Actions
Building, operating, and maintaining Kubernetes clusters (including Helm charts)
Using Terraform or CloudFormation for Infrastructure as Code
Automating system tasks using Python or GoLang
Collaborating with developers and SREs to ensure resilient and efficient systems
Monitoring and troubleshooting performance using Datadog, Prometheus, and Grafana

To excel in this role, you should possess hands-on, production-level expertise in Docker and Kubernetes, strong knowledge of CI/CD tools such as Jenkins or GitHub Actions, proficiency in Terraform or CloudFormation, solid scripting skills in Python or GoLang, experience with observability tools like Datadog, Prometheus, and Grafana, strong problem-solving and collaboration skills, and a Bachelor's degree in Computer Science, IT, or a related field. Certifications in Kubernetes or Terraform would be a plus.

If you are ready to take on this exciting opportunity, apply now by sending your resume to preeti.verma@qplusstaffing.com.
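Automating system tasks in Python, as this role requires, often means retrying flaky operations (API calls, deploy steps) rather than failing outright. A minimal exponential-backoff sketch; the attempt count and delays are illustrative defaults.

```python
import time

def retry(func, attempts=3, base_delay=0.1):
    """Call func(), retrying on exception with a doubling delay between tries."""
    for attempt in range(attempts):
        try:
            return func()
        except Exception:
            if attempt == attempts - 1:
                raise  # budget exhausted: surface the last error
            time.sleep(base_delay * (2 ** attempt))
```

In production scripts this is usually paired with jitter and a whitelist of retryable exception types, so permanent failures fail fast.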

Posted 22 hours ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Position Overview
Job Title: Cloud Engineer, AS
Location: Pune, India

Role Description
A Google Cloud Platform (GCP) Engineer is responsible for designing, implementing, and managing cloud infrastructure and services on Google Cloud. The Platform Engineering Team is responsible for building and maintaining the foundational infrastructure, tooling, and automation that enable efficient, secure, and scalable software development and deployment. The team focuses on creating a self-service platform for developers and operational teams, ensuring reliability, security, and compliance while improving developer productivity.
Design and manage scalable, secure, and cost-effective cloud infrastructure (GCP, AWS, Azure).
Implement Infrastructure as Code (IaC) using Terraform.
Implement security best practices for IAM, networking, encryption, and secrets management.
Ensure regulatory compliance (SOC 2, ISO 27001, PCI-DSS) by automating security checks.
Manage API gateways, service meshes, and secure service-to-service communication.
Enable efficient workload orchestration using Kubernetes and serverless technologies.

What We’ll Offer You
As part of our flexible scheme, here are just some of the benefits that you’ll enjoy:
Best-in-class leave policy
Gender-neutral parental leaves
100% reimbursement under the childcare assistance benefit (gender neutral)
Sponsorship for industry-relevant certifications and education
Employee Assistance Program for you and your family members
Comprehensive hospitalization insurance for you and your dependents
Accident and term life insurance
Complimentary health screening for ages 35 and above

Your Key Responsibilities
Cloud Infrastructure Management – Design, deploy, and manage scalable, secure, and cost-effective cloud environments on GCP.
Automation & Scripting – Develop Infrastructure as Code (IaC) using Terraform, Deployment Manager, or other tools.
Security & Compliance – Implement security best practices and IAM policies, and ensure compliance with organizational and regulatory standards.
Networking & Connectivity – Configure and manage VPCs, subnets, firewalls, VPNs, and interconnects for secure cloud networking.
CI/CD & DevOps – Set up CI/CD pipelines using Cloud Build, Jenkins, GitHub Actions, or similar tools for automated deployments.
Monitoring & Logging – Implement monitoring and alerting using Stackdriver (Cloud Operations), Prometheus, or third-party tools.
Cost Optimization – Analyze and optimize cloud spending by leveraging committed use discounts, autoscaling, and right-sizing resources.
Disaster Recovery & Backup – Design backup, high availability, and disaster recovery strategies using Cloud Storage, snapshots, and multi-region deployments.
Database Management – Deploy and manage GCP databases like Cloud SQL, BigQuery, Firestore, and Spanner.
Containerization & Kubernetes – Deploy and manage containerized applications using GKE (Google Kubernetes Engine) and Cloud Run.

Your Skills And Experience
Strong experience with GCP services like Compute Engine, Cloud Storage, IAM, networking, Kubernetes, and serverless technologies.
Proficiency in scripting (Python, Bash) and Infrastructure as Code (Terraform, CloudFormation).
Knowledge of DevOps practices, CI/CD tools, and GitOps workflows.
Understanding of security, IAM, networking, and compliance in cloud environments.
Experience with monitoring tools like Stackdriver, Prometheus, or Datadog.
Strong problem-solving skills and the ability to troubleshoot cloud-based infrastructure.
Google Cloud certifications (e.g., Associate Cloud Engineer, Professional Cloud Architect, or Professional DevOps Engineer) are a plus.

How We’ll Support You
Training and development to help you excel in your career.
Coaching and support from experts in your team.
A culture of continuous learning to aid progression.
A range of flexible benefits that you can tailor to suit your needs. About Us And Our Teams Please visit our company website for further information: https://www.db.com/company/company.htm We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
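The right-sizing arithmetic behind this posting's cost-optimization responsibility can be sketched in a few lines: given observed peak utilization, estimate the smallest machine size that still leaves headroom. The 30% headroom figure below is an illustrative assumption, not a GCP recommendation.

```python
import math

def rightsized_vcpus(current_vcpus, peak_utilization, headroom=0.3):
    """Smallest vCPU count keeping projected peak below (1 - headroom)."""
    needed = current_vcpus * peak_utilization / (1 - headroom)
    return max(1, math.ceil(needed))
```

For example, a 16-vCPU instance that peaks at 20% utilization could likely move to a 5-vCPU shape; combining this with committed-use discounts is where most of the savings come from.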

Posted 1 day ago

Apply

0 years

0 Lacs

Pune/Pimpri-Chinchwad Area

On-site

Position Type: Full time
Type Of Hire: Experienced (relevant combo of work and education)
Education Desired: Bachelor of Computer Engineering
Travel Percentage: 0%

Are you curious, motivated, and forward-thinking? At FIS you’ll have the opportunity to work on some of the most challenging and relevant issues in financial services and technology. Our talented people empower us, and we believe in being part of a team that is open, collaborative, entrepreneurial, passionate and above all fun.

Pune (two days in-office, three days virtual)

What You Will Be Doing
The Site Reliability Engineer will play a critical role in driving innovation and growth for the Banking Solutions, Payments and Capital Markets business. In this role, the candidate will have the opportunity to make a lasting impact on the company's transformation journey, drive customer-centric innovation and automation, and position the organization as a leader in the competitive banking, payments and investment landscape. Specifically, the Site Reliability Engineer will be responsible for the following:
Design and maintain monitoring solutions for infrastructure, application performance, and user experience
Implement automation tools to streamline tasks, scale infrastructure, and ensure seamless deployments
Ensure application reliability, availability, and performance, minimizing downtime and optimizing response times
Lead incident response, including identification, triage, resolution, and post-incident analysis
Conduct capacity planning, performance tuning, and resource optimization
Collaborate with security teams to implement best practices and ensure compliance
Manage deployment pipelines and configuration management for consistent and reliable app deployments
Develop and test disaster recovery plans and backup strategies
Collaborate with development, QA, DevOps, and product teams to align on reliability goals and incident response processes
Participate in on-call rotations and provide 24/7 support for critical incidents

What You Bring
Proficiency in development technologies, architectures, and platforms (web, API)
Experience with cloud platforms (AWS, Azure, Google Cloud) and IaC tools
Knowledge of monitoring tools (Prometheus, Grafana, Datadog) and logging frameworks (Splunk, ELK Stack)
Experience in incident management and post-mortem reviews
Strong troubleshooting skills for complex technical issues
Proficiency in scripting languages (Python, Bash) and automation tools (Terraform, Ansible)
Experience with CI/CD pipelines (Jenkins, GitLab CI/CD, Azure DevOps)
An ownership approach to engineering and product outcomes
Excellent interpersonal communication, negotiation, and influencing skills

What We Offer You
A work environment built on collaboration, flexibility and respect
A competitive salary and an attractive range of benefits designed to help support your lifestyle and wellbeing
Varied and challenging work to help you grow your technical skillset

Privacy Statement
FIS is committed to protecting the privacy and security of all personal information that we process in order to provide services to our clients. For specific information on how FIS protects personal information online, please see the Online Privacy Notice.

Sourcing Model
Recruitment at FIS works primarily on a direct sourcing model; a relatively small portion of our hiring is through recruitment agencies. FIS does not accept resumes from recruitment agencies which are not on the preferred supplier list and is not responsible for any related fees for resumes submitted to job postings, our employees, or any other part of our company. #pridepass
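Optimizing response times, as this SRE role requires, typically means tracking tail latency (p95/p99) rather than averages. A standard-library-only sketch; the sample data is illustrative.

```python
import statistics

def latency_percentiles(samples_ms):
    """Return (p95, p99) latency using inclusive quantiles over the samples."""
    # quantiles(n=100) yields 99 cut points; index 94 is p95, index 98 is p99
    qs = statistics.quantiles(samples_ms, n=100, method="inclusive")
    return qs[94], qs[98]
```

In practice these numbers feed dashboards and alert thresholds; averages hide exactly the slow requests that users notice.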

Posted 1 day ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

About Us SentiLink provides innovative identity and risk solutions, empowering institutions and individuals to transact confidently with one another. By building the future of identity verification in the United States and reinventing the currently clunky, ineffective, and expensive process, we believe strongly that the future will be 10x better. We’ve had tremendous traction and are growing extremely quickly. Already our real-time APIs have helped verify hundreds of millions of identities, beginning with financial services. In 2021, we raised a $70M Series B round, led by Craft Ventures to rapidly scale our best in class products. We’ve earned coverage and awards from TechCrunch, CNBC, Bloomberg, Forbes, Business Insider, PYMNTS, American Banker, LendIt, and have been named to the Forbes Fintech 50 list consecutively since 2023. Last but not least, we’ve even been a part of history -- we were the first company to go live with the eCBSV and testified before the United States House of Representatives. About The Role Are you passionate about creating world-class solutions that fuel product stability and continuously improve infrastructure operations? We’re looking for a driven Infrastructure Engineer to architect, implement, and maintain powerful observability systems that safeguard the performance and reliability of our most critical systems. In this role, you’ll take real ownership—collaborating with cross-functional teams to shape best-in-class observability standards, troubleshoot complex issues, and fine-tune monitoring tools to exceed SLA requirements. If you’re ready to design high-quality solutions, influence our technology roadmap, and make a lasting impact on our product’s success, we want to meet you! Responsibilities Improve alerting across SentiLink systems and services, developing high quality monitoring capabilities while actively reducing false positives. 
Troubleshoot, debug, and resolve infrastructure issues as they arise; participate in on-call rotations for production issues.
Define and refine Service Level Indicators (SLIs), Service Level Objectives (SLOs), and Service Level Agreements (SLAs) in collaboration with product and engineering teams.
Develop monitoring and alerting configurations using IaC solutions such as Terraform.
Build and maintain dashboards to provide visibility into system performance and reliability.
Collaborate with engineering teams to improve root cause analysis processes and reduce Mean Time to Recovery (MTTR).
Drive cost optimization for observability tools like Datadog, CloudWatch, and Sumo Logic.
Perform capacity testing to develop a deep understanding of infrastructure performance under load, and develop alerting based on the learnings.
Oversee, develop, and operate Kubernetes and service mesh infrastructure, ensuring smooth performance and reliability.
Investigate operational alerts, identify root causes, and compile comprehensive root cause analysis reports. Pursue action items relentlessly until they are thoroughly completed.
Conduct in-depth examinations of database operational issues, actively developing and improving database architecture, schema, and configuration for enhanced performance and reliability.
Develop and maintain incident response runbooks and improve processes to minimize service downtime.
Research and evaluate new observability tools and technologies to enhance system monitoring.

Requirements
5+ years of experience in cloud infrastructure, DevOps, or systems engineering.
Expertise in AWS and infrastructure-as-code development.
Experience with CI/CD pipelines and automation tools.
Experience managing observability platforms, building monitoring dashboards, and configuring high-quality, actionable alerting.
Strong understanding of Linux systems and networking.
Familiarity with container orchestration tools like Kubernetes or Docker.
Excellent analytical and problem-solving skills. Experience operating enterprise-size databases. Postgres, Aurora, Redshift, and OpenSearch experience is a plus Experience with Python or Golang is a plus Perks Employer paid group health insurance for you and your dependents 401(k) plan with employer match (or equivalent for non US-based roles) Flexible paid time off Regular company-wide in-person events Home office stipend, and more! Corporate Values Follow Through Deep Understanding Whatever It Takes Do Something Smart
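Reducing Mean Time to Recovery (MTTR), mentioned in this posting, presupposes measuring it consistently. A minimal sketch; the incident-record shape ("opened"/"resolved" timestamps) is an illustrative assumption.

```python
from datetime import datetime, timedelta

def mttr(incidents):
    """Average time from incident open to resolution across a set of records."""
    durations = [rec["resolved"] - rec["opened"] for rec in incidents]
    return sum(durations, timedelta()) / len(durations)
```

Real pipelines pull these timestamps from the incident tracker (e.g., PagerDuty or Jira exports) and trend MTTR per service and per severity.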

Posted 1 day ago

Apply

4.0 years

0 Lacs

Ahmedabad, Gujarat, India

Remote

Experience: 4+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Office (Ahmedabad)
Placement Type: Full-time permanent position
(*Note: This is a requirement for one of Uplers' clients - Attri)

What do you need for this opportunity?
Must-have skills required: Azure, Docker, TensorFlow, Python, Shell Scripting

About Attri
Attri is an AI organization that helps businesses initiate and accelerate their AI efforts. We offer the industry’s first end-to-end enterprise machine learning platform, empowering teams to focus on ML development rather than infrastructure. From ideation to execution, our global team of AI experts supports organizations in building scalable, state-of-the-art ML solutions. Our mission is to redefine businesses by harnessing cutting-edge technology and a unique, value-driven approach. With team members across continents, we celebrate diversity, curiosity, and innovation. We’re now looking for a Senior DevOps Engineer to join our fast-growing, remote-first team. If you're passionate about automation, scalable cloud systems, and supporting high-impact AI workloads, we’d love to connect.

What You'll Do (Responsibilities):
Design, implement, and manage scalable, secure, and high-performance cloud-native infrastructure on Azure.
Build and maintain Infrastructure as Code (IaC) using Terraform or CloudFormation.
Develop event-driven and serverless architectures using AWS Lambda, SQS, and SAM.
Architect and manage containerized applications using Docker, Kubernetes, ECR, ECS, or AKS.
Establish and optimize CI/CD pipelines using GitHub Actions, Jenkins, AWS CodeBuild & CodePipeline.
Set up and manage monitoring, logging, and alerting using Prometheus + Grafana, Datadog, and centralized logging systems.
Collaborate with ML Engineers and Data Engineers to support MLOps pipelines (Airflow, ML pipelines) and Bedrock with TensorFlow or PyTorch.
Implement and optimize ETL/data streaming pipelines using Kafka, EventBridge, and Event Hubs.
Automate operations and system tasks using Python and Bash, along with cloud CLIs and SDKs.
Secure infrastructure using IAM/RBAC and follow best practices in secrets management and access control.
Manage DNS and networking configurations using Cloudflare, VPC, and PrivateLink.
Lead architecture implementation for scalable and secure systems, aligning with business and AI solution needs.
Conduct cost optimization through budgeting, alerts, tagging, right-sizing resources, and leveraging spot instances.
Contribute to backend development in Python (web frameworks), REST/socket and gRPC design, and testing (unit/integration).
Participate in incident response, performance tuning, and continuous system improvement.

Good to Have:
Hands-on experience with ML lifecycle tools like MLflow and Kubeflow
Previous involvement in production-grade AI/ML projects or data-intensive systems
Startup or high-growth tech company experience

Qualifications:
Bachelor’s degree in Computer Science, Information Technology, or a related field.
5+ years of hands-on experience in a DevOps, SRE, or Cloud Infrastructure role.
Proven expertise in multi-cloud environments (AWS, Azure, GCP) and modern DevOps tooling.
Strong communication and collaboration skills to work across engineering, data science, and product teams.

Benefits:
Competitive salary 💸
Support for continual learning (free books and online courses) 📚
Leveling-up opportunities 🌱
Diverse team environment 🌍

How to apply for this opportunity?
Step 1: Click on Apply! and register or log in on our portal.
Step 2: Complete the screening form & upload your updated resume.
Step 3: Increase your chances of getting shortlisted & meet the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career.
We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
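The event-driven pipelines this posting lists (Kafka, EventBridge, Event Hubs) typically require consumers that tolerate redelivery, since most brokers guarantee at-least-once delivery. A minimal, illustrative idempotent-handler sketch; the in-memory set stands in for a durable store such as Redis or DynamoDB.

```python
class IdempotentConsumer:
    """Process each event id at most once, skipping redelivered duplicates."""

    def __init__(self, process):
        self._seen = set()       # stand-in for a durable dedup store
        self._process = process  # the actual side-effecting handler

    def handle(self, event):
        """Return True if processed, False if the event id was a duplicate."""
        event_id = event["id"]
        if event_id in self._seen:
            return False
        self._process(event)
        self._seen.add(event_id)
        return True
```

Marking the id as seen only after processing succeeds keeps a crashed handler retryable at the cost of possible reprocessing, which is the usual at-least-once trade-off.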

Posted 1 day ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Description and Requirements
We are looking for candidates with 5+ years of experience in the IT industry and strong AWS skills. Additionally, candidates should have solid knowledge of:
SQL
Splunk
APM tools (AppDynamics and Datadog preferred)
DevOps tools

We are primarily looking for a Cloud Support team, with a preference for candidates who have developer-level skills. The team will start with L1 support, gradually moving to L2, and eventually taking on L3 work in the long term.

Key Responsibilities:
Build and maintain CI/CD pipelines to ensure fast, safe, and reliable code deployment.
Administer and manage AWS infrastructure (VPC, EC2, S3, ELB, EBS, Route 53, ASM, etc.).
Manage Kubernetes clusters, including creating new kops clusters and building/deploying secrets, configs, and Docker-based containerized microservices.
Monitor and alert on infrastructure resources and application availability using APM and other monitoring tools.
Monitor system performance and availability, ensuring reliability, scalability, and security.
Automate routine tasks and optimize processes to improve efficiency.
Manage containers and orchestration tools (e.g., Docker, Kubernetes).
Troubleshoot and resolve issues in development, test, and production environments.
Ensure security best practices across the infrastructure and during application deployment.

Additional Job Description
Strong communication skills, both written and verbal, for interacting with customers and internal teams.
Ability to work effectively under pressure and manage multiple tasks simultaneously.

EEO Statement
At TELUS Digital, we enable customer experience innovation through spirited teamwork, agile thinking, and a caring culture that puts customers first. TELUS Digital is the global arm of TELUS Corporation, one of the largest telecommunications service providers in Canada.
We deliver contact center and business process outsourcing (BPO) solutions to some of the world's largest corporations in the consumer electronics, finance, telecommunications and utilities sectors. With global call center delivery capabilities, our multi-shore, multi-language programs offer safe, secure infrastructure, value-based pricing, skills-based resources and exceptional customer service - all backed by TELUS, our multi-billion dollar telecommunications parent. Equal Opportunity Employer At TELUS Digital, we are proud to be an equal opportunity employer and are committed to creating a diverse and inclusive workplace. All aspects of employment, including the decision to hire and promote, are based on applicants’ qualifications, merits, competence and performance without regard to any characteristic related to diversity.

Posted 1 day ago

Apply

8.0 years

0 Lacs

India

Remote

Overview CoinTracker makes cryptocurrency portfolio tracking and tax compliance simple. CoinTracker enables consumers and businesses to seamlessly track their cryptocurrency portfolio, investment performance, taxes, and more. We are a globally distributed team on a mission to enable everyone in the world to use crypto with peace of mind. Learn more about our mission, culture, and hiring process. Some things we’re proud of 🛠️ Building foundational tools in the cryptocurrency space 📄 Over 1M tax forms generated 💲 $80B+ in cryptocurrency is tracked on CoinTracker (~over 5% of the entire crypto market) 🤝 Partnered with Coinbase, H&R Block, Intuit TurboTax, MetaMask, OpenSea, Phantom, Solana, and Uniswap 🗺️ Founders: Jon previously built TextNow (200M downloads), Chandan was previously a product manager at Google & Google[x] 💼 $100M+ venture capital raised from Accel, General Catalyst, Y Combinator, Initialized Capital, Coinbase Ventures, Kraken Ventures, Intuit Ventures, 776 Ventures, Balaji Srinivasan, Claire Hughes Johnson, Gokul Rajaram, Serena Williams, Zach Perret 🌴 Awesome benefits Your mission Join our close-knit, early-stage distributed team, where we tackle exciting technical challenges and create transformative crypto products that give people peace of mind. What You Will Do You will be a part of the newly formed Integration Expansion Team that works closely with our Integration engineering team. You’ll own and deliver new Integrations (blockchains & exchanges) by using our existing Integrations platform system to scale our Integrations coverage and assist in troubleshooting critical Integrations related issues. Collaborate with engineering, customer experience & product teams within Cointracker. Participate in hiring efforts to help scale the Integration Expansion Team. What We Look For We are hiring experienced backend software engineers with 8+ years of non-internship experience to help build and scale our new Integrations Expansion team. 
2+ years of experience as a tech lead or team lead. Have strong CS and system design fundamentals, write high-quality code, value software testing, and uphold best practices in engineering. Strong Python knowledge and experience working with third-party APIs is preferred. Familiarity with AWS or GCP and cloud fundamentals is preferred. Drawn to an early-stage, high-growth startup environment. Ability to read blockchain data / understanding of web3 strongly preferred. Background in the data engineering domain and working closely with a customer support team is a plus. Able to work effectively in a remote setting and able to overlap with our core hours of 9 AM to 12 PM Pacific Time. Our engineering process includes code reviews, continuous integration, multiple daily automated deployments to production, and automated testing with >85% code coverage. Our tech stack is Web: HTML, TypeScript, React, React Native, Styled-Components Mobile: React Native, Expo, GraphQL Backend: Python, Flask, GraphQL, Postgres, BigTable, Redis, Python RQ Infrastructure: GCP, Temporal, Terraform, PostgreSQL, Docker, Pub/Sub, Datadog, PagerDuty You don’t need to know any or all of these, but be willing to learn!

Posted 1 day ago

Apply

4.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Senior DevOps Engineer Experience: 4-7 Years Salary: Competitive Preferred Notice Period: Within 30 Days Opportunity Type: Onsite (Ahmedabad) Placement Type: Permanent (*Note: This is a requirement for one of Uplers' clients) Must-have skills: Azure OR Docker, TensorFlow, Python OR Shell Scripting Attri (one of Uplers' clients) is looking for: a Senior DevOps Engineer who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, then we want to hear from you. What You'll Do (Responsibilities): Design, implement, and manage scalable, secure, and high-performance cloud-native infrastructure across Azure. Build and maintain Infrastructure as Code (IaC) using Terraform or CloudFormation. Develop event-driven and serverless architectures using AWS Lambda, SQS, and SAM. Architect and manage containerized applications using Docker, Kubernetes, ECR, ECS, or AKS. Establish and optimize CI/CD pipelines using GitHub Actions, Jenkins, AWS CodeBuild & CodePipeline. Set up and manage monitoring, logging, and alerting using Prometheus + Grafana, Datadog, and centralized logging systems. Collaborate with ML Engineers and Data Engineers to support MLOps pipelines (Airflow, ML Pipelines) and Bedrock with TensorFlow or PyTorch. Implement and optimize ETL/data streaming pipelines using Kafka, EventBridge, and Event Hubs. Automate operations and system tasks using Python and Bash, along with Cloud CLIs and SDKs. Secure infrastructure using IAM/RBAC and follow best practices in secrets management and access control. Manage DNS and networking configurations using Cloudflare, VPC, and PrivateLink. Lead architecture implementation for scalable and secure systems, aligning with business and AI solution needs. Conduct cost optimization through budgeting, alerts, tagging, right-sizing resources, and leveraging spot instances. 
Contribute to backend development in Python (Web Frameworks), REST/Socket and gRPC design, and testing (unit/integration). Participate in incident response, performance tuning, and continuous system improvement. Good to Have: Hands-on experience with ML lifecycle tools like MLflow and Kubeflow Previous involvement in production-grade AI/ML projects or data-intensive systems Startup or high-growth tech company experience Qualifications: Bachelor’s degree in Computer Science, Information Technology, or a related field. 5+ years of hands-on experience in a DevOps, SRE, or Cloud Infrastructure role. Proven expertise in multi-cloud environments (AWS, Azure, GCP) and modern DevOps tooling. Strong communication and collaboration skills to work across engineering, data science, and product teams. How to apply for this opportunity: Easy 3-Step Process: 1. Click On Apply! And Register or log in on our portal 2. Upload updated Resume & Complete the Screening Form 3. Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring and getting hired reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant product and engineering job opportunities and progress in their career. (Note: There are many more opportunities apart from this on the portal.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
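The monitoring duties in this posting lean on the Prometheus text exposition format that Grafana dashboards are ultimately built on. A minimal sketch using only the Python standard library (the metric name `app_requests_total` and the `/metrics` path are illustrative; real services normally use the official `prometheus_client` library instead of hand-rolling this):

```python
# Minimal sketch: serving a counter in the Prometheus text exposition
# format with only the standard library. Illustrative only.
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading

REQUEST_COUNT = 0  # naive in-process counter for demonstration

def render_metrics() -> str:
    # Prometheus text format: HELP/TYPE comment lines, then samples.
    return (
        "# HELP app_requests_total Total HTTP requests served.\n"
        "# TYPE app_requests_total counter\n"
        f"app_requests_total {REQUEST_COUNT}\n"
    )

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        global REQUEST_COUNT
        if self.path == "/metrics":
            body = render_metrics().encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; version=0.0.4")
            self.end_headers()
            self.wfile.write(body)
        else:
            REQUEST_COUNT += 1  # count every non-metrics request
            self.send_response(200)
            self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass

if __name__ == "__main__":
    # Port 0 lets the OS pick a free port; Prometheus would scrape
    # http://<host>:<port>/metrics on a fixed, configured port.
    server = HTTPServer(("127.0.0.1", 0), MetricsHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
```

A scrape of `/metrics` returns one sample line per metric, which Prometheus stores as a time series and Grafana plots.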

Posted 1 day ago

Apply

4.0 years

20 - 30 Lacs

India

Remote

Job Title: Senior Golang Backend Developer Company Type: IT Services Company Employment Type: Full-Time Location: Ahmedabad / Rajkot (Preferred) or 100% Remote (Open) Experience Required: 4+ Years (Minimum 3.5 years of hands-on experience with Golang) About The Role We are hiring a Senior Golang Backend Developer for a leading service-based tech company based in Ahmedabad. If you're a passionate backend engineer who thrives on building scalable APIs, working on microservices architecture, and deploying applications using serverless frameworks on AWS, this role is for you! This is a full-time opportunity, and while we prefer candidates who can work from Ahmedabad or Rajkot, we're also open to 100% remote working for the right candidate. Key Responsibilities Design, build, and maintain RESTful APIs and backend services using Golang Develop scalable solutions using Microservices Architecture Optimize system performance, reliability, and maintainability Work with AWS Cloud Services (Lambda, SQS, SNS, S3, DynamoDB, etc.) 
and implement Serverless Architecture Ensure clean, maintainable code through best practices and code reviews Collaborate with cross-functional teams for smooth integration and architecture decisions Monitor, troubleshoot, and improve application performance using observability tools Implement CI/CD pipelines and participate in Agile development practices Required Skills & Experience 4+ years of total backend development experience 3.5+ years of strong, hands-on experience with Golang Proficient in designing and developing RESTful APIs Solid understanding and implementation experience of Microservices Architecture Proficient in AWS cloud services, especially Lambda, SQS, SNS, S3, and DynamoDB Experience with Serverless Architecture Familiarity with Docker, Kubernetes, GitHub Actions/GitLab CI Understanding of concurrent programming and performance optimization Experience with observability and monitoring tools (e.g., Datadog, Prometheus, New Relic, OpenTelemetry) Strong communication skills and ability to work in Agile teams Fluency in English communication is a must Nice to Have Experience with Domain-Driven Design (DDD) Familiarity with automated testing frameworks (TDD/BDD) Prior experience working in distributed remote teams Why You Should Apply Opportunity to work with modern tools and cloud-native technologies Flexibility to work remotely or from Ahmedabad/Rajkot Supportive, collaborative, and inclusive team culture Competitive salary with opportunities for growth and upskilling Skills: aws lambda, serverless architecture, observability tools, behavior-driven development (bdd), amazon sqs, go (golang), domain-driven design (ddd), golang, aws cloud services, restful apis, aws, gitlab ci, microservices architecture, github actions, automated testing frameworks, microservices, kubernetes, docker

Posted 1 day ago

Apply

0.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Line of Service Advisory Industry/Sector Not Applicable Specialism Risk Management Level Associate Job Description & Summary A career within Internal Audit services will provide you with an opportunity to gain an understanding of an organisation’s objectives, regulatory and risk management environment, and the diverse needs of their critical stakeholders. We focus on helping organisations look deeper and see further, considering areas like culture and behaviours to help improve and embed controls. In short, we seek to address the right risks and ultimately add value to their organisation. At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. Responsibilities: Architecture Design: · Design and implement scalable, secure, and high-performance architectures for Generative AI applications. 
· Integrate Generative AI models into existing platforms, ensuring compatibility and performance optimization. Model Development and Deployment: · Fine-tune pre-trained generative models for domain-specific use cases. · Define the data collection, sanitization, and data preparation strategy for model fine-tuning. · Well versed in machine learning algorithms such as supervised, unsupervised, and reinforcement learning, and deep learning. · Well versed in ML models such as linear regression, decision trees, gradient boosting, random forest, and k-means. · Evaluate, select, and deploy appropriate Generative AI frameworks (e.g., PyTorch, TensorFlow, CrewAI, AutoGen, LangGraph, agentic code, agent flow). Innovation and Strategy: · Stay up to date with the latest advancements in Generative AI and recommend innovative applications to solve complex business problems. · Define and execute the AI strategy roadmap, identifying key opportunities for AI transformation. · Good exposure to agentic design patterns. Collaboration and Leadership: · Collaborate with cross-functional teams, including data scientists, engineers, and business stakeholders. · Mentor and guide team members on AI/ML best practices and architectural decisions. · Should be able to lead a team of data scientists, GenAI engineers, and software developers. Performance Optimization: · Monitor the performance of deployed AI models and systems, ensuring robustness and accuracy. · Optimize computational costs and infrastructure utilization for large-scale deployments. Ethical and Responsible AI: · Ensure compliance with ethical AI practices, data privacy regulations, and governance frameworks. · Implement safeguards to mitigate bias, misuse, and unintended consequences of Generative AI. Mandatory skill sets: · Advanced programming skills in Python and fluency in data processing frameworks like Apache Spark. 
· Experience with machine learning and artificial intelligence frameworks, models, and libraries (TensorFlow, PyTorch, Scikit-learn, etc.). · Strong knowledge of foundational LLMs (OpenAI GPT-4o, o1, Claude, Gemini, etc.) as well as open-source models such as Llama 3.2 and Phi. · Proven track record with event-driven architectures and real-time data processing systems. · Familiarity with Azure DevOps and other LLMOps tools for operationalizing AI workflows. · Deep experience with Azure OpenAI Service and vector DBs, including API integrations, prompt engineering, and model fine-tuning, or equivalent technology on AWS/GCP. · Knowledge of containerization technologies such as Kubernetes and Docker. · Comprehensive understanding of data lakes and strategies for data management. · Expertise in LLM frameworks including LangChain, LlamaIndex, and Semantic Kernel. · Proficiency in cloud computing platforms such as Azure or AWS. · Exceptional leadership, problem-solving, and analytical abilities. · Superior communication and collaboration skills, with experience managing high-performing teams. · Ability to operate effectively in a dynamic, fast-paced environment. Preferred skill sets: · Experience with additional technologies such as Datadog and Splunk. · Programming languages like C#, R, and Scala. · Possession of relevant solution architecture certificates and continuous professional development in data engineering and Gen AI. 
Years of experience required: 0-1 Years Education qualification: · BE / B.Tech / MCA / M.Sc / M.E / M.Tech Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Bachelor in Business Administration, Master of Business Administration, Bachelor of Engineering Degrees/Field of Study preferred: Certifications (if blank, certifications not specified) Required Skills Java Optional Skills Accepting Feedback, Accepting Feedback, Accounting and Financial Reporting Standards, Active Listening, Artificial Intelligence (AI) Platform, Auditing, Auditing Methodologies, Business Process Improvement, Communication, Compliance Auditing, Corporate Governance, Data Analysis and Interpretation, Data Ingestion, Data Modeling, Data Quality, Data Security, Data Transformation, Data Visualization, Emotional Regulation, Empathy, Financial Accounting, Financial Audit, Financial Reporting, Financial Statement Analysis, Generally Accepted Accounting Principles (GAAP) {+ 19 more} Desired Languages (If blank, desired languages not specified) Travel Requirements Not Specified Available for Work Visa Sponsorship? No Government Clearance Required? No Job Posting End Date
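The vector-DB experience this posting asks for comes down to nearest-neighbour search over embeddings. A toy sketch with hand-made 3-dimensional vectors (real systems use model-generated embeddings and approximate indexes such as HNSW; the corpus and vectors below are invented for illustration):

```python
# Toy sketch of the similarity lookup a vector DB performs: embed the
# query, then return the document whose vector has the highest cosine
# similarity. Vectors here are hand-made stand-ins for real embeddings.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical document "embeddings" keyed by document title.
corpus = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.2],
    "api rate limits": [0.0, 0.2, 0.9],
}

def top_match(query_vec):
    # Brute-force scan; a vector DB replaces this with an ANN index.
    return max(corpus, key=lambda doc: cosine(query_vec, corpus[doc]))

assert top_match([0.85, 0.2, 0.05]) == "refund policy"
```

In a retrieval-augmented (RAG) setup, the matched document's text is then passed to the LLM as context for the prompt.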

Posted 1 day ago

Apply


7.0 years

5 - 10 Lacs

Hyderābād

On-site

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. Primary Responsibilities: Design, develop, and maintain scalable, robust, and secure backend services using Scala and Java Architect and implement microservices using the Play Framework Deploy and manage applications in Kubernetes on AWS Integrate backend services with PostgreSQL databases and data processing systems Utilize Datadog for monitoring, logging, and performance optimization Work with AWS services, including Elastic Beanstalk, for deployment and management of applications Use GitHub for version control and collaboration Lead and participate in the complete software development life cycle (SDLC), including planning, development, testing, and deployment Troubleshoot, debug, and upgrade existing software Document the backend process to aid in future upgrades and maintenance Perform code reviews and mentor junior developers Collaborate with cross-functional teams to define, design, and ship new features Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). 
The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or a related field 7+ years of experience in software development using Scala and Java Technical Skills: Experience with Datadog for monitoring, logging, and performance optimization Experience with PostgreSQL databases Experience with Agile methodologies (Scrum, Test Driven Development, Continuous Integration) Solid proficiency in Scala and Java programming languages Extensive experience with the Play Framework for building microservices Proficiency in deploying and managing applications in Kubernetes on AWS Proficiency in AWS services, including Elastic Beanstalk Familiarity with version control systems, particularly GitHub Solid understanding of data structures and algorithms Soft Skills: Excellent problem-solving and analytical skills Solid communication and collaboration skills Ability to work independently and as part of a team Leadership skills and experience mentoring junior developers At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.

Posted 1 day ago

Apply

5.0 years

15 - 20 Lacs

Thiruvananthapuram

On-site

Job Description: Designation: Senior Full Stack Developer (Python + Angular + GCP/AWS/Azure) Qualification: Any UG / PG Degree / Computer / Engineering Graduates Experience: Min. 5+ Years Gender: Male / Female Job Location: Trivandrum / Kochi (KERALA) Job Type: Full Time | Day Shift | Permanent Job | Sat & Sun Week Off Working Time: 12:01 PM to 9:00 PM Project: European client | Shift: Mid Shift (12:01 PM to 9:00 PM) | WFO Salary: Rs.15,00,000 to 20,00,000 per annum Introduction We are looking for a Senior Full Stack (Python & Angular) Developer who will take ownership of building and maintaining complex backend systems, APIs, and applications using Python, with Angular on the frontend. Candidates with BFSI payment-system integration experience are preferred. Responsibilities include: Design, develop, and maintain backend applications, APIs, and services using Python. Write clean, maintainable, and scalable code following industry standards and best practices. Optimize application performance and ensure high availability and scalability. Review code and mentor junior developers to ensure code quality and foster knowledge sharing. Implement unit and integration tests to ensure application robustness. Set up and manage CI/CD pipelines using tools like Jenkins, GitLab CI, or CircleCI. Collaborate with DevOps to deploy applications on cloud platforms, preferably Google Cloud Platform (GCP). Design and build cloud-native applications using APIs, containers, and Kubernetes. Leverage GCP services to develop scalable and efficient solutions. Ensure application security, manage access controls, and comply with data privacy regulations. Work closely with frontend developers, DevOps engineers, and product managers for seamless project delivery. Design, manage, and optimize relational and NoSQL databases (PostgreSQL, MySQL, MongoDB). Monitor application performance using tools like Prometheus, Grafana, or Datadog. Build dynamic, responsive UIs using Angular and JavaScript. 
Develop and maintain reusable Angular components in collaboration with UX/UI teams. Primary Skills: 5+ years of experience as a Python developer, with a focus on product development (BE + FE development). Hands-on experience with Angular. Proven experience in designing and deploying scalable applications and microservices. App integration experience is preferred. Python – FastAPI (Flask/Django) API development (RESTful services) Cloud platforms – Google Cloud Platform (GCP) preferred. Familiarity with database management systems – PostgreSQL, MySQL, MongoDB – and ORMs (e.g., SQLAlchemy, Django ORM). Knowledge of CI/CD pipelines – Jenkins, GitLab CI, CircleCI Frontend development – JavaScript, Angular Code versioning – Git Testing – unit & integration testing Strong understanding of security principles, authentication (OAuth2, JWT), and data protection. Secondary Skills: Monitoring tools – Prometheus, Grafana, Datadog Security and compliance standards – GDPR, PCI, SOC 2 DevOps collaboration UX/UI collaboration for Angular components Experience with asynchronous programming (e.g., asyncio, aiohttp). Experience with big data technologies like Spark or Hadoop. Experience with machine learning libraries (e.g., TensorFlow, PyTorch) is a plus. Job Types: Full-time, Permanent Pay: ₹1,500,000.00 - ₹2,000,000.00 per year Benefits: Health insurance Leave encashment Paid sick time Paid time off Provident Fund Schedule: Day shift Monday to Friday Supplemental Pay: Performance bonus Yearly bonus Work Location: In person

Posted 1 day ago

Apply

Exploring Datadog Jobs in India

Datadog, a popular monitoring and analytics platform, has been gaining traction in the tech industry in India. With the increasing demand for professionals skilled in Datadog, job opportunities are on the rise. In this article, we will explore the Datadog job market in India and provide valuable insights for job seekers looking to pursue a career in this field.

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Mumbai
  5. Delhi

These cities are known for their thriving tech industries and are actively hiring for Datadog roles.

Average Salary Range

The average salary range for Datadog professionals in India varies based on experience levels. Entry-level positions can expect a salary ranging from INR 4-6 lakhs per annum, while experienced professionals can earn upwards of INR 15 lakhs per annum.

Career Path

A typical career path in Datadog may include roles such as Datadog Administrator, Datadog Developer, Datadog Consultant, and Datadog Architect. Progression usually follows a path from Junior Datadog Developer to Senior Datadog Developer, eventually leading to roles like Datadog Tech Lead or Datadog Manager.

Related Skills

In addition to proficiency in Datadog, professionals in this field are often expected to have skills in monitoring and analytics tools, cloud computing (AWS, Azure, GCP), scripting languages (Python, Bash), and knowledge of IT infrastructure.

Interview Questions

  • What is Datadog and how does it differ from other monitoring tools? (basic)
  • How do you set up custom metrics in Datadog? (medium)
  • Explain how you would create a dashboard in Datadog to monitor server performance. (medium)
  • What are some key features of Datadog APM (Application Performance Monitoring)? (advanced)
  • Can you explain how Datadog integrates with Kubernetes for monitoring? (medium)
  • Describe how you would troubleshoot an alert in Datadog. (medium)
  • How does Datadog handle metric aggregation and visualization? (advanced)
  • What are some best practices for using Datadog to monitor cloud infrastructure? (medium)
  • Explain the difference between Datadog Logs and Datadog APM. (basic)
  • How would you set up alerts in Datadog for critical system metrics? (medium)
  • Describe a challenging problem you faced while using Datadog and how you resolved it. (advanced)
  • What is anomaly detection in Datadog and how does it work? (medium)
  • How does Datadog handle data retention and storage? (medium)
  • What are some common integrations with Datadog that you have worked with? (basic)
  • Can you explain how Datadog handles tracing for distributed systems? (advanced)
  • Describe a recent project where you used Datadog to improve system performance. (medium)
  • How do you ensure data security and privacy when using Datadog? (medium)
  • What are some limitations of Datadog that you have encountered in your experience? (medium)
  • Explain how you would use Datadog to monitor network traffic and performance. (medium)
  • How does Datadog handle auto-discovery of services and applications for monitoring? (medium)
  • What are some key metrics you would monitor for a web application using Datadog? (basic)
  • Describe a scenario where you had to scale monitoring infrastructure using Datadog. (advanced)
  • How would you implement anomaly detection for a specific metric in Datadog? (medium)
  • What are some best practices for setting up alerts and notifications in Datadog? (medium)
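
Several of the questions above (custom metrics, alerting on system metrics) come down to knowing how metrics reach Datadog in the first place. The sketch below formats and sends a metric in the DogStatsD wire format that the Datadog Agent listens for on UDP port 8125; the metric name and tags are illustrative, and production code would normally use the official `datadog` Python client rather than raw sockets.

```python
import socket

def dogstatsd_datagram(name, value, metric_type="g", tags=None):
    """Format one metric in the DogStatsD wire format:
    <name>:<value>|<type>[|#tag1:val1,tag2:val2], e.g. 'app.latency:12|g|#env:prod'."""
    msg = f"{name}:{value}|{metric_type}"
    if tags:
        msg += "|#" + ",".join(tags)
    return msg

def send_metric(name, value, metric_type="g", tags=None, host="127.0.0.1", port=8125):
    """Fire-and-forget one UDP datagram to a local Datadog Agent."""
    payload = dogstatsd_datagram(name, value, metric_type, tags).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (host, port))

# Example (hypothetical metric): report queue depth as a gauge
# send_metric("app.queue.depth", 42, "g", tags=["env:staging", "service:worker"])
```

Because DogStatsD is plain UDP, submission is non-blocking and a missing Agent simply drops the datagram, which is why the protocol is considered safe to call from application hot paths.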

Closing Remark

With the increasing demand for Datadog professionals in India, now is a great time to explore job opportunities in this field. By honing your skills, preparing for interviews, and showcasing your expertise, you can confidently apply for Datadog roles and advance your career in the tech industry. Good luck!
