8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
This role is for one of Weekday's clients
Min Experience: 8 years
Location: Bengaluru
Job Type: Full-time

Requirements
As an SDE-3 in AI/ML, you will:
- Translate business asks into technical requirements, solutions, architectures, and implementations
- Define clear problem statements and technical requirements by aligning business goals with AI research objectives
- Lead the end-to-end design, prototyping, and implementation of AI systems, ensuring they meet performance, scalability, and reliability targets
- Architect solutions for GenAI and LLM integrations, including prompt engineering, context management, and agentic workflows
- Develop and maintain production-grade code with high test coverage and robust CI/CD pipelines on AWS, Kubernetes, and cloud-native infrastructure
- Establish and maintain post-deployment monitoring, performance testing, and alerting frameworks to ensure performance and quality SLAs are met
- Conduct thorough design and code reviews, uphold best practices, and drive technical excellence across the team
- Mentor and guide junior engineers and interns, fostering a culture of continuous learning and innovation
- Collaborate closely with product management, QA, data engineering, DevOps, and customer-facing teams to deliver cohesive AI-powered product features

Key Responsibilities
Problem Definition & Requirements
- Translate business use cases into detailed AI/ML problem statements and success metrics
- Gather and document functional and non-functional requirements, ensuring traceability throughout the development lifecycle
Architecture & Prototyping
- Design end-to-end architectures for GenAI and LLM solutions, including context orchestration, memory modules, and tool integrations
- Build rapid prototypes to validate feasibility, iterate on model choices, and benchmark different frameworks and vendors
Development & Productionization
- Write clean, maintainable code in Python, Java, or Go, following software engineering best practices
- Implement automated testing (unit, integration, and performance tests) and CI/CD pipelines for seamless deployments
- Optimize model inference performance and scale services using containerization (Docker) and orchestration (Kubernetes)
Post-Deployment Monitoring
- Define and implement monitoring dashboards and alerting for model drift, latency, and throughput
- Conduct regular performance tuning and cost analysis to maintain operational efficiency
Mentorship & Collaboration
- Mentor SDE-1/SDE-2 engineers and interns, providing technical guidance and career development support
- Lead design discussions, pair-programming sessions, and brown-bag talks on emerging AI/ML topics
- Work cross-functionally with product, QA, data engineering, and DevOps to align on delivery timelines and quality goals

Required Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- 8+ years of professional software development experience, with at least 3 years focused on AI/ML systems
- Proven track record of architecting and deploying production AI applications at scale
- Strong programming skills in Python and one or more of Java, Go, or C++
- Hands-on experience with cloud platforms (AWS, GCP, or Azure) and containerized deployments
- Deep understanding of machine learning algorithms, LLM architectures, and prompt engineering
- Expertise in CI/CD, automated testing frameworks, and MLOps best practices
- Excellent written and verbal communication skills, with the ability to distill complex AI concepts for diverse audiences

Preferred Experience
- Prior experience building agentic AI or multi-step workflow systems (using tools like LangGraph, CrewAI, or similar)
- Familiarity with open-source LLMs (e.g., models hosted on Hugging Face) and custom fine-tuning
- Familiarity with ASR (speech-to-text), TTS (text-to-speech), and other multi-modal systems
- Experience with monitoring and observability tools (e.g., Datadog, Prometheus, Grafana)
- Publications or patents in AI/ML, or related conference presentations
- Knowledge of GenAI evaluation frameworks (e.g., Weights & Biases, CometML)
- Proven experience designing, implementing, and rigorously testing AI-driven voice agents, integrating with platforms such as Google Dialogflow, Amazon Lex, and Twilio Autopilot, and ensuring high performance and reliability

What we offer
- Opportunity to work at the forefront of GenAI, LLMs, and agentic AI in a fast-growing SaaS environment
- Collaborative, inclusive culture focused on innovation, continuous learning, and professional growth
- Competitive compensation, comprehensive benefits, and equity options
- Flexible work arrangements and support for professional development
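The "agentic workflows" and "tool integrations" bullets above are abstract; the toy below sketches the control flow such systems share — a plan of steps, each dispatching to a registered tool, with results collected in order. The tool names and the plan format are invented for illustration and do not come from any real agent framework.

```python
# Minimal sketch of an agentic tool-dispatch loop: a plan is a list of
# (tool_name, args) steps; each step calls a registered tool, and the
# trace of results is returned for the next stage (or an LLM) to inspect.

def add(a, b):
    return a + b

def upper(text):
    return text.upper()

# Tool registry: in a real system these would be API clients, retrievers, etc.
TOOLS = {"add": add, "upper": upper}

def run_plan(plan):
    """Execute each (tool_name, args) step in order; return the result trace."""
    trace = []
    for tool_name, args in plan:
        tool = TOOLS[tool_name]          # look up the requested tool
        result = tool(*args)             # invoke it with the planned arguments
        trace.append((tool_name, result))
    return trace

trace = run_plan([("add", (2, 3)), ("upper", ("agent",))])
print(trace)  # [('add', 5), ('upper', 'AGENT')]
```

Frameworks like LangGraph add state, branching, and LLM-driven planning on top of this basic dispatch loop, but the loop itself is the core of most agentic designs.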
Posted 1 week ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
SDET - Cypress Automation Engineer

Roles & Responsibilities
- Test rich UI applications and implement test automation for them
- Automate test cases using Cypress
- Create test cases and test plans, and define test strategies
- Automate API tests using Python

Skills
- 4+ years of experience in automation and manual testing
- Good understanding of web application testing and API testing
- Experience with the Cypress or Playwright framework using JavaScript
- Exposure to Python and automation using any Python framework
- In-depth knowledge of a variety of testing techniques and methodologies
- Experience setting up a CI solution using GitHub Actions
- Understanding of Docker containers
- JMeter experience, or load-testing experience in general using another load-testing tool
- API automation experience, including experience using Postman
- Expertise in agile testing methodology and ALM tools such as JIRA
- Excellent organization, communication, and interpersonal skills
- Strong analytical and problem-solving skills, with the ability to work in an unstructured, fast-paced environment

Preferred Skills
- Experience with Datadog or a similar tool
- Page performance analysis with WebPageTest or a similar tool
- API scripting: ability to write small test scripts as needed (Python or curl experience)
- Any AI-assisted automation creation, or a UI approach that helps manual testers create automation tests
- Experience working with manual teams to debug scripts, train manual engineers to run the automation, review automation runs, and make recommendations to project teams
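The posting pairs Cypress UI automation with API automation in Python. A common Python pattern is to validate the shape of a JSON payload before asserting on values; the sketch below uses a canned payload as a stand-in for a real HTTP response, and the field names are invented for illustration.

```python
# Hedged sketch of an API schema check: verify required fields and types in a
# JSON payload before making value assertions. In a real suite the payload
# would come from an HTTP call; here it is a canned dict.

REQUIRED = {"id": int, "name": str, "active": bool}

def validate(payload):
    """Return a list of schema violations (an empty list means it passes)."""
    errors = []
    for field, expected_type in REQUIRED.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            errors.append(f"wrong type for {field}")
    return errors

good = {"id": 1, "name": "widget", "active": True}
bad = {"id": "1", "name": "widget"}
print(validate(good))  # []
print(validate(bad))   # ['wrong type for id', 'missing field: active']
```

Wrapping each call in a check like this makes API test failures report *why* a response is malformed rather than failing on an opaque KeyError.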
Posted 1 week ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
About The Role
Are you a detail-oriented problem solver with a keen analytical mind? Do you enjoy unraveling complex systems and ensuring software runs flawlessly? If so, we have an exciting challenge for you! We are looking for a QA Engineer to join our team and play a key role in maintaining the quality and performance of our Driivz products, a leading platform in the electric vehicle charging ecosystem. As part of our team, you'll be responsible for identifying issues, ensuring seamless functionality, and helping to shape the future of sustainable mobility.

Your Contribution
- Respond promptly to customer inquiries across communication channels (e.g., ticketing system, calls)
- Understand and troubleshoot all reported bugs and incidents, provide feedback to the customer, and work closely with Driivz internal teams (R&D, Product, CSM)
- Escalate issues in a timely manner to a higher support level when needed
- Maintain a positive and professional attitude toward clients
- Learn our product inside out to address technical issues in a timely and professional manner

Must Have
- Professional working proficiency in English (required)
- Working knowledge of Linux OS
- Experience with cloud-based services (e.g., AWS, GCP)
- Knowledge of and previous experience with SQL
- Experience supporting remote devices (e.g., network access and configuration, device setup, work models)
- Experience reproducing customers' issues and leading debug sessions with customers or R&D
- Proficiency with monitoring tools such as Datadog, Kibana, or Prometheus
- Work experience in customer support in the tech industry (min. 3 years)
- Experience working with offshore teams (min. 2 years)

Considered An Advantage
- Bachelor's degree in Computer Science or Engineering
- Knowledge of and previous experience with the Zendesk ticketing system

Who Is Gilbarco Veeder-Root
Gilbarco Veeder-Root, a Vontier company, is the worldwide technology leader for retail and commercial fueling operations, offering the broadest range of integrated solutions from the forecourt to the convenience store and head office. For over 150 years, Gilbarco has earned the trust of its customers by providing long-term partnership, uncompromising support, and proven reliability. Major product lines include fuel dispensers, tank gauges, and fleet management systems.

Who Is Vontier
Vontier (NYSE: VNT) is a global industrial technology company uniting productivity, automation, and multi-energy technologies to meet the needs of a rapidly evolving, more connected mobility ecosystem. Leveraging leading market positions, decades of domain expertise, and unparalleled portfolio breadth, Vontier enables the way the world moves, delivering smart, safe, and sustainable solutions to our customers and the planet. Vontier has a culture of continuous improvement and innovation built upon the foundation of the Vontier Business System and embraced by colleagues worldwide. Additional information about Vontier is available on the Company's website at www.vontier.com.

At Vontier, we empower you to steer your career in the direction of success with a dynamic, innovative, and inclusive environment. Our commitment to personal growth, work-life balance, and collaboration fuels a culture where your contributions drive meaningful change. We provide the roadmap for continuous learning, allowing creativity to flourish and ideas to accelerate into impactful solutions that contribute to a sustainable future. Join our community of passionate people who work together to navigate challenges and seize opportunities. At Vontier, you are not on this journey alone; we are dedicated to equipping you with the tools and support needed to fuel your innovation, lead with impact, and thrive both personally and professionally. Together, let's enable the way the world moves!
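The SQL requirement above typically means triage queries against operational data. The sketch below uses an in-memory SQLite database as a stand-in; the charging-session schema, station names, and values are invented for illustration and are not Driivz's actual data model.

```python
import sqlite3

# Illustrative only: a throwaway SQLite table standing in for the kind of
# SQL triage work the posting describes. Schema and data are assumptions.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sessions (station TEXT, status TEXT, kwh REAL)")
conn.executemany(
    "INSERT INTO sessions VALUES (?, ?, ?)",
    [("st-1", "ok", 12.5), ("st-1", "failed", 0.0), ("st-2", "ok", 7.0)],
)

# A typical support query: failure count per charging station.
rows = conn.execute(
    "SELECT station, COUNT(*) FROM sessions "
    "WHERE status = 'failed' GROUP BY station"
).fetchall()
print(rows)  # [('st-1', 1)]
```

Queries of this shape (filter, group, count) cover most day-to-day incident triage before escalating to R&D.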
Posted 1 week ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
TCS has been a great pioneer in feeding the fire of young techies like you. We are a global leader in the technology arena, and there's nothing that can stop us from growing together.

What We Are Looking For
Role: Performance Monitoring (LoadRunner/JMeter)
Experience Range: 4 - 8 Years
Location: Chennai/Pune/Hyderabad (candidates should come to the office for the walk-in drive; face-to-face interview)
Weekend Walk-in Drive: 14-June-25 (Saturday)
Timing: 9:30 AM to 12:30 PM

Must Have
- Good experience using the performance test tool LoadRunner and an understanding of APM tools such as AppDynamics, Dynatrace, or New Relic
- Good hands-on experience with the Web-HTTP, Java Vuser, and Web Service protocols
- Ability to work independently in the requirement analysis, design, execution, and result analysis phases
- Develop customized code in Java and C for optimizing and enhancing VuGen scripts
- Analyze test results and coordinate with development teams for issue triaging and bug fixes
- Good understanding of OS internals, file systems, disk/storage, networking protocols, and newer technologies such as cloud infrastructure
- Monitor and extract production performance statistics, and apply the same model in test environments at higher load to uncover performance issues
- Experience monitoring databases and highlighting performance issues
- Good to have: experience working on finance and banking domain projects

Technical Skills
- LoadRunner: HTTP/HTML, Web Services, and Java protocols
- Monitoring tools: AppDynamics, Dynatrace, CloudWatch, Splunk, Kibana, Grafana, Datadog
- Database: SQL or MongoDB
- Unix basics
- Good understanding of cloud concepts (AWS/Azure)

Interested candidates, please share your CV with c.nayana@tcs.com with the subject "Performance Monitoring (LoadRunner/JMeter)" for further discussion.
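The "result analysis phase" above boils down to summarizing raw response times the way a LoadRunner or JMeter report does. The sketch below computes a nearest-rank percentile from sample latencies; the sample values are invented for illustration.

```python
import math

# Sketch of the result-analysis step: compute a latency percentile from raw
# response times, as a load-test summary report would.

def percentile(samples, pct):
    """Nearest-rank percentile: the smallest sample >= pct% of all samples."""
    ordered = sorted(samples)
    rank = math.ceil(pct / 100 * len(ordered))  # 1-based rank into the sort
    return ordered[rank - 1]

# Ten illustrative response times in milliseconds from one test run.
latencies_ms = [120, 85, 430, 95, 110, 150, 900, 105, 130, 140]
print(percentile(latencies_ms, 50))  # 120 (median by nearest rank)
print(percentile(latencies_ms, 90))  # 430
```

Percentiles, not averages, are what SLAs are usually written against: the single 900 ms outlier here barely moves the mean but dominates the tail.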
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
Gurgaon
On-site
Performance Quality Engineer
Gurgaon, India; Hyderabad, India
Information Technology
316150

Job Description
About The Role: Grade Level (for internal use): 09
Role: Performance Quality Engineer

The Team
The Quality Engineering team works in partnership with other functions in Technology and the business to deliver quality products by providing software testing services and quality assurance that continuously improve our customers' ability to succeed. The team independently drives all decisions and is responsible for the architecture, design, and quick turnaround in the development of our products with high quality. The team is located globally.

The Impact
You will ensure the quality of our deliverables meets and exceeds the expectations of all stakeholders, and evangelize the established quality standards and processes. Your challenge will be reducing the time to market for products without compromising quality, by leveraging technology and innovation. These products are directly associated with revenue growth and operations enablement. You strive to achieve personal objectives and contribute to the achievement of team objectives, working on problems of varying scope where analysis of situations and/or data requires a review of a variety of factors.

What's In It For You
Do you love testing enterprise-scale applications that serve a large customer base with growing demand and usage? Be part of a successful team that delivers top-priority projects directly contributing to the company's strategy. You will use a wide range of technologies and have the opportunity to interact with different teams internally. You will also get plenty of learning and skill-building opportunities through participation in innovation projects, training, and knowledge sharing. You will have the opportunity to own and drive a project end to end and collaborate with developers, business analysts, and product managers who are experts in their domains, which can help you build multiple skill sets.

Responsibilities
- Understand application architecture and system environments (e.g., shared resources, components and services, CPU, memory, storage, network) to troubleshoot production performance issues
- Perform scalability and capacity planning
- Work with multiple product teams to design, create, execute, and analyze performance tests, and recommend performance tuning
- Support remediation of performance bottlenecks in application front-end and database layers
- Drive industry best practices in methodologies and standards of performance engineering, quality, and the CI/CD process
- Understand user behaviors and analytics models; experience using Kibana and Google Analytics
- Ensure optimally performing production applications by establishing application and transaction SLAs for performance, implementing proactive application monitoring, alarming, and reporting, and ensuring adherence to and measurement against defined SLAs
- Analyze, design, and develop performance specifications and scripts based on workflows
- Interpret network/system diagrams and performance test results, and identify improvements
- Leverage tools and frameworks to develop performance scripts with quality code to simplify testing scenarios
- Focus on building efficient solutions for web, services/API, database, and mobile performance-testing requirements
- Deliver projects in the performance-testing space and ensure delivery efficiency
- Define testing methodologies and implement tooling best practices for continuous improvement and efficiency
- Understand business scenarios in depth to define workload modelling for different scenarios
- Complement the architecture community by providing inputs and pursuing the implementations suggested for optimization
- Manage testing for a highly integrated system with multiple dependencies and moving parts
- Cooperate and collaborate actively with teams at various geographic locations
- Provide prompt response and support in resolving critical issues (along with the development team); may require after-hours/weekend work for production implementations

What We're Looking For
- Bachelor's/PG degree in Computer Science, Information Systems, or equivalent
- 3-5 years of experience in performance testing/engineering or development, with a good understanding of performance-testing concepts
- Experience with performance-testing tools such as Micro Focus StormRunner, LoadRunner/Performance Center, and JMeter
- Protocols: Web (HTTP/HTML), Ajax TruClient, Citrix, .NET
- Programming languages: Java, C#, .NET, Python
- Working experience with CI/CD for performance testing
- Debugging tools: DevTools, network sniffers, Fiddler, etc.
- Experience with monitoring, profiling, and tuning tools, e.g., CA Wily Introscope, AppDynamics, Dynatrace, Datadog, Splunk
- Experience gathering non-functional requirements (NFRs), defining a strategy to achieve them, and developing test plans
- Experience testing and optimizing high-volume web and batch-based transactional enterprise applications
- Strong communication skills and the ability to produce clear, concise, and detailed documentation
- Excellent problem-solving, analytical, and technical troubleshooting skills
- Experience refactoring performance test suites as necessary

Preferred Qualifications
- Bachelor's or higher degree in a technology-related field

About S&P Global Market Intelligence
At S&P Global Market Intelligence, a division of S&P Global, we understand the importance of accurate, deep, and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction.
For more information, visit www.spglobal.com/marketintelligence.

What's In It For You
Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all: from finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global. Our benefits include:
- Health & Wellness: health care coverage designed for the mind and body
- Flexible Downtime: generous time off helps keep you energized for your time on
- Continuous Learning: access a wealth of resources to grow your career and learn valuable new skills
- Invest in Your Future: secure your financial future through competitive pay, retirement planning, a continuing-education program with a company-matched student loan contribution, and financial wellness programs
- Family Friendly Perks: it's not just about you; S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families
- Beyond the Basics: from retail discounts to referral incentive awards, small perks can make a big difference

For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global
At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Equal Opportunity Employer
S&P Global is an equal opportunity employer, and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

IFTECH202.1 - Middle Professional Tier I (EEO Job Group)
Job ID: 316150
Posted On: 2025-06-04
Location: Gurgaon, Haryana, India
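The "workload modelling" responsibility in the posting above is usually Little's Law arithmetic: the number of concurrent virtual users needed equals target throughput times the time each user spends per iteration (response time plus think time). All numbers below are made up for illustration.

```python
# Workload-modelling sketch via Little's Law:
#   concurrency = throughput * (response time + think time)
# Figures are illustrative, not from any real test plan.

target_tps = 200        # requests per second the test must sustain
avg_response_s = 0.5    # assumed average response time, in seconds
think_time_s = 1.5      # scripted pause between a virtual user's requests

# Each virtual user completes one request every (0.5 + 1.5) = 2.0 seconds,
# i.e., 0.5 requests/second, so sustaining 200 TPS needs 400 users.
vusers = target_tps * (avg_response_s + think_time_s)
print(vusers)  # 400.0
```

Running the model in reverse is equally useful: given a fixed virtual-user license count, it bounds the throughput a test can generate.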
Posted 1 week ago
3.0 years
1 - 5 Lacs
Bengaluru
On-site
There’s nothing more exciting than being at the center of a rapidly growing field in technology and applying your skill set to drive innovation and modernize the world's most complex and mission-critical systems. As a Site Reliability Engineer III at JPMorgan Chase within Corporate Technology, you will solve complex and broad business problems with simple and straightforward solutions. Through code and cloud infrastructure, you will configure, maintain, monitor, and optimize applications and their associated infrastructure to independently decompose and iteratively improve on existing solutions. You are a significant contributor to your team, sharing your knowledge of end-to-end operations, availability, reliability, and scalability of your application or platform.

Job Responsibilities
- Guide and assist others in building appropriate-level designs and gaining consensus from peers where appropriate
- Collaborate with other software engineers and teams to design and implement deployment approaches using automated continuous integration and continuous delivery pipelines
- Collaborate with other software engineers and teams to design, develop, test, and implement availability, reliability, and scalability solutions in their applications
- Implement infrastructure, configuration, and network as code for the applications and platforms in your remit
- Collaborate with technical experts, key stakeholders, and team members to resolve complex problems
- Understand service level indicators and utilize service level objectives to proactively resolve issues before they impact customers
- Support the adoption of site reliability engineering best practices within your team

Required Qualifications, Capabilities, And Skills
- Formal training or certification in site reliability concepts and 3+ years of applied experience
- Knowledge of one or more general-purpose programming languages or automation scripting (Python, UNIX shell scripting, etc.)
- Experience supporting public/private cloud-based applications
- Experience in observability, including white-box and black-box monitoring, service level objective alerting, and telemetry collection using tools such as Grafana, Dynatrace, Prometheus, Datadog, and Splunk
- Experience with continuous integration and continuous delivery tools like Jenkins, GitLab, or Terraform
- Intermediate understanding of at least one programming language such as Python or Java; able to dive into code and advise developers on performance optimization and error fixing
- Knowledge of source code management tools like Git and Bitbucket, and CI/CD tools like Jenkins
- Ability to contribute to large, collaborative teams by presenting information in a logical and timely manner with compelling language and limited supervision
- Ability to proactively recognize roadblocks and an interest in learning technology that facilitates innovation
- Ability to identify new technologies and relevant solutions to ensure design constraints are met by the software team
- Ability to initiate and implement ideas to solve business problems and shift left toward SRE

Preferred Qualifications, Capabilities, And Skills
- Familiarity with containers and container orchestration such as Kubernetes and ECS
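The SLI/SLO bullets above rest on simple error-budget arithmetic: a 99.9% availability objective allows 0.1% of requests to fail, and alerting compares observed failures against that allowance. The request counts below are invented for illustration.

```python
# Sketch of the SLI/SLO arithmetic behind "service level objective alerting":
# how much of the monthly error budget has a batch of failures consumed?
# All figures are illustrative.

slo = 0.999                      # 99.9% availability objective
total_requests = 10_000_000      # requests served this month
failed_requests = 4_200          # requests that violated the SLI

error_budget = (1 - slo) * total_requests      # ~10,000 allowed failures
budget_spent = failed_requests / error_budget  # fraction of budget consumed
print(round(budget_spent, 2))  # 0.42
```

Burn-rate alerts extend this one step: they fire when the budget is being consumed faster than it would be if failures were spread evenly over the SLO window, which catches incidents long before the budget is exhausted.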
Posted 1 week ago
3.0 years
0 Lacs
Bengaluru
On-site
JOB DESCRIPTION
There’s nothing more exciting than being at the center of a rapidly growing field in technology and applying your skill set to drive innovation and modernize the world's most complex and mission-critical systems. As a Site Reliability Engineer III at JPMorgan Chase within Corporate Technology, you will solve complex and broad business problems with simple and straightforward solutions. Through code and cloud infrastructure, you will configure, maintain, monitor, and optimize applications and their associated infrastructure to independently decompose and iteratively improve on existing solutions. You are a significant contributor to your team, sharing your knowledge of end-to-end operations, availability, reliability, and scalability of your application or platform.

Job Responsibilities
- Write high-quality, maintainable, and well-tested software to develop reliable and repeatable solutions to complex problems
- Collaborate with product development teams to design, implement, and manage CI/CD pipelines that support reliable, scalable, and efficient software delivery
- Partner with product development teams to capture and define meaningful service level indicators (SLIs) and service level objectives (SLOs)
- Develop and maintain monitoring, alerting, and tracing systems that provide comprehensive visibility into system health and performance
- Contribute to design reviews to evaluate and strengthen architectural resilience, fault tolerance, and scalability
- Uphold incident-response management best practices; champion blameless postmortems and continuous improvement
- Debug, track, and resolve complex technical issues to maintain system integrity and performance
- Champion and drive the adoption of reliability and resiliency best practices
- Support the adoption of site reliability engineering best practices within your team

Required Qualifications, Capabilities, And Skills
- Formal training or certification in site reliability engineering concepts and 3+ years of applied experience
- Proficiency in site reliability culture and principles, and familiarity with how to implement site reliability within an application or platform
- Proficiency in at least one programming language such as Python, Java/Spring Boot, or Go
- Experience in observability, including white-box and black-box monitoring, service level objective alerting, and telemetry collection using tools such as Grafana, Dynatrace, Prometheus, Datadog, and Splunk
- Experience with continuous integration and continuous delivery tools like Jenkins, GitLab, or Terraform
- Familiarity with containers and container orchestration such as ECS, Kubernetes, and Docker
- Solid understanding of networking concepts, including TCP/IP, routing, firewalls, and DNS
- In-depth knowledge of Unix/Linux, including performance tuning, process and memory management, and filesystem operations
- Ability to contribute to large, collaborative teams by presenting information in a logical and timely manner with compelling language and limited supervision
- Ability to proactively recognize roadblocks and an interest in learning technology that facilitates innovation
- Ability to identify new technologies and relevant solutions to ensure design constraints are met by the software team

Preferred Qualifications, Capabilities, And Skills
- Practical experience building, supporting, and troubleshooting JVM-based applications, using tools like JConsole or VisualVM, and supporting SQL and in-memory database technologies
- Experience in the financial/fintech industry, with knowledge of performance and chaos-testing tools such as Gremlin, Chaos Mesh, and LitmusChaos
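One concrete pattern behind the "fault tolerance" and "resiliency" bullets above is retry with exponential backoff. The sketch below records the delays it would sleep instead of actually sleeping, so the schedule is visible and deterministic; the flaky service is simulated in-process.

```python
# Resiliency sketch: retry a flaky call with exponential backoff. For clarity
# the function records the backoff delays rather than sleeping through them.

def with_retries(fn, attempts=4, base_delay=0.1):
    """Call fn, retrying on RuntimeError with delays base_delay * 2**n.
    Returns (result, delays) so the backoff schedule can be inspected."""
    delays = []
    for n in range(attempts):
        try:
            return fn(), delays
        except RuntimeError:
            if n == attempts - 1:
                raise                      # budget exhausted: propagate
            delays.append(base_delay * 2 ** n)

calls = {"count": 0}

def flaky():
    """Simulated dependency that fails twice, then succeeds."""
    calls["count"] += 1
    if calls["count"] < 3:
        raise RuntimeError("transient error")
    return "ok"

result, delays = with_retries(flaky)
print(result, delays)  # ok [0.1, 0.2]
```

Production versions add jitter to the delays and cap total retry time, so synchronized clients do not hammer a recovering service in lockstep.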
ABOUT US
JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world’s most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management.

We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants’ and employees’ religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation.

ABOUT THE TEAM
Our professionals in our Corporate Functions cover a diverse range of areas from finance and risk to human resources and marketing. Our corporate teams are an essential part of our company, ensuring that we’re setting our businesses, clients, customers and employees up for success.
Posted 1 week ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description We’re looking for a Data Engineer II to join Procore’s Product & Technology Team. Procore software solutions aim to improve the lives of everyone in construction and the people within Product & Technology are the driving force behind our innovative, top-rated global platform. We’re a customer-centric group that encompasses engineering, product, product design and data, security and business systems. Data engineers are responsible for implementing critical projects including the design and operation of Procore's streaming and batching data processing pipelines and creating domain benchmarks and insights etc. We're looking for a motivated engineer with at least 2 years of experience. You must be comfortable operating in a high autonomy environment and deploying technologies that are new to our organization. drive solutions to wide-ranging data engineering and infrastructure challenges for product and internal operations. You will partner with world-class developers, engineers, architects, and data scientists to drive thinking, provide technical leadership, and collaborate in defining best practices around data engineering. You will also work alongside local product management, engineering, and research teams to develop innovative solutions that will influence our product line. 
Examples of our projects: An ETL pipeline for our data lake consisting of batch processing, orchestration with Airflow, monitoring with Datadog, and alerting with Slack A Maven package used by all Product Dev teams for building Kafka consumers, with built-in support for configuration, error reporting, monitoring, deserialization, gRPC, Spark, Flink, and Kubernetes A multi-stage data lake including landing, process, and serving zones Some Of Your Responsibilities Include Partner with teams on modeling and analysis problems – from transforming problem statements into analysis problems, to working through data modeling and engineering, to analysis and communication of results Conduct code reviews and uphold design best practices Use the experience and expertise gained in this space to influence our product roadmap, potentially working with the prototype engineering team to add capabilities to our products that solve more of these problems Who You Are... 2+ years of experience in a Data/ML Engineer role Degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field, or equivalent relevant experience Expertise building data pipelines (real-time and batch) on large, complex datasets using Spark or Flink Experience with AWS services including EC2, S3, Glue, EMR, RDS, Snowflake, Elasticsearch, Cassandra and data pipeline/streaming tools (Airflow, NiFi, Kafka) Experience building and optimizing data pipelines, architectures, and data sets A successful history of manipulating, processing, and extracting value from large disconnected datasets Deep knowledge of stream processing using Kafka and highly scalable ‘big data’ data stores Team player with experience supporting and working with cross-functional teams in a dynamic environment 
End-to-end data quality control and automated testing experience Preferred Experience with unstructured data (PDF, contract, plan, image) Data transformation (quality, extraction) Experience working within a team handling the full data pipeline, from extraction to data warehouse Additional Information Perks & Benefits At Procore, we invest in our employees and provide a full range of benefits and perks to help you grow and thrive. From generous paid time off and healthcare coverage to career enrichment and development programs, learn more details about what we offer and how we empower you to be your best. About Us Procore Technologies is building the software that builds the world. We provide cloud-based construction management software that helps clients more efficiently build skyscrapers, hospitals, retail centers, airports, housing complexes, and more. At Procore, we have worked hard to create and maintain a culture where you can own your work and are encouraged and given resources to try new ideas. Check us out on Glassdoor to see what others are saying about working at Procore. We are an equal-opportunity employer and welcome builders of all backgrounds. We thrive in a dynamic and inclusive environment. We do not tolerate discrimination against candidates or employees on the basis of gender, sex, national origin, civil status, family status, sexual orientation, religion, age, disability, race, traveler community, status as a protected veteran or any other classification protected by law. If you'd like to stay in touch and be the first to hear about new roles at Procore, join our Talent Community. Alternative methods of applying for employment are available to individuals unable to submit an application through this site because of a disability. Contact our benefits team here to discuss reasonable accommodations.
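The multi-stage data lake described in the listing above (landing, process, and serving zones) can be sketched in a few lines. This is a minimal, dependency-free illustration of the pattern only: zone functions, field names, and record shapes are hypothetical, and a production pipeline would run these stages as Spark/Flink jobs orchestrated by Airflow rather than as plain functions.

```python
# Hypothetical sketch of a landing -> process -> serving data lake flow.
# Record fields are illustrative assumptions, not a real Procore schema.
import json
from collections import defaultdict

def land(raw_events):
    """Landing zone: persist raw payloads untouched (here, as JSON strings)."""
    return [json.dumps(e) for e in raw_events]

def process(landed):
    """Process zone: parse, validate, and normalize records."""
    out = []
    for line in landed:
        rec = json.loads(line)
        if rec.get("project_id") is None:  # drop malformed records
            continue
        rec["status"] = rec.get("status", "unknown").lower()
        out.append(rec)
    return out

def serve(processed):
    """Serving zone: aggregate into a query-friendly shape."""
    counts = defaultdict(int)
    for rec in processed:
        counts[(rec["project_id"], rec["status"])] += 1
    return dict(counts)

raw = [
    {"project_id": 1, "status": "OPEN"},
    {"project_id": 1, "status": "open"},
    {"project_id": None, "status": "open"},  # malformed: dropped
    {"project_id": 2, "status": "CLOSED"},
]
summary = serve(process(land(raw)))
```

The key design point the zones buy you is replayability: because the landing zone keeps raw payloads verbatim, the process and serving stages can be fixed and re-run without re-ingesting from source.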
Posted 1 week ago
0 years
0 - 1 Lacs
Bengaluru
On-site
Get to know Okta Okta is The World’s Identity Company. We free everyone to safely use any technology—anywhere, on any device or app. Our Workforce and Customer Identity Clouds enable secure yet flexible access, authentication, and automation that transforms how people move through the digital world, putting Identity at the heart of business security and growth. At Okta, we celebrate a variety of perspectives and experiences. We are not looking for someone who checks every single box - we’re looking for lifelong learners and people who can make us better with their unique experiences. Join our team! We’re building a world where Identity belongs to you. Okta is seeking an experienced Software Test Engineer for our Identity Management Quality Engineering team. As part of product quality engineering, you will ensure product releases ship with the highest quality and reliability. Automation at every level is key to faster, more robust, and more secure releases. The ideal candidate has solid experience in Java automation development, has worked in scaled environments, and shows a passion for learning. 
Job Duties and Responsibilities: Automate API tests, End-to-End tests, reliability/scale tests Review requirements and design specs to develop relevant test plans and test cases Work with engineering management to scope and plan engineering efforts Clearly communicate and document QE plans for scrum teams to review and comment Automate all critical features to maintain a zero-debt cadence Release features with solid quality to customers Respond to Production issues/alerts and customer issues during on-call rotation Help with mentoring new hires and interns Minimum REQUIRED Knowledge, Skills, and Abilities: 6+ months of quality engineering with knowledge and hands-on test automation experience 6+ months of experience with Selenium and/or API testing using Java 6+ months of experience with performance testing using JMeter 6+ months of experience with monitoring tools (Splunk, Datadog, Grafana), SQL, and Unix Experience with development of high-quality automation and software tests Ability to test software with minimal supervision and guidance Ability to quickly learn new technologies and provide input Education and Training: B.S. Computer Science or related field #LI-ASITRAY What you can look forward to as a Full-Time Okta employee! Amazing Benefits Making Social Impact Developing Talent and Fostering Connection + Community at Okta Okta cultivates a dynamic work environment, providing the best tools, technology and benefits to empower our employees to work productively in a setting that best and uniquely suits their needs. Each organization is unique in the degree of flexibility and mobility in which they work so that all employees are enabled to be their most creative and successful versions of themselves, regardless of where they live. Find your place at Okta today! https://www.okta.com/company/careers/. Some roles may require travel to one of our office locations for in-person onboarding. Okta is an Equal Opportunity Employer. 
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, marital status, age, physical or mental disability, or status as a protected veteran. We also consider for employment qualified applicants with arrest and convictions records, consistent with applicable laws. If reasonable accommodation is needed to complete any part of the job application, interview process, or onboarding please use this Form to request an accommodation. Okta is committed to complying with applicable data privacy and security laws and regulations. For more information, please see our Privacy Policy at https://www.okta.com/privacy-policy/. Okta The foundation for secure connections between people and technology Okta is the leading independent provider of identity for the enterprise. The Okta Identity Cloud enables organizations to securely connect the right people to the right technologies at the right time. With over 7,000 pre-built integrations to applications and infrastructure providers, Okta customers can easily and securely use the best technologies for their business. More than 19,300 organizations, including JetBlue, Nordstrom, Slack, T-Mobile, Takeda, Teach for America, and Twilio, trust Okta to help protect the identities of their workforces and customers.
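The role above automates API tests in Java (Selenium, JMeter); purely as an illustration of the underlying pattern, here is a sketch in Python using the standard library's unittest. The "identity API" is a hypothetical in-memory stub standing in for a real service under test; all class and method names are assumptions, not Okta's APIs.

```python
# Illustrative API-test pattern (the posting uses Java; Python shown for
# brevity). FakeIdentityAPI is a hypothetical stub, not a real client.
import unittest

class FakeIdentityAPI:
    """Stand-in for an identity service under test."""
    def __init__(self):
        self._users = {}

    def create_user(self, login):
        if login in self._users:
            return {"status": 409}          # conflict on duplicate login
        self._users[login] = {"login": login}
        return {"status": 201, "user": self._users[login]}

    def get_user(self, login):
        user = self._users.get(login)
        return {"status": 200, "user": user} if user else {"status": 404}

class UserLifecycleTest(unittest.TestCase):
    def setUp(self):
        self.api = FakeIdentityAPI()

    def test_create_then_fetch(self):
        self.assertEqual(self.api.create_user("ada")["status"], 201)
        self.assertEqual(self.api.get_user("ada")["status"], 200)

    def test_duplicate_create_conflicts(self):
        self.api.create_user("ada")
        self.assertEqual(self.api.create_user("ada")["status"], 409)

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(UserLifecycleTest))
```

In a real suite the stub would be replaced by an HTTP client hitting a test environment, with the same assertion structure.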
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
India
Remote
Python JD: Role Summary: We are seeking a skilled Python Developer with strong experience in data engineering, distributed computing, and cloud-native API development. The ideal candidate will have hands-on expertise in Apache Spark, Pandas, and workflow orchestration using Airflow or similar tools, along with deep familiarity with AWS cloud services. You’ll work with cross-functional teams to build, deploy, and manage high-performance data pipelines, APIs, and ML integrations. Key Responsibilities: Develop scalable and reliable data pipelines using PySpark and Pandas. Orchestrate data workflows using Apache Airflow or similar tools (e.g., Prefect, Dagster, AWS Step Functions). Design, build, and maintain RESTful and GraphQL APIs that support backend systems and integrations. Collaborate with data scientists to deploy machine learning models into production. Build cloud-native solutions on AWS, leveraging services like S3, Glue, Lambda, EMR, RDS, and ECS. Support microservices architecture with containerized deployments using Docker and Kubernetes. Implement CI/CD pipelines and maintain version-controlled, production-ready code. Required Qualifications: 3–5 years of experience in Python programming with a focus on data processing. Expertise in Apache Spark (PySpark) and Pandas for large-scale data transformations. Experience with workflow orchestration using Airflow or similar platforms. Solid background in API development (RESTful and GraphQL) and microservices integration. Proven hands-on experience with AWS cloud services and cloud-native architectures. Familiarity with containerization (Docker) and CI/CD tools (GitHub Actions, CodeBuild, etc.). Excellent communication and cross-functional collaboration skills. Preferred Skills: Exposure to infrastructure as code (IaC) tools like Terraform or CloudFormation. Experience with data lake/warehouse technologies such as Redshift, Athena, or Snowflake. 
Knowledge of data security best practices, IAM role management, and encryption. Familiarity with monitoring/logging tools like Datadog, CloudWatch, or Prometheus. PySpark, Pandas, and data transformation or workflow experience is a MUST (at least 2 years). Pay: Attractive Salary Interested candidates can call or WhatsApp their resume to 9092626364 Job Type: Full-time Benefits: Cell phone reimbursement Work from home Schedule: Day shift Weekend availability Work Location: In person
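The large-scale transformations this role describes are typically expressed as a PySpark chain such as df.groupBy("region").agg(F.sum("amount")); the sketch below computes the same grouped aggregation in pure Python so it runs without a cluster. Column names and rows are illustrative assumptions only.

```python
# What a PySpark groupBy/agg job computes, sketched in pure Python.
# In PySpark this would be roughly:
#   df.groupBy("region").agg(F.sum("amount"), F.count("*"))
from collections import defaultdict

rows = [
    {"region": "south", "amount": 120.0},
    {"region": "north", "amount": 80.0},
    {"region": "south", "amount": 30.0},
]

totals = defaultdict(lambda: {"sum_amount": 0.0, "count": 0})
for row in rows:
    grp = totals[row["region"]]       # one accumulator per group key
    grp["sum_amount"] += row["amount"]
    grp["count"] += 1

result = dict(totals)
```

The difference at scale is that Spark shuffles rows so each group's accumulator lives on one executor; the per-group logic stays the same shape.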
Posted 1 week ago
10.0 years
0 Lacs
Noida
On-site
Company Summary: DISH Network Technologies India Pvt. Ltd is a technology subsidiary of EchoStar Corporation. Our organization is at the forefront of technology, serving as a disruptive force and driving innovation and value on behalf of our customers. Our product portfolio includes Boost Mobile (consumer wireless), Boost Mobile Network (5G connectivity), DISH TV (Direct Broadcast Satellite), Sling TV (Over The Top service provider), OnTech (smart home services), Hughes (global satellite connectivity solutions) and Hughesnet (satellite internet). Our facilities in India are some of EchoStar’s largest development centers outside the U.S. As a hub for technological convergence, our engineering talent is a catalyst for innovation in multimedia network and communications development. Summary: Boost Mobile is our cutting-edge, standalone 5G broadband network that covers over 268 million Americans and a brand under EchoStar Corporation (NASDAQ: SATS). Our mobile carrier’s cloud-native O-RAN 5G network delivers lightning-fast speeds, reliability, and coverage on the latest 5G devices. Recently, Boost Mobile was named as the #1 Network in New York City, according to umlaut’s latest study! 
Job Duties and Responsibilities: Key Responsibilities: Manages public cloud infrastructure deployments, handles Jira, and troubleshoots Leads DevOps initiatives for US customers, focusing on AWS and 5G network functions Develops technical documentation and supports root cause analysis Deploys 5G network functions in AWS environments Expertise in Kubernetes and EKS for container orchestration Extensive experience with AWS services (EC2, ELB, VPC, RDS, DynamoDB, IAM, CloudFormation, S3, CloudWatch, CloudTrail, CloudFront, SNS, SQS, SWF, EBS, Route 53, Lambda) Orchestrates Docker containers with Kubernetes for scalable deployments Automates 5G Application deployments using AWS CodePipeline (CodeCommit/CodeBuild/CodeDeploy) Implements and operates containerized cloud application platform solutions Focuses on cloud-ready, distributed application architectures, containerization, and CI/CD pipelines Works on automation and configuration as code for foundational architecture related to connectivity across Cloud Service Providers Designs, configures, and manages cloud infrastructures using AWS services Experienced with EC2, ELB, EMR, S3 CLI, and API scripting Strong knowledge of Kubernetes operational building blocks (Kube API, Kube Scheduler, Kube Controller Manager, ETCD) Provides solutions to common Kubernetes errors (CreateContainerConfigError, ImagePullBackOff, CrashLoopBackOff, Kubernetes Node Not Ready) Knowledgeable in Linux/UNIX administration and automation Familiar with cloud and virtualization technologies (Docker, Azure, AWS, VMware) Supports cloud-hosted systems 24/7, including troubleshooting and root cause analysis Configures Kubernetes clusters for networking, load balancing, pod security, and certificate management Configures monitoring tools (Datadog, Dynatrace, AppDynamics, ELK, Grafana, Prometheus) Participates in design reviews of architecture patterns for service/application deployment in AWS Skills - Experience and Requirements: Education and 
Experience: Bachelor's or Master's degree in Computer Science, Computer Engineering, or a related technical degree 10+ years related experience; or equivalent combination of education and experience 4+ years of experience supporting public cloud platforms 4+ years of experience with cloud system integration, support, and automation Skills and Qualifications: Must have excellent verbal and written communication Operational experience with Infrastructure as Code solutions and tools, such as Ansible, Terraform, and CloudFormation Deep understanding of DevOps and agile methodologies Ability to work well under pressure and manage tight deadlines Proven track record of operational process change and improvement Deep understanding of distributed systems and microservices AWS certifications (Associate level or higher) are a plus Kubernetes certifications are a plus
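The common Kubernetes failure modes the listing above names (CreateContainerConfigError, ImagePullBackOff, CrashLoopBackOff, Node Not Ready) lend themselves to a small triage table mapping each state to a first debugging step. This is generic guidance sketched as code, not an EchoStar runbook; the kubectl commands shown are standard, and pod/node placeholders are illustrative.

```python
# Triage table for common Kubernetes pod/node failure states.
# Remediation hints are generic first steps; <pod>/<node> are placeholders.
TRIAGE = {
    "CreateContainerConfigError": "Check that referenced ConfigMaps/Secrets "
                                  "exist: kubectl describe pod <pod>",
    "ImagePullBackOff": "Verify image name/tag and registry credentials "
                        "(imagePullSecrets).",
    "CrashLoopBackOff": "Inspect the previous container's logs: "
                        "kubectl logs <pod> --previous",
    "NodeNotReady": "Check kubelet health and node conditions: "
                    "kubectl describe node <node>",
}

def first_step(pod_state: str) -> str:
    """Return a first debugging step for a known failure state."""
    return TRIAGE.get(pod_state, "Start with: kubectl describe pod <pod>")
```

The point of encoding triage this way is consistency during on-call: the same state always routes to the same first command, regardless of who is paged.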
Posted 1 week ago
3.0 - 6.0 years
5 - 5 Lacs
Noida
On-site
Location: Noida Experience: 3 to 6 years No. Of Openings: 1 Job Description Work on a team building cloud platform tools and solutions for HPC applications. Collaborate with other engineers to define strategy and the technical platform roadmap, and drive the rapid implementation of appropriate technologies. Encourage value-driven innovation in the current framework and processes to continuously improve the efficiency of product development processes. Partner with client teams to prepare for the timely and smooth acceptance of deliverables into a production environment. Evaluate new tools and technologies based on current and future feature requirements, performance, cost effectiveness, and reliability. Work closely with development teams to understand requirements and apply industry knowledge to recommend build/buy solutions. Execute all release engineering aspects of DevOps, including Configuration Management, Build and Deployment Management, and Continuous Integration and Delivery. Review existing solutions with a fresh perspective to suggest improvements and optimizations. Job Specification Technologies We Use- Amazon AWS, Azure & Azure DevOps, GCP, Kubernetes, Helm, Python, Terraform, PostgreSQL, Jenkins, Ubuntu Linux, Windows Server, Splunk, PagerDuty, Grafana, Prometheus, Bicep, CloudFormation, Datadog, Elasticsearch BS or MS in Computer Science or related technical discipline (or equivalent experience). Experience with cloud delivery platforms: AWS, Azure, GCP. Hands-on experience with one or more programming languages such as Python. Working knowledge of running and tuning large-scale applications in production. Hands-on experience with Kubernetes. Hands-on experience with CI/CD tooling such as Jenkins. Attention to detail in code and output. Attention to operational excellence. Strong interpersonal skills to coordinate with other team members. Experience coaching and mentoring software engineers for technical and professional growth is a plus.
Posted 1 week ago
5.0 - 8.0 years
7 - 8 Lacs
Ahmedabad
On-site
Senior Full Stack Developer (Python, JavaScript, AWS, Cloud Services, Azure) Ahmedabad, India; Hyderabad, India Information Technology 315432 Job Description About The Role: Grade Level (for internal use): 10 The Team: S&P Global is a global market leader in providing information, analytics and solutions for industries and markets that drive economies worldwide. The Market Intelligence (MI) division is the largest division within the company. This is an opportunity to join the MI Data and Research’s Data Science Team, which is dedicated to developing cutting-edge Data Science and Generative AI solutions. We are a dynamic group that thrives on innovation and collaboration, working together to push the boundaries of technology and deliver impactful solutions. Our team values inclusivity, continuous learning, and the sharing of knowledge to enhance our collective expertise. Responsibilities and Impact: Develop and productionize cloud-based services and full-stack applications utilizing NLP solutions, including GenAI models. Implement and manage CI/CD pipelines to ensure efficient and reliable software delivery. Automate cloud infrastructure using Terraform. Write unit tests, integration tests and performance tests Work in a team environment using agile practices Support administration of the Data Science experimentation environment, including AWS SageMaker and Nvidia GPU servers Monitor and optimize application performance and infrastructure costs. Collaborate with data scientists and other developers to integrate and deploy data science models into production environments Educate others to improve coding standards, code quality, test coverage, and documentation Work closely with cross-functional teams to ensure seamless integration and operation of services. What We’re Looking For: Basic Required Qualifications: 5-8 years of experience in software engineering Proficiency in Python and JavaScript for full-stack development. 
Experience in writing and maintaining high-quality code – utilizing techniques like unit testing and code reviews Strong understanding of object-oriented design and programming concepts Strong experience with AWS cloud services, including EKS, Lambda, and S3. Knowledge of Docker containers and orchestration tools including Kubernetes Experience with monitoring, logging, and tracing tools (e.g., Datadog, Kibana, Grafana). Knowledge of message queues and event-driven architectures (e.g., AWS SQS, Kafka). Experience with CI/CD pipelines in Azure DevOps and GitHub Actions. Additional Preferred Qualifications: Experience writing front-end web applications using JavaScript and React Familiarity with infrastructure as code (IaC) using Terraform. Experience in Azure or GCP cloud services Proficiency in C# or Java Experience with SQL and NoSQL databases Knowledge of Machine Learning concepts Experience with Large Language Models About S&P Global Market Intelligence At S&P Global Market Intelligence, a division of S&P Global, we understand the importance of accurate, deep and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction. For more information, visit www.spglobal.com/marketintelligence. What’s In It For You? Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. 
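The event-driven architectures mentioned above (AWS SQS, Kafka) typically guarantee at-least-once delivery, which means consumers must tolerate duplicate messages. The sketch below shows the standard idempotent-consumer pattern with an in-memory queue; it is an illustration only, not an SQS or Kafka client API, and all names are assumptions.

```python
# Idempotent event consumer under at-least-once delivery, sketched with an
# in-memory queue. Real systems would use boto3/SQS or a Kafka client, and
# a durable store (not a set) for the seen-message IDs.
from collections import deque

class IdempotentConsumer:
    def __init__(self):
        self.seen_ids = set()   # durable in production, memory here
        self.processed = []

    def handle(self, message):
        if message["id"] in self.seen_ids:  # duplicate redelivery: skip
            return False
        self.seen_ids.add(message["id"])
        self.processed.append(message["body"])
        return True

queue = deque([
    {"id": "m1", "body": "created"},
    {"id": "m2", "body": "updated"},
    {"id": "m1", "body": "created"},  # redelivered duplicate
])

consumer = IdempotentConsumer()
while queue:
    consumer.handle(queue.popleft())
```

Deduplicating on a stable message ID rather than on payload contents is the usual design choice, since two legitimately distinct events can carry identical bodies.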
Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. 
S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. - Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf - 20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning) Job ID: 315432 Posted On: 2025-06-02 Location: Ahmedabad, Gujarat, India
Posted 1 week ago
0 years
0 Lacs
Andhra Pradesh
On-site
Key Responsibilities Build and maintain backend services in Python, writing clean, maintainable, and well-tested code. Develop and scale public APIs, ensuring high performance and reliability. Work with GraphQL services, contributing to schema design and implementation of queries, mutations, and resolvers. Collaborate cross-functionally with frontend, product, and DevOps teams to ship features end-to-end. Containerize services using Docker and support deployments within Kubernetes environments. Use GitHub Actions to manage CI/CD workflows, including test automation and deployment pipelines. Participate in code reviews, standups, and planning sessions as part of an agile development process. Take ownership of features and deliverables with guidance from senior engineers. Required Skills Python expertise: Strong grasp of idiomatic Python, async patterns, type annotations, unit testing, and modern libraries. API development: Experience building and scaling RESTful and/or GraphQL APIs in production. GraphQL proficiency: Familiarity with frameworks like Strawberry, Graphene, or similar. Containerization: Hands-on experience with Docker and container-based development workflows. GitHub Actions CI/CD: Working knowledge of GitHub Actions for automating tests and deployments. Team collaboration: Effective communicator with a proactive, self-directed work style. Preferred Qualifications Kubernetes: Experience deploying or troubleshooting applications in Kubernetes environments. AWS: Familiarity with AWS services such as ECS, EKS, S3, RDS, or Lambda. Healthcare: Background in the healthcare industry or building patient-facing applications. Monitoring and security: Familiarity with observability tools (e.g., Datadog, Prometheus) and secure coding practices About Virtusa Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. 
When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects and opportunities, and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
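The GraphQL work described in the Virtusa posting above (schema design, queries, mutations, resolvers) is normally done with a framework like Strawberry or Graphene, which handle parsing and validation. Stripped of the framework, a resolver is just a function dispatched by field name; the dependency-free sketch below shows that core idea. Field names, data, and the execute helper are all hypothetical.

```python
# Dependency-free sketch of GraphQL-style resolver dispatch. Strawberry or
# Graphene provide this machinery in practice; names here are illustrative.
PATIENTS = {"p1": {"id": "p1", "name": "Ada"}}

def resolve_patient(args):
    """Resolver for a hypothetical `patient(id: ...)` query field."""
    return PATIENTS.get(args["id"])

QUERY_RESOLVERS = {"patient": resolve_patient}

def execute(field: str, args: dict):
    """Dispatch a query field to its resolver, as a GraphQL engine would."""
    resolver = QUERY_RESOLVERS.get(field)
    if resolver is None:
        return {"errors": [f"Unknown field: {field}"]}
    return {"data": {field: resolver(args)}}
```

Keeping resolvers as plain functions of (arguments) is what makes the schema unit-testable without standing up an HTTP server.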
Posted 1 week ago
12.0 years
0 Lacs
India
Remote
Spreetail propels brands to increase their ecommerce market share across the globe while improving their operational costs. Learn how we are building one of the fastest-growing ecommerce companies in history: www.spreetail.com. As a Principal Software Engineer, you’ll lead a cross-functional team to build and scale Merch Tech’s data-driven platforms that drive decision-making for hundreds of vendors. You’ll influence product, strategy, and tech direction, and collaborate with executive stakeholders in Merchandising, Supply Chain, and Brand Management. This position will be remote in the country of India. How You Will Achieve Success Owning the BEx (Brand Experience Platform) roadmap and execution to increase adoption and reduce data and availability issues. Building scalable backend systems and usable front-end experiences that increase adoption and drive usability. Improving UI/UX by reducing latency and implementing data consistency and alerting mechanisms. Driving measurable impact on GMV, return rates, and EBITDA by implementing scalable solutions across the merchandising organization. Leveraging the latest AI technologies to accelerate development work and set up automation for unit testing. Leading the charge for Agentic AI deployment. Establishing a culture of fast experimentation and tight feedback loops; managing the team to implement quick MVPs and scaling solutions that work. What Experiences Will Help You In This Role 8–12 years in software engineering, including experience in platform ownership or growth-stage environments. Full-stack experience in Python, SQL, Node.js, and React, along with experience in Datadog or similar. 80% hands-on development and 20% management. Experience in data platform engineering, front-end/backend development, and AWS-based infrastructure. Prior experience delivering reporting or workflow automation platforms is a plus. 
Strong ability to partner with non-tech stakeholders and drive measurable business outcomes. Comfortable with ambiguity and fast iteration cycles. A nice to have is Java. This is a remote position and requires candidates to have an available work-from-home setup: Desktop/Laptop system requirements: 4th generation or higher, at least Intel i3 or equivalent processor; at least 4GB RAM; Windows 10 and above or MAC OSX operating system You are required to provide your own dual monitors A strong and stable internet connection (A DSL, cable, or fiber wired internet service with a 10 Mbps plan or higher for primary connection) PC Headset A high-definition (HD) external or integrated webcam with at least 720p resolution $60,000 - $80,000 a year Please be aware of scammers. Spreetail will only contact you through Lever or the spreetail.com domain. Spreetail will never ask candidates for money during the recruitment process. Please reach out to careers@spreetail.com directly if you have any concerns. Emails from @spreetailjobs.com are fraudulent.
Posted 1 week ago
12.0 years
0 Lacs
India
Remote
Spreetail propels brands to increase their ecommerce market share across the globe while improving their operational costs. Learn how we are building one of the fastest-growing ecommerce companies in history: www.spreetail.com. As a Software Development Manager, you’ll lead a cross-functional team to build and scale Merch Tech’s data-driven platforms that drive decision-making for hundreds of vendors. You’ll influence product, strategy, and tech direction, and collaborate with executive stakeholders in Merchandising, Supply Chain, and Brand Management. This position will be remote in the country of India. How You Will Achieve Success Owning the BEx (Brand Experience Platform) roadmap and execution to increase adoption and reduce data and availability issues. Building scalable backend systems and usable front-end experiences that increase adoption and drive usability. Improving UI/UX by reducing latency and implementing data consistency and alerting mechanisms. Driving measurable impact on GMV, return rates, and EBITDA by implementing scalable solutions across the merchandising organization. Leveraging the latest AI technologies to accelerate development work and set up automation for unit testing. Leading the charge for Agentic AI deployment. Establishing a culture of fast experimentation and tight feedback loops; managing the team to implement quick MVPs and scaling solutions that work. What Experiences Will Help You In This Role 8–12 years in software engineering, including experience in platform ownership or growth-stage environments. Full-stack experience in Python, SQL, Node.js, and React, along with experience in Datadog or similar. 80% hands-on development and 20% management. Experience in data platform engineering, front-end/backend development, and AWS-based infrastructure. Prior experience delivering reporting or workflow automation platforms is a plus. 
- Strong ability to partner with non-technical stakeholders and drive measurable business outcomes.
- Comfortable with ambiguity and fast iteration cycles.
- Java is a nice-to-have.

This is a remote position and requires candidates to have an available work-from-home setup:
- Desktop/laptop system requirements: 4th generation or higher, at least Intel i3 or equivalent processor; at least 4GB RAM; Windows 10 and above or Mac OS X operating system.
- You are required to provide your own dual monitors.
- A strong and stable internet connection (a DSL, cable, or fiber wired internet service with a 10 Mbps plan or higher for the primary connection).
- PC headset.
- A high-definition (HD) external or integrated webcam with at least 720p resolution.

$60,000 - $80,000 a year

Please be aware of scammers. Spreetail will only contact you through Lever or the spreetail.com domain. Spreetail will never ask candidates for money during the recruitment process. Please reach out to careers@spreetail.com directly if you have any concerns. Emails from @spreetailjobs.com are fraudulent.
Posted 1 week ago
2.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Who we are: BigID is an innovative tech startup that focuses on solutions for data security, compliance, privacy, and AI data management. We're leading the market in all things data: helping our customers reduce risk, drive business innovation, achieve compliance, build customer trust, make better decisions, and get more value from their data. We are building a global team passionate about innovation and next-gen technology.

BigID has been recognized for:
- Named Hot Company in Artificial Intelligence and Machine Learning at the 2024 Global InfoSec Awards
- Citizens JMP Cyber 66 List of Hottest Privately Held Cybersecurity Companies
- CRN 100 list named BigID one of the 20 Coolest Identity Access Management and Data Protection Companies of 2024 (2 years running)
- DUNS 100 Best Tech Companies to Work For in 2024
- Top 3 'Big Data and AI Vendors to Watch' in the 2023 BigDATAwire Readers and Editors Choice Awards
- 2024 Inc. 5000 list for the 4th consecutive year
- Shortlisted for the 2024 AI Awards in the category of Best Use of AI in Cybersecurity

At BigID, our team is the foundation of our success. Join a people-centric culture that is fast-paced and rewarding: you'll have the opportunity to work with some of the most talented people in the industry who value innovation, diversity, integrity, and collaboration.

Who we seek: We're looking for a Site Reliability Engineer to join our Engineering team in Hyderabad. The ideal candidate will be responsible for monitoring and responding to system alerts, with experience in tools such as Datadog. They should also be proficient in efficiently analysing logs across various dashboards.

What you'll do:
- Metrics: Implement comprehensive service metrics to track and report on system reliability, performance, and efficiency
- Optimization: Monitor system performance, identify bottlenecks, and execute pipeline optimization
- Collaborate with Scrum teams and other stakeholders to identify potential risks
- Analysis: Conduct post-incident reviews to prevent recurrence and refine the system reliability framework

What you'll bring:
- A bachelor's or master's degree in computer science, information systems, or a related technical field
- 4–7 years of experience as a Site Reliability Engineer
- Proficiency in programming languages such as Python, Go, or Java
- In-depth understanding of operating systems, networking, and cloud services
- Experience with monitoring tools (for example, Datadog, ELK, Redash)
- Proven experience in managing large-scale distributed systems and understanding of the principles of scalability and reliability
- Familiarity with DevOps culture and practices, and experience with CI/CD systems
- Excellent diagnostic and problem-solving skills, with the ability to analyze complex systems and data
- Certifications in cloud services, networking, or systems administration are an advantage

What's in it for you? Our people are the foundation of our success, and we prioritize offering a wide range of benefits that make our team happier and healthier:
- Equity participation - everyone shares in our success
- Hybrid work
- Opportunities for professional growth
- Team fun & company outings
- Statutory benefits and leave benefits
- Health insurance coverage

Our Values: We look for people who embody our values - Care, Do, Try & Shine.
- Care - We care about our customers and each other
- Do - We do what it takes to make a positive impact
- Try - We try our best and we don't give up
- Shine - We shine and make it our mission to always stand out

We're committed to creating a culture of inclusion and equality – across race, gender, sexuality, and disability – where innovation and growth thrive, every voice is heard, and everybody belongs. Learn more about us here.

CPRA Employee Privacy Notice: CA. Must be able to exercise independent judgment with little or no oversight. BigID is an E-Verify Participant.
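The "implement comprehensive service metrics to track reliability" responsibility in this SRE posting usually comes down to SLO and error-budget arithmetic. The sketch below is a minimal, self-contained illustration of that arithmetic; the numbers and the idea of expressing the budget as a remaining fraction are assumptions for illustration, not anything from the posting.

```python
# Minimal sketch of the SLO/error-budget arithmetic behind service
# reliability metrics. Targets and request counts are illustrative.

def error_budget_remaining(slo: float, total: int, failed: int) -> float:
    """Return the fraction of the error budget still unspent.

    slo    -- availability target, e.g. 0.999 for "three nines"
    total  -- total requests observed in the SLO window
    failed -- requests that violated the SLO (errors, timeouts, ...)
    """
    allowed_failures = (1.0 - slo) * total  # budget in request units
    if allowed_failures == 0:
        return 0.0
    return 1.0 - failed / allowed_failures

# 1M requests under a 99.9% SLO allow ~1,000 failures; 250 failures
# leave about 75% of the budget.
print(round(error_budget_remaining(0.999, 1_000_000, 250), 4))  # → 0.75
```

A monitor alerting when the remaining budget drops below some fraction (say 0.25) is a common way to turn this number into an actionable page rather than alerting on every individual error.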
Posted 1 week ago
0 years
0 Lacs
Kochi, Kerala, India
On-site
At Litmus7 we protect and support our customers across time zones, so this profile requires candidates to work 24x7.
- Must have knowledge of production application support; an e-commerce background is good to have.
- Must have solid Level 2 support experience in e-commerce platforms.
- Knowledge of Blue Yonder OMS or any other OMS platform is mandatory.
- Hands-on experience in monitoring, logging, alerting, dashboarding, and report generation in monitoring tools such as AppDynamics, Splunk, Dynatrace, Datadog, CloudWatch, ELK, Prometheus, or New Relic. This engagement is for a customer using New Relic and PagerDuty, so expertise in those tools is good to have.
- Must have knowledge of the ITIL framework, specifically alerts, incident and change management, CAB, production deployments, and risk and mitigation planning.
- Should be able to lead P1 calls, brief the customer about the P1, and proactively gather leads/customers into the P1 calls until RCA.
- Experience working with Postman.
- Should have knowledge of building and executing SOPs and runbooks, and handling ITSM platforms (JIRA/ServiceNow/BMC Remedy).
- Should know how to work with the dev team and cross-functional teams across time zones.
- Should be able to generate WSR/MSR reports by extracting tickets from ITSM platforms.
Posted 1 week ago
3.0 years
0 Lacs
Andhra Pradesh, India
On-site
Key Responsibilities
- Set up and maintain monitoring dashboards for ETL jobs using Datadog, including metrics, logs, and alerts.
- Monitor daily ETL workflows and proactively detect and resolve data pipeline failures or performance issues.
- Create Datadog monitors for job status (success/failure), job duration, resource utilization, and error trends.
- Work closely with Data Engineering teams to onboard new pipelines and ensure observability best practices.
- Integrate Datadog with related tools.
- Conduct root cause analysis of ETL failures and performance bottlenecks.
- Tune thresholds, baselines, and anomaly detection settings in Datadog to reduce false positives.
- Document incident-handling procedures and contribute to improving overall ETL monitoring maturity.
- Participate in on-call rotations or scheduled support windows to manage ETL health.

Required Skills & Qualifications
- 3+ years of experience in ETL/data pipeline monitoring, preferably in a cloud or hybrid environment.
- Proficiency in using Datadog for metrics, logging, alerting, and dashboards.
- Strong understanding of ETL concepts and tools (e.g., Airflow, Informatica, Talend, AWS Glue, or dbt).
- Familiarity with SQL and querying large datasets.
- Experience working with Python, shell scripting, or Bash for automation and log parsing.
- Understanding of cloud platforms (AWS/GCP/Azure) and services like S3, Redshift, BigQuery, etc.
- Knowledge of CI/CD and DevOps principles related to data infrastructure monitoring.

Preferred Qualifications
- Experience with distributed tracing and APM in Datadog.
- Prior experience monitoring Spark, Kafka, or streaming pipelines.
- Familiarity with ticketing tools (e.g., Jira, ServiceNow) and incident management workflows.
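The "job duration" monitor mentioned above is typically expressed in Datadog as a metric-monitor query of the form time-aggregation(window):space-aggregation:metric{scope} > threshold. The sketch below builds such a query string and mimics its trigger condition locally; the metric and tag names (etl.job.duration, job:nightly_load) are hypothetical examples, and in practice the query is submitted to Datadog through its API or UI rather than evaluated in your own code.

```python
# Illustrative sketch of a Datadog-style metric monitor for ETL job
# duration. Metric/tag names are made up for the example.

def duration_monitor_query(metric: str, job: str,
                           window: str, threshold_s: int) -> str:
    """Build a Datadog-style metric monitor query string."""
    return f"max({window}):max:{metric}{{job:{job}}} > {threshold_s}"

def breaches(threshold_s: int, samples: list) -> bool:
    """Local stand-in for the monitor's trigger condition: alert when
    the max duration over the window exceeds the threshold."""
    return max(samples) > threshold_s

query = duration_monitor_query("etl.job.duration", "nightly_load",
                               "last_30m", 3600)
print(query)  # max(last_30m):max:etl.job.duration{job:nightly_load} > 3600
print(breaches(3600, [1200, 2400, 4100]))  # True: one run exceeded 1 hour
```

Tuning the window and threshold in this query is exactly the "reduce false positives" work the posting describes: a wider window or higher threshold trades detection speed for fewer noisy alerts.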
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
What we are looking for
Role: Performance Testing
Experience Range: 4-8 years
Location: Chennai/Pune

Must Have:
- Good experience using the performance test tool LoadRunner and understanding of APM tools like AppDynamics/Dynatrace/New Relic, etc.
- Good hands-on experience in the Web-HTTP, Java Vuser, and Web Services protocols.
- Ability to work independently in the requirement analysis, design, execution, and result analysis phases.
- Develop customized code in Java and C for optimizing and enhancing VuGen scripts.
- Analyze test results and coordinate with development teams for issue triaging and bug fixes.

Technical Skills:
- LoadRunner - HTTP/HTML, Web Services, and Java protocols
- Monitoring tools: AppDynamics/Dynatrace/CloudWatch/Splunk/Kibana/Grafana/Datadog
Posted 1 week ago
5.0 - 10.0 years
7 - 12 Lacs
Bengaluru
Work from Office
We are looking for a Senior Site Reliability Engineer to join Okta's Workflows SRE team, which is part of our Emerging Products Group (EPG). Okta Workflows is the foundation for secure integration between cloud services. By harnessing the power of the cloud, Okta allows people to quickly integrate different services while still enforcing strong security policies. With Okta Workflows, organizations can implement no-code or low-code workflows quickly, easily, at large scale, and at low total cost. Thousands of customers trust Okta Workflows to help their organizations work faster, boost revenue, and stay secure.

If you like to be challenged and have a passion for solving large-scale automation, testing, and tuning problems, we would love to hear from you. The ideal candidate is someone who exemplifies the ethic of "If you have to do something more than once, automate it" and who can rapidly self-educate on new concepts and tools.

What you'll be doing:
- Designing, building, running, and monitoring the global production infrastructure for Okta Workflows and other EPG products.
- Leading and implementing secure, scalable Kubernetes clusters across multiple environments.
- Being an evangelist for security best practices, and leading initiatives/projects to strengthen our security posture for critical infrastructure.
- Responding to production incidents and determining how we can prevent them in the future.
- Triaging and troubleshooting complex production issues to ensure reliability and performance.
- Enhancing automation workflows for patching, vulnerability assessments, and incident response.
- Continuously evolving our monitoring tools and platform.
- Promoting and applying best practices for building scalable and reliable services across engineering.
- Developing and maintaining technical documentation, runbooks, and procedures.
- Supporting a highly available and large-scale Kubernetes and AWS environment as part of an on-call rotation.
- Being a technical SME for a team that designs and builds Okta's production infrastructure, focusing on security at scale in the cloud.

What you'll bring to the role:
- You are always willing to go the extra mile: see a problem, fix the problem.
- You are passionate about encouraging the development of engineering peers and leading by example.
- Experience with Kubernetes deployments in AWS and/or GCP cloud environments.
- Understanding of and familiarity with configuration management tools like Chef, Terraform, or Ansible.
- Expert-level abilities in operational tooling languages such as Go and shell, and use of source control.
- Knowledge of various types of data stores, particularly PostgreSQL, Redis, and OpenSearch.
- Experience with industry-standard security tools like Nessus and osquery.
- Knowledge of CI/CD principles, Linux fundamentals, OS hardening, networking concepts, and IP protocols.
- Skill in using Datadog for real-time monitoring and proactive incident detection.
- Strong ability to collaborate with cross-functional teams and promote a security-first culture.

Experience in the following:
- 5+ years of experience running and managing complex AWS or other cloud networking infrastructure resources, including architecture, security, and scalability.
- 5+ years of experience with Ansible, Chef, and/or Terraform.
- 3+ years of experience in cloud security, including IAM (Identity and Access Management) and/or secure identity management for cloud platforms and Kubernetes.
- 3+ years of experience automating CI/CD pipelines using tools such as Spinnaker or ArgoCD, with an emphasis on integrating security throughout the process.
- Proven experience implementing monitoring and observability solutions such as Datadog or Splunk to enhance security and detect incidents in real time.
- Strong leadership and collaboration skills, with experience working cross-functionally with security engineers and developers to enforce security best practices and policies.
- Strong Linux understanding and experience.
- Strong security background and knowledge.
- BS in computer science (or equivalent experience).
Posted 1 week ago
7.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Role Description

Key Responsibilities:
- Cloud-Based Development: Design, develop, and deploy scalable solutions using AWS services such as S3, Kinesis, Lambda, Redshift, DynamoDB, Glue, and SageMaker.
- Data Processing & Pipelines: Implement efficient data pipelines and optimize data processing using pandas, Spark, and PySpark.
- Machine Learning Operations (MLOps): Work with model training, the model registry, model deployment, and monitoring using AWS SageMaker and related services.
- Infrastructure-as-Code (IaC): Develop and manage AWS infrastructure using AWS CDK and CloudFormation to enable automated deployments.
- CI/CD Automation: Set up and maintain CI/CD pipelines using GitHub, AWS CodePipeline, and CodeBuild for streamlined development workflows.
- Logging & Monitoring: Implement robust monitoring and logging solutions using Splunk, Datadog, and AWS CloudWatch to ensure system performance and reliability.
- Code Optimization & Best Practices: Write high-quality, scalable, and maintainable Python code while adhering to software engineering best practices.
- Collaboration & Mentorship: Work closely with cross-functional teams, providing technical guidance and mentorship to junior developers.

Qualifications & Requirements
- 7+ years of experience in software development with a strong focus on Python.
- Expertise in AWS services, including S3, Kinesis, Lambda, Redshift, DynamoDB, Glue, and SageMaker.
- Proficiency in Infrastructure-as-Code (IaC) tools like AWS CDK and CloudFormation.
- Experience with data processing frameworks such as pandas, Spark, and PySpark.
- Understanding of machine learning concepts, including model training, deployment, and monitoring.
- Hands-on experience with CI/CD tools such as GitHub, CodePipeline, and CodeBuild.
- Proficiency in monitoring and logging tools like Splunk and Datadog.
- Strong problem-solving skills, analytical thinking, and the ability to work in a fast-paced, collaborative environment.
Preferred Skills & Certifications
- AWS certifications (e.g., AWS Certified Solutions Architect, AWS Certified DevOps Engineer, AWS Certified Machine Learning).
- Experience with containerization (Docker, Kubernetes) and serverless architectures.
- Familiarity with big data technologies such as Apache Kafka, Hadoop, or AWS EMR.
- Strong understanding of distributed computing and scalable architectures.

Skills: Python, MLOps, AWS
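The data pipeline work this role describes almost always includes a validation step that quarantines bad records before loading into a warehouse such as Redshift. Below is a minimal, dependency-free sketch of such a step; the field names (order_id, amount) and rules are illustrative assumptions, not anything specified in the posting, and a real pipeline would more likely use pandas, Spark, or Glue.

```python
# Illustrative row-level validation step for a data pipeline:
# split incoming CSV rows into loadable and quarantined sets.
import csv
import io

REQUIRED = ("order_id", "amount")  # hypothetical required fields

def validate_rows(raw_csv: str):
    """Return (good, bad) lists of parsed rows.

    A row is good when every required field is non-empty and
    'amount' parses as a number (coerced in place before load).
    """
    good, bad = [], []
    for row in csv.DictReader(io.StringIO(raw_csv)):
        ok = all(row.get(field) for field in REQUIRED)
        try:
            row["amount"] = float(row["amount"])  # type-coerce before load
        except (TypeError, ValueError):
            ok = False
        (good if ok else bad).append(row)
    return good, bad

data = "order_id,amount\nA1,19.99\nA2,not-a-number\n,5.00\n"
good, bad = validate_rows(data)
print(len(good), len(bad))  # → 1 2
```

Routing the bad rows to a quarantine location and emitting their count as a metric is one common way to feed the "logging & monitoring" responsibility from the same code path.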
Posted 1 week ago
6.0 years
0 Lacs
New Delhi, Delhi, India
Remote
Note: Freshers will not be considered.

We are hiring a Senior AEM Admin as an employee for our IT company.
Location: Remote
Experience: 6+ years
Budget: As per market standards
Notice Period: Immediate joiner

Job Description: We are looking for a highly skilled AEM DevOps Admin with 6+ years of experience to manage, deploy, and troubleshoot Adobe Experience Manager (AEM) environments. The candidate will play a key role in automating DevOps processes and ensuring the high availability, performance, and security of AEM applications.

Key Responsibilities:
- Manage and maintain Adobe Experience Manager (AEM) instances, ensuring performance, scalability, and security.
- Develop and maintain CI/CD pipelines for AEM projects using Jenkins, Git, Docker, etc.
- Manage cloud/on-prem infrastructure (AWS, Azure, GCP) for AEM environments.
- Monitor AEM performance and system health using tools like New Relic and Splunk.
- Implement security practices, including patch management, SSL, and firewalls.
- Collaborate with development, QA, and business teams for smooth deployments.
- Lead incident management, troubleshooting, and root cause analysis for AEM issues.

Key Skills:
- Strong knowledge of Adobe Experience Manager (AEM) administration, configuration, and deployment.
- Hands-on experience with DevOps tools like Jenkins, Git, Docker, Ansible, Terraform, and Kubernetes.
- Proficiency with AWS, Azure, or Google Cloud platforms.
- Strong scripting skills (Bash, Python, Groovy) for automation.
- Familiarity with monitoring and logging tools (e.g., Splunk, New Relic, Datadog).
- Security management experience with SSL, patching, and firewalls.
- Excellent troubleshooting and problem-solving skills.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 6+ years of experience in AEM administration and DevOps.
- Certification in Adobe AEM or cloud platforms is a plus.
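The "strong scripting skills for automation" this AEM posting asks for often means small triage scripts over AEM's error.log, whose entries carry a *ERROR* / *INFO* severity marker. The sketch below counts errors per logger from such lines; the sample log lines and the exact field layout are illustrative assumptions, since real log formats vary with instance configuration.

```python
# Illustrative log-triage script: count *ERROR* entries per logger
# in AEM-style error.log lines. Sample lines are made up.
from collections import Counter

def error_counts(lines):
    """Count lines carrying the *ERROR* marker, keyed by the logger
    name that follows the [thread] field."""
    counts = Counter()
    for line in lines:
        parts = line.split()
        if "*ERROR*" in parts:
            idx = parts.index("*ERROR*")
            # assumed layout: date time *ERROR* [thread] logger message...
            if len(parts) > idx + 2:
                counts[parts[idx + 2]] += 1
    return counts

log = [
    "14.06.2025 10:15:02.123 *ERROR* [qtp-12] com.day.cq.wcm Render failed",
    "14.06.2025 10:15:03.001 *INFO* [qtp-13] com.day.cq.wcm Rendered page",
    "14.06.2025 10:16:41.808 *ERROR* [qtp-12] com.day.cq.wcm Render failed",
]
print(error_counts(log))  # Counter({'com.day.cq.wcm': 2})
```

Feeding these counts to a monitoring tool (the posting names Splunk, New Relic, and Datadog) turns an ad-hoc script into a recurring health check.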
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
What's the role? As a Backend Developer at Hilti, you will play a pivotal role in enhancing the company's software offerings. You are expected to translate business requirements into technical applications, create and maintain intricate and innovative solutions, and ensure adherence to coding standards. Additionally, you will coach other software developers in a cross-functional environment and review their contributions.

Who is Hilti? Hilti is where innovation is improving productivity, safety and sustainability in the global construction industry, and beyond. Where strong customer relationships are creating solutions that build a better future. Where there is pride and a sense of belonging across our 120 locations, carrying right into our lives and homes. Where people are exploring possibilities, leveraging their potential, owning their personal development and growing lasting careers.

What does the role involve?
- Actively participate in product feature and design discussions and help shape the future of Hilti SW solutions.
- High/low-level design: designing feature-level solutions, REST API contracts, and event contracts as per pre-defined guidelines and specifications.
- Write high-quality, maintainable, production-grade code with the required parametrized automated tests, as per the defined architecture and designs.
- Collaborate actively with Hilti's product owners and architects to translate jointly defined requirements into working software.
- Own and manage one or more microservices and the associated technical delivery, maintaining a high bar on quality within the agreed timelines.
- Present solutions to both technical leadership and other software engineering teams.
- Perform code review and enforce coding standards.
- Establish libraries of reusable components for applications.
- Mentor developers in their day-to-day tasks and build their professional development.

Hilti has been cited among the World's Best Workplaces.
The accolades are given by Great Place to Work (GPTW), the worldwide industry leader in quantifying employee experience.

What do we offer? Your responsibilities will be great and, with them, we'll give you the freedom and autonomy to do whatever it takes to deliver outstanding results. We'll offer you opportunities to move around the business – you will get global exposure, experience different job functions and tackle different markets. It's a great way to find the right match for your ambitions and achieve the exciting career you're after. We have a very thorough people review process which enables your career progression as soon as you're ready for the next challenge.

What You Need Is:
- Hands-on experience in designing and developing server-side applications.
- Proficiency in the Java 11+/Kotlin languages and REST and GraphQL APIs.
- Strong experience building APIs using a microservice architecture.
- Excellent knowledge of relational & NoSQL databases and ORM technologies (JPA 2, Hibernate).
- Experience in Spring Boot and web application development using at least one popular web framework (JSF, Wicket, GWT, Spring MVC).
- Experience writing unit tests (JUnit/Mockito) and component tests.
- Experience working in a cloud environment (e.g. AWS), familiarity with CI/CD practices, Docker and Kubernetes, and monitoring tools (e.g. Grafana, Glowroot, Datadog).
- Knowledge of functional programming, design patterns, and software design and architecture best practices.
- Bonus points for experience with Kafka.

Requirements
- Bachelor's or Master's Degree in Computer Science, Information Technology, or a related field.
- Minimum seven years of professional experience, with at least four years as a backend developer.
- Extensive experience in coaching and mentoring.
- Strong analytical, conceptual and problem-solving skills.
- Willingness to embrace change & new technologies.
- Team player with excellent communication skills in an agile, interdisciplinary, and international environment.
- Strong communication skills with proficiency in English.
Posted 1 week ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
TCS has been a great pioneer in feeding the fire of young techies like you. We are a global leader in the technology arena and there's nothing that can stop us from growing together.

What we are looking for
Role: Performance Engineering (LoadRunner/JMeter)
Experience Range: 4-8 years only
Mode of Interview: Walk-in
Interview Date: 14th June '25 (Saturday)
Location: Chennai - Tata Consultancy Services Ltd, ATL Building, Sipcot Information Technology Park, Navalur Post, Siruseri, Chennai - 603103

Must Have:
- Good experience using the performance test tool LoadRunner and understanding of APM tools like AppDynamics/Dynatrace/New Relic, etc.
- Good hands-on experience in the Web-HTTP, Java Vuser, and Web Services protocols.
- Ability to work independently in the requirement analysis, design, execution, and result analysis phases.
- Develop customized code in Java and C for optimizing and enhancing VuGen scripts.
- Analyze test results and coordinate with development teams for issue triaging and bug fixes.
- Good understanding of different OS internals, file systems, disk/storage, networking protocols, and other recent technologies like cloud infrastructure.
- Monitor/extract production performance statistics and apply the same model in the test environments with higher load to uncover performance issues.
- Must have experience in monitoring databases and highlighting performance issues.
- Good to have experience working on finance/banking domain projects.

Technical Skills:
- LoadRunner - HTTP/HTML, Web Services, and Java protocols
- Monitoring tools: AppDynamics/Dynatrace/CloudWatch/Splunk/Kibana/Grafana/Datadog
- Database - SQL or Mongo
- Unix basics
- Good understanding of cloud concepts - AWS/Azure

Good to Have:
1. Java
2. Performance Engineering/Tuning

Minimum Qualification:
- 15 years of full-time education
- Minimum percentile of 50% in 10th, 12th, UG & PG (if applicable)
Posted 1 week ago
Datadog, a popular monitoring and analytics platform, has been gaining traction in the tech industry in India. With the increasing demand for professionals skilled in Datadog, job opportunities are on the rise. In this article, we will explore the Datadog job market in India and provide valuable insights for job seekers looking to pursue a career in this field.
India's major tech hubs are known for their thriving tech industries and are actively hiring for Datadog roles.
The average salary range for Datadog professionals in India varies based on experience levels. Entry-level positions can expect a salary ranging from INR 4-6 lakhs per annum, while experienced professionals can earn upwards of INR 15 lakhs per annum.
A typical career path in Datadog may include roles such as Datadog Administrator, Datadog Developer, Datadog Consultant, and Datadog Architect. Progression usually follows a path from Junior Datadog Developer to Senior Datadog Developer, eventually leading to roles like Datadog Tech Lead or Datadog Manager.
In addition to proficiency in Datadog, professionals in this field are often expected to have skills in monitoring and analytics tools, cloud computing (AWS, Azure, GCP), scripting languages (Python, Bash), and knowledge of IT infrastructure.
With the increasing demand for Datadog professionals in India, now is a great time to explore job opportunities in this field. By honing your skills, preparing for interviews, and showcasing your expertise, you can confidently apply for Datadog roles and advance your career in the tech industry. Good luck!