Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
5.0 years
0 Lacs
Udaipur, Rajasthan, India
On-site
Arcgate is a dynamic and rapidly growing team of 2500+ professionals passionate about data and technology. We deliver cutting-edge solutions to clients ranging from the world's most innovative startups to market leaders, across application development, quality engineering, AI data preparation, data enrichment, search relevance, and content moderation.

Responsibilities
- Design, build, and optimize Python-based data pipelines that handle large, complex, and messy datasets efficiently.
- Develop and manage scalable data infrastructure, including databases and data warehouses such as Snowflake and Azure Data Factory, ensuring reliability and performance.
- Build, maintain, and optimize CDC processes that integrate data from multiple sources into the data warehouse.
- Collaborate closely with data scientists, analysts, and operations teams to gather requirements and deliver high-quality data solutions.
- Perform data quality checks, validation, and verification to ensure data integrity and consistency.
- Support and optimize data flows, ingestion, transformation, and publishing across various systems.
- Work with AWS infrastructure (ECS, RDS, S3), manage deployments using Docker, and package services into containers.
- Use tools like Prefect, Dagster, and dbt to orchestrate and transform data workflows.
- Implement CI/CD pipelines using Harness and GitHub Actions.
- Monitor system health and performance using DataDog.
- Manage infrastructure orchestration with Terraform and Terragrunt.
- Stay current with industry trends, emerging tools, and best practices in data engineering.
- Coach and mentor junior team members, promoting best practices and skill development.
- Contribute across diverse projects, demonstrating flexibility and adaptability.

Qualifications
- Bachelor's degree in Computer Science, Engineering, Mathematics, Physics, or a related field.
- 5+ years of demonstrable experience building reliable, scalable data pipelines in production environments.
- Strong experience with Python, SQL, and data architecture.
- Hands-on experience with data modeling in Data Lake or Data Warehouse environments (Snowflake preferred).
- Familiarity with Prefect, Dagster, dbt, and ETL/ELT pipeline frameworks.
- Experience with AWS services (ECS, RDS, S3) and containerization using Docker.
- Knowledge of TypeScript, React, and Node.js is a plus for collaborating on the application platform.
- Strong command of GitHub for source control and Jira for change management.
- Strong analytical and problem-solving skills, with a hands-on mindset for wrangling data and solving complex challenges.
- Excellent communication and collaboration skills; ability to work effectively with cross-functional teams.
- A proactive, start-up mindset: adaptable, ambitious, responsible, and ready to contribute wherever needed.
- Passion for delivering high-quality solutions with meticulous attention to detail.
- Enjoyment of working in an inclusive, respectful, and highly collaborative environment where every voice matters.

Benefits
- Competitive salary package.
- Opportunities for growth, learning, and professional development.
- Dynamic, collaborative, and innovation-driven work culture.
- Work with cutting-edge technologies and leading-edge startups.

Excited to turn data into insights and bring your expertise to the next level? Click the Apply button below and become an Arcgatian!
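The "data quality checks, validation, and verification" responsibility above can be sketched minimally. This is an illustrative example only, not Arcgate's actual pipeline code; the table name, key column, and thresholds are invented for the sketch.

```python
# Minimal sketch of a pre-publish data quality gate: enough rows, and a
# bounded rate of missing join keys. All names/thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class QualityReport:
    table: str
    row_count: int
    null_rate: float
    passed: bool

def check_table(table: str, rows: list, key: str,
                min_rows: int = 1, max_null_rate: float = 0.05) -> QualityReport:
    """Validate that a batch has enough rows and few enough missing keys."""
    nulls = sum(1 for r in rows if r.get(key) is None)
    null_rate = nulls / len(rows) if rows else 1.0
    passed = len(rows) >= min_rows and null_rate <= max_null_rate
    return QualityReport(table, len(rows), null_rate, passed)

# One bad key out of four rows: a 25% null rate, within the relaxed bound.
batch = [{"id": 1}, {"id": 2}, {"id": None}, {"id": 4}]
report = check_table("orders", batch, key="id", max_null_rate=0.5)
```

A real pipeline would run checks like this as a task inside the orchestrator (Prefect or Dagster) and fail the run before publishing when `passed` is false.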
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description

Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Consultant Specialist. In this role, you will:
- Provide end-user support, as part of a 24x7 rota, for our observability tools, namely AppDynamics, Splunk & DataDog, and for our wider WPB development community.
- Support the full system engineering life cycle, including requirements analysis, design, development, integration, test, documentation, and implementation, following defined best practices and operational workflows for both on-premises and AWS infrastructure.
- As an expert in Splunk, AppDynamics and Cloud, question current solutions and always think about ways to improve our codebase.
- Edit and maintain configuration files associated with the supporting tooling.
- Drive and own your infra mentality, upskilling yourself and those around you where needed, and using data to drive decisions.
- Drive deliveries forward whilst ensuring effective partnering between colleagues and stakeholders.
- Create and maintain documentation for the various services and processes we support.
- Ensure good governance, timely/accurate reporting and management of epics, stories & risks.
- Set up and/or manage multiple work streams depending on their size and complexity.
- Play a crucial part in a predominantly virtual team of Infra DevOps engineers.
- Perform code (peer) reviews for other team members, using them as an opportunity to encourage good practices in the team. Our team is always open to new ideas! Try to find ways to improve our day-to-day work with new automations or tools of your invention.

Requirements

To be successful in this role, you should meet the following requirements:
- Only candidates with 6+ years of experience should apply.
- Comfortable working in a multi-cultural/global environment.
- Experience supporting and building services in the public cloud (ideally AWS).
- Experience deploying infrastructure as code via Terraform.
- In-depth understanding of version control software such as GitHub.
- Scripting experience (Python, Bash, etc.).
- Knowledge of Splunk, AppDynamics & DataDog from an administrative perspective.
- Software deployment experience by way of continuous integration/continuous delivery (CI/CD) pipelines.
- Proven problem solver who can work independently or inclusively, taking ownership when required, and able to take a hands-on or hands-off view when managing problems.
- An ability to translate technical details into easily understood and consumable data/reports.
- An automation/change mentality and a drive for constant improvement via automated processes.
- A keen interest in the latest technology and an eagerness to learn.

You'll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.

Issued by – HSBC Software Development India
Posted 1 week ago
4.0 - 9.0 years
6 - 11 Lacs
Hyderabad
Work from Office
ABOUT AMGEN

Amgen harnesses the best of biology and technology to fight the world's toughest diseases, and make people's lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 45 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what's known today.

ABOUT THE ROLE

Role Description

We are seeking a detail-oriented and highly skilled Data Engineering Test Automation Engineer with deep expertise in the R&D domain of life sciences to ensure the quality, reliability, and performance of our data pipelines and platforms. The ideal candidate will have a strong background in data testing, ETL validation, and test automation frameworks. You will work closely with data engineers, analysts, and DevOps teams to build robust test suites for large-scale data solutions. This role combines deep technical execution with a solid foundation in QA best practices, including test planning, defect tracking, and test lifecycle management. You will be responsible for designing and executing manual and automated test strategies for complex real-time and batch data pipelines, contributing to the design of automation frameworks, and ensuring high-quality data delivery across our AWS- and Databricks-based analytics platforms. The role is highly technical and hands-on, with a strong focus on automation, data accuracy, completeness, and consistency, and on ensuring data governance practices are seamlessly integrated into development pipelines.

Roles & Responsibilities
- Design, develop, and maintain automated test scripts for data pipelines, ETL jobs, and data integrations.
- Validate data accuracy, completeness, transformations, and integrity across multiple systems.
- Collaborate with data engineers to define test cases and establish data quality metrics.
- Develop reusable test automation frameworks and CI/CD integrations (e.g., Jenkins, GitHub Actions).
- Perform performance and load testing for data systems.
- Maintain test data management and data mocking strategies.
- Identify and track data quality issues, ensuring timely resolution.
- Perform root cause analysis and drive corrective actions.
- Contribute to QA ceremonies (standups, planning, retrospectives) and drive continuous improvement in QA processes and culture.

Must-Have Skills
- Experience in QA roles, with strong exposure to data pipeline validation and ETL testing.
- Domain knowledge of the R&D domain of life sciences.
- Ability to validate data accuracy, transformations, schema compliance, and completeness across systems using PySpark and SQL.
- Strong hands-on experience with Python, and optionally PySpark, for developing automated data validation scripts.
- Proven experience in validating ETL workflows, with a solid understanding of data transformation logic, schema comparison, and source-to-target mapping.
- Experience working with data integration and processing platforms such as Databricks/Snowflake, AWS EMR, Redshift, etc.
- Experience in manual and automated testing of both batch and real-time data pipeline executions.
- Experience performing performance testing of large-scale, complex data engineering pipelines.
- Ability to troubleshoot data issues independently and collaborate with engineering teams on root cause analysis.
- Strong understanding of QA methodologies, test planning, test case design, and defect lifecycle management.
- Hands-on experience with API testing using Postman, pytest, or custom automation scripts.
- Experience integrating automated tests into CI/CD pipelines using tools like Jenkins, GitHub Actions, or similar.
- Knowledge of cloud platforms such as AWS, Azure, GCP.

Good-to-Have Skills
- Certifications in Databricks, AWS, Azure, or data QA (e.g., ISTQB).
- Understanding of data privacy, compliance, and governance frameworks.
- Knowledge of UI test automation frameworks like Selenium, JUnit, TestNG.
- Familiarity with monitoring/observability tools such as Datadog, Prometheus, or CloudWatch.

Education and Professional Certifications
- Master's degree and 3 to 7 years of Computer Science, IT or related field experience; or
- Bachelor's degree and 4 to 9 years of Computer Science, IT or related field experience.

Soft Skills
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Strong presentation and public speaking skills.
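The source-to-target mapping validation the posting describes can be sketched in plain Python (a real implementation would typically use PySpark or SQL against the warehouse). This is a hedged illustration; the data, key, and column names are invented.

```python
# Sketch of source-to-target ETL validation: compare row counts and per-key
# fingerprints between a source extract and the warehouse load.
import hashlib

def row_fingerprint(row: dict, columns: list) -> str:
    """Stable hash of the selected columns, for cheap row comparison."""
    joined = "|".join(str(row.get(c)) for c in columns)
    return hashlib.sha256(joined.encode()).hexdigest()

def validate_load(source: list, target: list, key: str, columns: list) -> dict:
    src = {r[key]: row_fingerprint(r, columns) for r in source}
    tgt = {r[key]: row_fingerprint(r, columns) for r in target}
    missing = sorted(set(src) - set(tgt))                 # rows lost in load
    mismatched = sorted(k for k in src.keys() & tgt.keys()
                        if src[k] != tgt[k])              # rows transformed wrongly
    return {"count_match": len(src) == len(tgt),
            "missing_keys": missing,
            "mismatched_keys": mismatched}

source = [{"id": 1, "amt": 10}, {"id": 2, "amt": 20}]
target = [{"id": 1, "amt": 10}, {"id": 2, "amt": 99}]   # id=2 loaded wrong
result = validate_load(source, target, key="id", columns=["amt"])
```

In a pipeline, a check like this would run as an automated test gate after each load, with failures surfaced through the CI/CD integration (Jenkins, GitHub Actions, or similar).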
Posted 1 week ago
4.0 - 9.0 years
9 - 14 Lacs
Bengaluru
Work from Office
Primary Skills
- Strong hands-on experience with observability tools like AppDynamics, Dynatrace, Prometheus, Grafana, and the ELK Stack.
- Proficient in AppDynamics setup, including installation, configuration, monitor creation, and integration with ServiceNow, email, and Teams.
- Ability to design and implement monitoring solutions for logs, traces, telemetry, and KPIs.
- Skilled in creating dashboards and alerts for application and infrastructure monitoring.
- Experience with AppDynamics features such as NPM, RUM, and synthetic monitoring.
- Familiarity with AWS and Kubernetes, especially in the context of observability.
- Scripting knowledge in Python or Bash for automation and tool integration.
- Understanding of ITIL processes and APM support activities.
- Good grasp of non-functional requirements like performance, capacity, and security.

Secondary Skills
- AppDynamics Performance Analyst or Implementation Professional certification.
- Experience with other APM tools like New Relic, Datadog, or Splunk.
- Exposure to CI/CD pipelines and integration of monitoring into DevOps workflows.
- Familiarity with infrastructure-as-code tools like Terraform or Ansible.
- Understanding of network protocols and troubleshooting techniques.
- Experience in performance tuning and capacity planning.
- Knowledge of compliance and audit requirements related to monitoring and logging.
- Ability to work in Agile/Scrum environments and contribute to sprint planning from an observability perspective.
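The "dashboards and alerts" skills above boil down to alert evaluation logic of this shape. A minimal sketch, assuming a static threshold with a breach count to avoid flapping; metric values and thresholds are invented, and real APM tools (AppDynamics, Prometheus) express this declaratively rather than in code.

```python
# Sketch: fire an alert only when the threshold is breached in at least
# min_breaches recent samples, so a single spike does not page anyone.
def evaluate_alert(samples: list, threshold: float, min_breaches: int = 3) -> bool:
    """Return True when enough samples exceed the threshold."""
    return sum(1 for s in samples if s > threshold) >= min_breaches

# CPU% samples: three of four exceed 90, so the alert fires.
fired = evaluate_alert([72.0, 91.5, 95.2, 93.8], threshold=90.0)
```

The same "N of M samples" idea appears in most APM tools' alert rule configuration, typically as an evaluation window plus a violation count.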
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
Remote
Job Description

Our team is responsible for the core User model for Zendesk. The team's mission is to optimize the in-product, customer and developer experience for representing our customers' Users at Zendesk. This role will be joining our team based out of Melbourne, Australia, as one of our foundational engineers in Pune. Our ideal candidate takes the lead on anything they work on: a self-starter with the initiative to drive a project from start to finish. If you are passionate about working on applications at scale, with immediate customer impact, our friendly and supportive team is for you.

Note: This is a hybrid role, combining remote and on-site work, requiring 3 days in the office, and relocation to Pune.

What You'll Get To Do Each Day
- Contribute to the code at the core of Zendesk's support product - what you write will reach millions of people each day!
- Take ownership of features and collaborate with the Tech Lead to design and implement complete solutions.
- Contribute to technical discussions and decision-making with other teams and partners across the global engineering organisation.
- Break complex features into granular pieces of work, to facilitate incremental feedback cycles.
- Prioritise and estimate work, balancing feature delivery with the management of tech debt.
- Participate in and lead activities such as pairing sessions and code reviews to facilitate continuous self-improvement for the whole team.
- Work closely with our designers and product managers to help define the future of the product.
- Triage customer issues in partnership with Product Managers.
- Provide operational support for our services in Production.
- Actively look for ways to improve the observability, performance, reliability and security of our services.

What You Bring To The Role
- Strong proficiency in Ruby on Rails. You'll be working on one of the largest Ruby on Rails codebases in the world!
- Ability to drive technical decision making and collaborate with other engineers and product managers as stakeholders on the decision.
- Ability to independently break down work into manageable tasks, and sequence dependencies between them.
- Solid experience with MySQL or similar; a knack for writing efficient queries and optimizing performance in high-traffic environments.
- A track record of delivering large-scale, high-quality, and resilient web systems.
- Strong verbal, written, and interpersonal communication skills in English - you'll collaborate with our other product teams around the globe. The ability to understand and communicate sophisticated concepts in a relevant and considerate manner, and to explain and reason about your technical decisions clearly and effectively to engineers of different levels and non-technical partners.
- Experience solving difficult problems across multiple systems.
- Experience coaching engineers, leading brainstorming discussions, and facilitating engineers working together to make decisions in a collaborative environment.
- Ability to influence without authority, and to inspire and mentor others.
- A culture of learning, growth, and innovation - you're comfortable jumping into unfamiliar codebases and languages.

Tech Stack

We primarily work in Ruby on Rails. Most of the team's data is stored in MySQL and Redis. We occasionally work in our frontend in React, being migrated from Ember, preferring new components to be written in TypeScript. We occasionally work in other adjacent services in Go and Java. Our services connect with other services via a combination of gRPC, REST APIs, Kafka event streams, and GraphQL. Our services are deployed to Kubernetes using Docker via Spinnaker, running on AWS. We don't require previous experience with these specific technologies; we're confident you can learn as we go. We monitor and observe our production systems using Datadog.
About Zendesk's Product Development Center Of Excellence In Pune

Zendesk is in the exciting early stages of establishing a Product Development Center of Excellence in Pune. This center is being developed through a BOT (Build-Operate-Transfer) model, allowing us to gradually build and scale our operations with a current mix of BOT workers and full-time employees (FTEs). Our vision is to create a vibrant, innovative hub where all team members will eventually transition into FTE roles. As an early hire, you will have a unique opportunity to be a pivotal part of this journey. You'll play a key role in shaping the culture, processes, and success of our Pune site, contributing directly to the growth and maturity of our Product Development Center of Excellence. Your expertise and insights will help lay the foundation for a world-class development center, influencing the way we build and deliver products at Zendesk.

Please note that Zendesk can only hire candidates who are physically located and plan to work from Karnataka or Maharashtra. Please refer to the location posted on the requisition for where this role is based.

Hybrid: In this role, our hybrid experience is designed at the team level to give you a rich onsite experience packed with connection, collaboration, learning, and celebration - while also giving you flexibility to work remotely for part of the week. This role must attend our local office for part of the week. The specific in-office schedule is to be determined by the hiring manager.

The Intelligent Heart Of Customer Experience

Zendesk software was built to bring a sense of calm to the chaotic world of customer service. Today we power billions of conversations with brands you know and love. Zendesk believes in offering our people a fulfilling and inclusive experience.
Our hybrid way of working, enables us to purposefully come together in person, at one of our many Zendesk offices around the world, to connect, collaborate and learn whilst also giving our people the flexibility to work remotely for part of the week. Zendesk is an equal opportunity employer, and we’re proud of our ongoing efforts to foster global diversity, equity, & inclusion in the workplace. Individuals seeking employment and employees at Zendesk are considered without regard to race, color, religion, national origin, age, sex, gender, gender identity, gender expression, sexual orientation, marital status, medical condition, ancestry, disability, military or veteran status, or any other characteristic protected by applicable law. We are an AA/EEO/Veterans/Disabled employer. If you are based in the United States and would like more information about your EEO rights under the law, please click here. Zendesk endeavors to make reasonable accommodations for applicants with disabilities and disabled veterans pursuant to applicable federal and state law. If you are an individual with a disability and require a reasonable accommodation to submit this application, complete any pre-employment testing, or otherwise participate in the employee selection process, please send an e-mail to peopleandplaces@zendesk.com with your specific accommodation request.
Posted 1 week ago
10.0 - 15.0 years
25 - 30 Lacs
Chandigarh, Bangalore Rural, Bengaluru
Work from Office
Performance Testing; JMeter; custom coding (Java/Python); load, stress, endurance, stability, and resiliency tests; test planning and test strategy; workload model design; test environment setup; test data setup; defect management; APM tools such as AppDynamics and DataDog
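Workload model design of the kind this posting lists usually starts from Little's Law: the number of concurrent virtual users N needed to hit a target throughput X is N = X * (R + Z), where R is average response time and Z is think time or pacing. A minimal sketch, with invented target figures:

```python
# Sketch: sizing a load-test thread group via Little's Law, N = X * (R + Z).
# Target TPS, response time, and pacing below are illustrative, not from
# any particular test plan.
import math

def required_threads(target_tps: float, avg_response_s: float,
                     pacing_s: float = 0.0) -> int:
    """Virtual users needed to sustain target_tps given per-iteration time."""
    return math.ceil(target_tps * (avg_response_s + pacing_s))

# 50 TPS at 0.8 s response plus 0.2 s pacing -> 50 concurrent users.
threads = required_threads(target_tps=50, avg_response_s=0.8, pacing_s=0.2)
```

In JMeter terms, the result maps to the thread count of a thread group, with pacing implemented via timers.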
Posted 1 week ago
8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Senior Engineer, Site Reliability Engineering (GG11) – Real Time

We are hiring Site Reliability Engineers (SREs) to expand a team supporting LSEG's Real Time application space, which provides thousands of clients access to trading and market data globally. This team efficiently mitigates operational risk, allowing for fast product innovation to improve customer experience. You will own operational support and automate away operational toil in our existing and new platform, create dynamic and reusable observability components, and cultivate subject matter expertise. You will be part of a diverse, globally distributed team providing “follow the sun” support, and will have the opportunity to hone skills in a fast-paced environment that processes real-time data.

Key Responsibilities:
- Direct the recovery of incidents, analyse the facts with speed, perform troubleshooting activities, and drive actions through others via incident recovery meetings.
- Continuously improve operations processes via toil automation.
- Ensure that all alerts are actionable, and where unnecessary alerts are found, drive their removal in coordination with development teams.
- Maintain a pristine production environment by delivering changes safely to production, ensuring deployments occur only after thorough assessment and adequate understanding of risks.
- Write informative incident reports and champion the interests of the customer in post-incident review discussions. Review and approve customer statements.
- As a subject matter expert, develop sound knowledge of application dataflows and networking topologies, and maintain troubleshooting knowledge bases.
- Closely collaborate with application development teams to ensure iterative improvements to production environments are prioritized appropriately in the product backlog.
- Act as the SRE front door for intake and delivery of new projects.
- Work with project delivery squads to ensure the produced project artifacts (architectural diagrams, runbooks, and other documentation) are high-quality and thoroughly reviewed; ensure application designs meet robust supportability standards; perform and sign off operational acceptance testing; and lead Game Day activities.

Technical qualifications:
- Extensive UNIX admin and scripting experience
- Advanced grasp of networking concepts like TCP/IP, HTTP, and DNS resolution
- Strong grasp of version control (Git)
- Experience in troubleshooting large distributed systems
- Experience in configuration and usage of Kubernetes, Docker, and container-based development and applications
- Proven experience using Python for process automation and tooling integration via APIs
- Practical experience supporting cloud-native applications (AWS or Azure preferred)
- Expertise in working with and maintaining observability tooling (DataDog and BigPanda experience is highly desirable)
- A bachelor's degree in computer science or a related technical field involving software/systems engineering, or equivalent practical experience
- Minimum 8 years of work experience in the industry

Does this sound like a challenge you're interested in taking on? Join us!

LSEG is a leading global financial markets infrastructure and data provider. Our purpose is driving financial stability, empowering economies and enabling customers to create sustainable growth. Our purpose is the foundation on which our culture is built. Our values of Integrity, Partnership, Excellence and Change underpin our purpose and set the standard for everything we do, every day. They go to the heart of who we are and guide our decision making and everyday actions. Working with us means that you will be part of a dynamic organisation of 25,000 people across 65 countries. However, we will value your individuality and enable you to bring your true self to work so you can help enrich our diverse workforce.
You will be part of a collaborative and creative culture where we encourage new ideas and are committed to sustainability across our global business. You will experience the critical role we have in helping to re-engineer the financial ecosystem to support and drive sustainable economic growth. Together, we are aiming to achieve this growth by accelerating the just transition to net zero, enabling growth of the green economy and creating inclusive economic opportunity. LSEG offers a range of tailored benefits and support, including healthcare, retirement planning, paid volunteering days and wellbeing initiatives. We are proud to be an equal opportunities employer. This means that we do not discriminate on the basis of anyone’s race, religion, colour, national origin, gender, sexual orientation, gender identity, gender expression, age, marital status, veteran status, pregnancy or disability, or any other basis protected under applicable law. Conforming with applicable law, we can reasonably accommodate applicants' and employees' religious practices and beliefs, as well as mental health or physical disability needs. Please take a moment to read this privacy notice carefully, as it describes what personal information London Stock Exchange Group (LSEG) (we) may hold about you, what it’s used for, and how it’s obtained, your rights and how to contact us as a data subject. If you are submitting as a Recruitment Agency Partner, it is essential and your responsibility to ensure that candidates applying to LSEG are aware of this privacy notice.
Posted 1 week ago
5.0 - 8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Company Description

About CyberArk: CyberArk (NASDAQ: CYBR) is the global leader in Identity Security. Centered on privileged access management, CyberArk provides the most comprehensive security offering for any identity – human or machine – across business applications, distributed workforces, hybrid cloud workloads and throughout the DevOps lifecycle. The world’s leading organizations trust CyberArk to help secure their most critical assets. To learn more about CyberArk, visit our CyberArk blogs or follow us on X, LinkedIn or Facebook.

Job Description

CyberArk DevOps Engineers are coders who enjoy a challenge and will be responsible for automating and streamlining our operations and processes, building and maintaining tools for deployment, monitoring, and operations, and troubleshooting and resolving issues in our dev, test, and production environments. As a DevOps Engineer, you will partner closely with software engineers, QA, and product teams to design and implement robust CI/CD pipelines, define infrastructure through code, and create tools that empower developers to ship high-quality features faster. You’ll actively contribute to cloud-native development practices, introduce automation wherever possible, and champion a culture of continuous improvement, observability, and developer experience (DX). Your day-to-day work will involve a mix of platform/DevOps engineering, build/release automation, Kubernetes orchestration, infrastructure provisioning, and monitoring/alerting strategy development. You will also help enforce secure coding and deployment standards, contribute to runbooks and incident response procedures, and help scale systems to support rapid product growth. This is a hands-on technical role that requires strong coding ability, cloud architecture experience, and a mindset that thrives on collaboration, ownership, and resilience engineering.
Qualifications Collaborate with developers to ensure seamless CI/CD workflows using tools like GitHub Actions, Jenkins CI/CD, and GitOps Write automation and deployment scripts in Groovy, Python, Go, Bash, PowerShell or similar Implement and maintain Infrastructure as Code (IaC) using Terraform or AWS CloudFormation Build and manage containerized applications using Docker and orchestrate using Kubernetes (EKS, AKS, GKE) Manage and optimize cloud infrastructure on AWS Implement automated security and compliance checks using the latest security scanning tools like Snyk, Checkmarx, and Codacy. Develop and maintain monitoring, alerting, and logging systems using Datadog, Prometheus, Grafana, Datadog, ELK, or Loki Drive observability and SLO/SLA adoption across services Support development teams in debugging, environment management, and rollout strategies (blue/green, canary deployments) Contribute to code reviews and build automation libraries for internal tooling and shared platforms Additional Information Requirements: 5-8 years of experience focused on DevOps Engineering, Cloud administration, or platform engineering, and application development Strong hands-on experience in: Linux/Unix and Windows OS Network architecture and security configurations Hands-on experience with the following scripting technologies: Automation/Configuration management using either Ansible, Puppet, Chef, or an equivalent Python, Ruby, Bash, PowerShell Hands-on experience with IAC (Infrastructure as code) like Terraform, CloudFormation Hands-on experience with Cloud infrastructure such as AWS, Azure, GCP Excellent communication skills, and strong attention to detail Strong hands-on technical abilities Strong computer literacy and/or the comfort, ability, and desire to advance technically Strong understanding of Information Security in various environments Demonstrated ability to assume sole and independent responsibilities Ability to keep track of numerous detail-intensive, interdependent 
tasks and ensure their accurate completion Preferred Tools & Technologies: Languages: Python, Go, Bash, YAML, PowerShell Version Control & CI/CD: Git, GitHub Actions, GitLab CI, Jenkins, GitOps IaC: Terraform, CloudFormation Containers: Docker, Kubernetes, Helm Monitoring & Logging: Datadog, Prometheus, Grafana, ELK/EFK Stack Cloud Platforms: AWS (EC2, ECS, EKS, Lambda, S3, Networking/VPC, cost optimization) Security: HashiCorp Vault, Trivy, Aqua, OPA/Gatekeeper Databases & Caches: PostgreSQL, MySQL, Redis, MongoDB Others: NGINX, Istio, Consul, Kafka
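The blue/green and canary rollout strategies this listing mentions can be sketched as a simple traffic-shifting gate in Python; the function name, step size, and error threshold below are illustrative assumptions, not the employer's actual tooling:

```python
def next_canary_weight(current, step=10, error_rate=0.0, threshold=0.01):
    """Return the next traffic percentage for the canary release,
    or 0 (full rollback to the stable version) if the observed
    error rate breaches the allowed threshold."""
    if error_rate > threshold:
        return 0  # gate failed: shift all traffic back to stable
    return min(100, current + step)

# A healthy canary ramps from 10% to 100% in fixed 10% steps
weight = 10
while 0 < weight < 100:
    weight = next_canary_weight(weight, step=10, error_rate=0.002)
print(weight)  # 100
```

A real pipeline would read the error rate from a monitoring system such as Datadog between steps; here it is passed in as a plain number so the gating logic is visible on its own.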
Posted 1 week ago
8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
This role is for one of Weekday's clients Min Experience: 8 years Location: Bengaluru JobType: full-time Requirements As an SDE-3 in AI/ML, you will: Translate business asks and requirements into technical requirements, solutions, architectures, and implementations Define clear problem statements and technical requirements by aligning business goals with AI research objectives Lead the end-to-end design, prototyping, and implementation of AI systems, ensuring they meet performance, scalability, and reliability targets Architect solutions for GenAI and LLM integrations, including prompt engineering, context management, and agentic workflows Develop and maintain production-grade code with high test coverage and robust CI/CD pipelines on AWS, Kubernetes, and cloud-native infrastructures Establish and maintain post-deployment monitoring, performance testing, and alerting frameworks to ensure performance and quality SLAs are met Conduct thorough design and code reviews, uphold best practices, and drive technical excellence across the team Mentor and guide junior engineers and interns, fostering a culture of continuous learning and innovation Collaborate closely with product management, QA, data engineering, DevOps, and customer-facing teams to deliver cohesive AI-powered product features Key Responsibilities Problem Definition & Requirements Translate business use cases into detailed AI/ML problem statements and success metrics Gather and document functional and non-functional requirements, ensuring traceability throughout the development lifecycle Architecture & Prototyping Design end-to-end architectures for GenAI and LLM solutions, including context orchestration, memory modules, and tool integrations Build rapid prototypes to validate feasibility, iterate on model choices, and benchmark different frameworks and vendors Development & Productionization Write clean, maintainable code in Python, Java, or Go, following software engineering best practices Implement
automated testing (unit, integration, and performance tests) and CI/CD pipelines for seamless deployments Optimize model inference performance and scale services using containerization (Docker) and orchestration (Kubernetes) Post-Deployment Monitoring Define and implement monitoring dashboards and alerting for model drift, latency, and throughput Conduct regular performance tuning and cost analysis to maintain operational efficiency Mentorship & Collaboration Mentor SDE-1/SDE-2 engineers and interns, providing technical guidance and career development support Lead design discussions, pair-programming sessions, and brown-bag talks on emerging AI/ML topics Work cross-functionally with product, QA, data engineering, and DevOps to align on delivery timelines and quality goals Required Qualifications Bachelor's or Master's degree in Computer Science, Engineering, or a related field 8+ years of professional software development experience, with at least 3 years focused on AI/ML systems Proven track record of architecting and deploying production AI applications at scale Strong programming skills in Python and one or more of Java, Go, or C++ Hands-on experience with cloud platforms (AWS, GCP, or Azure) and containerized deployments Deep understanding of machine learning algorithms, LLM architectures, and prompt engineering Expertise in CI/CD, automated testing frameworks, and MLOps best practices Excellent written and verbal communication skills, with the ability to distill complex AI concepts for diverse audiences Preferred Experience Prior experience building Agentic AI or multi-step workflow systems (using tools like LangGraph, CrewAI, or similar) Familiarity with open-source LLMs (e.g., Hugging Face hosted) and custom fine-tuning Familiarity with ASR (Speech to Text) and TTS (Text to Speech), and other multi-modal systems Experience with monitoring and observability tools (e.g.
Datadog, Prometheus, Grafana) Publications or patents in AI/ML or related conference presentations Knowledge of GenAI evaluation frameworks (e.g., Weights & Biases, CometML) Proven experience designing, implementing, and rigorously testing AI-driven voice agents, integrating with platforms such as Google Dialogflow, Amazon Lex, and Twilio Autopilot, and ensuring high performance and reliability What we offer Opportunity to work at the forefront of GenAI, LLMs, and Agentic AI in a fast-growing SaaS environment Collaborative, inclusive culture focused on innovation, continuous learning, and professional growth Competitive compensation, comprehensive benefits, and equity options Flexible work arrangements and support for professional development
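As a rough sketch of the post-deployment latency monitoring this role describes, the snippet below computes a p95 latency over a sample window and compares it against an SLA; the window size, threshold, and function names are invented for illustration:

```python
def p95_latency(samples):
    """95th-percentile latency (ms) from a window of samples,
    using the nearest-rank method."""
    ordered = sorted(samples)
    rank = max(1, round(0.95 * len(ordered)))
    return ordered[rank - 1]

def breaches_sla(samples, sla_ms=800):
    """True when the window's p95 exceeds the latency SLA."""
    return p95_latency(samples) > sla_ms

window = [120, 150, 900, 140, 130, 160, 110, 145, 155, 135]
print(p95_latency(window), breaches_sla(window))  # 900 True
```

A production version would pull these samples from a telemetry backend (e.g. Datadog) and feed the breach signal into an alerting rule rather than a print.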
Posted 1 week ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
SDET - Cypress Automation Engineers Roles & Responsibilities Test rich UI applications and implement test automation for them. Automate test cases using Cypress. Create test cases, test plans and define test strategies. Also automate APIs using Python. Skills 4+ years of experience in automation and manual testing. Good understanding of web application testing and API testing. Experience in Cypress or Playwright framework using JavaScript Exposure to Python language and automation using any framework with Python. In-depth knowledge of a variety of testing techniques and methodologies Experience setting up a CI solution using GitHub Actions Understanding of Docker containers JMeter or load testing experience, in general, using another load testing tool. API automation experience. Experience in using Postman Expertise in agile testing methodology and ALM tools such as JIRA. Excellent organization, communication, and interpersonal skills Strong analytical and problem-solving skills with the ability to work in an unstructured, fast-paced environment. Preferred Skills Experience with Datadog or any other similar tool. Page performance: WebPageTest API scripting (or a similar tool). Ability to write small test scripts as needed (Python or curl experience). Any AI automation creation or UI approach for manual testers to help create the automation tests. Experience with working with manual teams to debug scripts, train manual engineers to run the automation, conduct reviews of automation being run, and make recommendations to project teams.
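The Python API-automation skill in this listing can be illustrated with a small contract check on a JSON response; the payload shape and field rules are hypothetical, not a real API:

```python
import json

def validate_user_payload(raw):
    """Return a list of contract violations for a hypothetical
    /users API response (an empty list means the payload passes)."""
    data = json.loads(raw)
    errors = []
    if not isinstance(data.get("id"), int):
        errors.append("id must be an integer")
    if data.get("email", "").count("@") != 1:
        errors.append("email must contain exactly one '@'")
    return errors

good = '{"id": 7, "email": "qa@example.com"}'
bad = '{"id": "7", "email": "qa.example.com"}'
print(validate_user_payload(good))  # []
print(validate_user_payload(bad))   # two violations
```

In a real suite this check would sit behind pytest and an HTTP call to the service under test; keeping the validation pure makes it trivial to unit-test.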
Posted 1 week ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
About The Role Are you a detail-oriented problem solver with a keen analytical mind? Do you enjoy unraveling complex systems and ensuring software runs flawlessly? If so, we have an exciting challenge for you! We are looking for a QA Engineer to join our team and play a key role in maintaining the quality and performance of our Driivz products, a leading platform in the electric vehicle charging ecosystem. As part of our team, you'll be responsible for identifying issues, ensuring seamless functionality, and helping to shape the future of sustainable mobility. Your contribution Respond promptly to customer inquiries in different communication channels, e.g. ticketing system, calls, etc. Understand and troubleshoot all reported bugs and incidents, provide feedback to the customer, and work closely with Driivz internal teams (R&D, Product, CSM) Escalate issues in a timely manner to a higher support level when needed Maintain a positive and professional attitude towards clients Learn our product inside out to address technical issues in a timely and professional manner. Must have Professional working proficiency in English (Required). Working knowledge of Linux OS. Experience in cloud-based services (e.g. AWS, GCP). Knowledge and previous experience in SQL Experience in supporting remote devices (e.g. network access and configuration, device setup, work models, etc.). Experience in reproducing customers’ issues and leading debug sessions with customers or R&D. Proficiency with monitoring tools, e.g. Datadog, Kibana, Prometheus, or similar. Work experience in customer support in the tech industry (min. 3 years) Experience working with offshore teams (min.
2 years) Considered an advantage Bachelor’s degree in Computer Science or Engineering Knowledge and previous experience in Zendesk Ticketing system Who Is Gilbarco Veeder-Root Gilbarco Veeder-Root, a Vontier company, is the worldwide technology leader for retail and commercial fueling operations, offering the broadest range of integrated solutions from the forecourt to the convenience store and head office. For over 150 years, Gilbarco has earned the trust of its customers by providing long-term partnership, uncompromising support, and proven reliability. Major product lines include fuel dispensers, tank gauges and fleet management systems. Who Is Vontier Vontier (NYSE: VNT) is a global industrial technology company uniting productivity, automation and multi-energy technologies to meet the needs of a rapidly evolving, more connected mobility ecosystem. Leveraging leading market positions, decades of domain expertise and unparalleled portfolio breadth, Vontier enables the way the world moves – delivering smart, safe and sustainable solutions to our customers and the planet. Vontier has a culture of continuous improvement and innovation built upon the foundation of the Vontier Business System and embraced by colleagues worldwide. Additional information about Vontier is available on the Company’s website at www.vontier.com. At Vontier, we empower you to steer your career in the direction of success with a dynamic, innovative, and inclusive environment. Our commitment to personal growth, work-life balance, and collaboration fuels a culture where your contributions drive meaningful change. We provide the roadmap for continuous learning, allowing creativity to flourish and ideas to accelerate into impactful solutions that contribute to a sustainable future. Join our community of passionate people who work together to navigate challenges and seize opportunities.
At Vontier, you are not on this journey alone; we are dedicated to equipping you with the tools and support needed to fuel your innovation, lead with impact, and thrive both personally and professionally. Together, let’s enable the way the world moves!
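The SQL skill listed under must-haves might be exercised in support triage roughly like this; the schema and data are invented for illustration and are not the actual Driivz data model:

```python
import sqlite3

# In-memory stand-in for a production database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sessions (id INTEGER, station TEXT, status TEXT)")
conn.executemany(
    "INSERT INTO sessions VALUES (?, ?, ?)",
    [(1, "ST-01", "COMPLETED"), (2, "ST-01", "FAULTED"),
     (3, "ST-01", "FAULTED"), (4, "ST-02", "FAULTED")],
)

# Which charging stations report the most faulted sessions?
rows = conn.execute(
    "SELECT station, COUNT(*) AS faults FROM sessions "
    "WHERE status = 'FAULTED' GROUP BY station ORDER BY faults DESC"
).fetchall()
print(rows)  # [('ST-01', 2), ('ST-02', 1)]
```

The same aggregate query works unchanged against most SQL engines a support engineer would encounter, which is why it makes a handy reproducible-bug-report building block.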
Posted 1 week ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
TCS has been a great pioneer in feeding the fire of young techies like you. We are a global leader in the technology arena and there’s nothing that can stop us from growing together. What we are looking for Role: Performance Monitoring (Loadmeter/Jmeter) Experience Range: 4 – 8 Years Location: Chennai/Pune/Hyderabad. Candidates should come to the office for the Walk-in Drive (face-to-face interview). Weekend Walk-in Drive: 14-June-25 (Saturday) Timing: 9:30 AM to 12:30 PM Must-Have Good experience using the performance test tool LoadRunner and understanding of APM tools like AppDynamics/Dynatrace/New Relic, etc. Good hands-on experience in Web-HTTP, Java Vuser, Web service protocols Should have the ability to work independently in the requirement analysis, design, execution & result analysis phases. Develop customized code in Java & C for optimizing and enhancing VuGen scripts. Analyze test results and coordinate with development teams for issue triaging & bug fixes. Good understanding of different OS internals, file systems, disk/storage, networking protocols and other latest technologies like cloud infra. Monitor/extract production performance statistics and apply the same model in the test environments with higher load to uncover performance issues. Must have experience in monitoring DBs and highlighting performance issues. Good to have experience working on Finance/Banking domain projects. Technical Skills LoadRunner: HTTP/HTML/Web services protocol/Java protocol Monitoring Tools: AppDynamics/Dynatrace/CloudWatch/Splunk/Kibana/Grafana/Datadog Database: SQL or Mongo Unix basics Good understanding of cloud concepts: AWS/Azure Interested candidates please share your CV to the mail id c.nayana@tcs.com with subject "Performance Monitoring (Loadmeter/Jmeter)" for further discussion
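The workload-modelling side of this role relies on a standard pacing calculation (iteration pacing = virtual users × 3600 / target transactions per hour); a minimal Python version, with illustrative numbers:

```python
def pacing_seconds(target_tph, vusers):
    """Seconds between iteration starts so that `vusers` virtual
    users together generate `target_tph` transactions per hour."""
    return vusers * 3600 / target_tph

# 50 virtual users driving 9,000 transactions/hour means each
# user starts a new iteration every 20 seconds
print(pacing_seconds(9000, 50))  # 20.0
```

In LoadRunner this value would be entered as fixed pacing in the runtime settings; the arithmetic is the same regardless of the tool.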
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
Gurgaon
On-site
Performance Quality Engineer Gurgaon, India; Hyderabad, India Information Technology 316150 Job Description About The Role: Grade Level (for internal use): 09 Role: Performance Quality Engineer The Team Quality Engineering team works in partnership with other functions in Technology & the business to deliver quality products by providing software testing services and quality assurance that continuously improve our customers’ ability to succeed. The team is independent in driving all decisions and is responsible for the architecture, design and quick turnaround in development of our products with high quality. The team is located globally. The Impact You will ensure the quality of our deliverables meets and exceeds the expectations of all stakeholders and evangelize the established quality standards and processes. Your challenge will be reducing the “time to market” for products without compromising the quality, by leveraging technology and innovation. These products are directly associated to revenue growth and operations enablement. You strive to achieve personal objectives and contribute to the achievement of team objectives, by working on problems of varying scope where analysis of situations and/or data requires a review of a variety of factors. What’s in it for you Do you love working every single day testing enterprise-scale applications that serve a large customer base with growing demand and usage? Be part of a successful team which works on delivering top priority projects which will directly contribute to the Company’s strategy. You will use a wide range of technologies and have the opportunity to interact with different teams internally. You will also get plenty of learning and skill-building opportunities with participation in innovation projects, training and knowledge sharing.
You will have the opportunity to own and drive a project end to end and collaborate with developers, business analysts and product managers who are experts in their domain, which can help you to build multiple skillsets. Responsibilities Understand application architecture, system environments (ex: shared resources, components and services, CPU, memory, storage, network, etc.) to troubleshoot production performance issues. Ability to perform scalability & capacity planning. Work with multiple product teams to design, create, execute, and analyze performance tests, and recommend performance tuning. Support remediating performance bottlenecks of application front-end and database layers. Drive industry best practices in methodologies and standards of performance engineering, quality and CI/CD process. Understand user behaviors and analytics models and experience in using Kibana and Google Analytics Ensure optimally performing production applications by establishing application and transaction SLAs for performance, implementing proactive application monitoring, alarming and reporting, and ensuring adherence to and measurement against defined SLAs. Analyze, design, and develop performance specifications and scripts based on workflows. Ability to interpret network/system diagrams and results of performance tests and identify improvements. Leverage tools and frameworks to develop performance scripts with quality code to simplify testing scenarios. Focus on building efficient solutions for Web, Services/APIs, Database, mobile performance testing requirements. Deliver projects in the performance testing space and ensure delivery efficiency. Define testing methodologies & implement tooling best practices for continuous improvement and efficiency. Understand business scenarios in depth to define workload modelling for different scenarios. Complement the architecture community by providing inputs and pursuing suggested optimizations.
Competency to manage testing for highly integrated systems with multiple dependencies and moving parts. Active co-operation/collaboration with the teams at various geographic locations. Provide prompt response and support in resolving critical issues (along with the development team). May require after-hours/weekend work for production implementations. What we’re looking for: Bachelor's/PG degree in Computer Science, Information Systems or equivalent. 3-5 years of experience in performance testing/engineering or development with a good understanding of performance testing concepts. Experience in performance testing tools like Micro Focus StormRunner/LoadRunner/Performance Center, JMeter. Protocols: Web (HTTP/HTML), Ajax TruClient, Citrix, .NET Programming Languages: Java, C#, .NET, Python Working experience in CI/CD for performance testing. Debugging tools: Dev Tools, network sniffers, Fiddler, etc. Experience in monitoring, profiling and tuning tools, e.g. CA Wily Introscope, AppDynamics, Dynatrace, Datadog, Splunk etc. Experience in gathering Non-Functional Requirements (NFR) & strategy to achieve NFR and developing test plans. Experience in testing and optimizing high-volume web and batch-based transactional enterprise applications. Strong communication skills and ability to produce clear, concise and detailed documentation. Excellent problem solving, analytical and technical troubleshooting skills. Experience in refactoring test performance suites as necessary. Preferred Qualifications: Bachelor's or higher degree in a technology-related field. About S&P Global Market Intelligence At S&P Global Market Intelligence, a division of S&P Global, we understand the importance of accurate, deep and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction.
For more information, visit www.spglobal.com/marketintelligence. What’s In It For You? Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. 
That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. - Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment.
If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf - IFTECH202.1 - Middle Professional Tier I (EEO Job Group) Job ID: 316150 Posted On: 2025-06-04 Location: Gurgaon, Haryana, India
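The NFR-gathering and workload-modelling requirements in the listing above often reduce to Little's Law (N = X × (R + Z): concurrent users = throughput × (response time + think time)); a minimal sketch with made-up numbers:

```python
def concurrent_users(tps, response_s, think_s):
    """Little's Law estimate of the concurrent users needed to
    sustain `tps` transactions/second, given per-transaction
    response time and user think time (both in seconds)."""
    return tps * (response_s + think_s)

# 40 TPS with 0.5 s responses and 9.5 s think time needs ~400 users
print(concurrent_users(40, 0.5, 9.5))  # 400.0
```

This estimate typically seeds the virtual-user count in a LoadRunner or JMeter scenario before it is refined against measured response times.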
Posted 1 week ago
3.0 years
1 - 5 Lacs
Bengaluru
On-site
There’s nothing more exciting than being at the center of a rapidly growing field in technology and applying your skillsets to drive innovation and modernize the world's most complex and mission-critical systems. As a Site Reliability Engineer III at JPMorgan Chase within Corporate Technology, you will solve complex and broad business problems with simple and straightforward solutions. Through code and cloud infrastructure, you will configure, maintain, monitor, and optimize applications and their associated infrastructure to independently decompose and iteratively improve on existing solutions. You are a significant contributor to your team by sharing your knowledge of end-to-end operations, availability, reliability, and scalability of your application or platform. Job responsibilities Guides and assists others in the areas of building appropriate-level designs and gaining consensus from peers where appropriate Collaborates with other software engineers and teams to design and implement deployment approaches using automated continuous integration and continuous delivery pipelines Collaborates with other software engineers and teams to design, develop, test, and implement availability, reliability, and scalability solutions in their applications Implements infrastructure, configuration, and network as code for the applications and platforms in your remit Collaborates with technical experts, key stakeholders, and team members to resolve complex problems Understands service level indicators and utilizes service level objectives to proactively resolve issues before they impact customers Supports the adoption of site reliability engineering best practices within your team Required qualifications, capabilities, and skills Formal training or certification on Site Reliability concepts and 3+ years applied experience Knowledge of one or more general-purpose programming languages or automation scripting (Python, UNIX shell scripting, etc.).
Experience supporting public/private cloud-based applications. Experience in observability such as white and black box monitoring, service level objective alerting, and telemetry collection using tools such as Grafana, Dynatrace, Prometheus, Datadog, Splunk, and others. Experience with continuous integration and continuous delivery tools like Jenkins, GitLab, or Terraform Intermediate understanding of at least one programming language such as Python or Java; should be able to dive into code and guide developers on performance optimization and error fixing. Knowledge of source code management tools like Git, Bitbucket, and CI/CD tools like Jenkins. Ability to contribute to large and collaborative teams by presenting information in a logical and timely manner with compelling language and limited supervision Ability to proactively recognize roadblocks and demonstrate interest in learning technology that facilitates innovation. Ability to identify new technologies and relevant solutions to ensure design constraints are met by the software team. Ability to initiate and implement ideas to solve business problems and shift left towards SRE. Preferred qualifications, capabilities, and skills Familiarity with container and container orchestration such as Kubernetes, ECS
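The service-level-objective work in this role has a simple arithmetic core: an availability SLO implies a fixed error budget. A sketch (the function name and 30-day window are illustrative):

```python
def error_budget_minutes(slo, period_days=30):
    """Allowed downtime per period implied by an availability SLO,
    e.g. a 99.9% SLO over 30 days allows ~43 minutes of downtime."""
    return (1 - slo) * period_days * 24 * 60

print(round(error_budget_minutes(0.999), 1))   # 43.2
print(round(error_budget_minutes(0.9999), 2))  # 4.32
```

The budget is what lets a team trade reliability work against feature work: releases continue while budget remains and pause when it is spent.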
Posted 1 week ago
3.0 years
0 Lacs
Bengaluru
On-site
JOB DESCRIPTION There’s nothing more exciting than being at the center of a rapidly growing field in technology and applying your skillsets to drive innovation and modernize the world's most complex and mission-critical systems. As a Site Reliability Engineer III at JPMorgan Chase within Corporate Technology, you will solve complex and broad business problems with simple and straightforward solutions. Through code and cloud infrastructure, you will configure, maintain, monitor, and optimize applications and their associated infrastructure to independently decompose and iteratively improve on existing solutions. You are a significant contributor to your team by sharing your knowledge of end-to-end operations, availability, reliability, and scalability of your application or platform. Job responsibilities Write high-quality, maintainable, and well-tested software to develop reliable and repeatable solutions to complex problems. Collaborate with product development teams to design, implement and manage CI/CD pipelines to support reliable, scalable, and efficient software delivery. Partner with product development teams to capture and define meaningful service level indicators (SLIs) and service level objectives (SLOs). Develop and maintain monitoring, alerting, and tracing systems that provide comprehensive visibility into system health and performance. Contribute to design reviews to evaluate and strengthen architectural resilience, fault tolerance and scalability. Uphold incident response management best practices, champion blameless postmortems and continuous improvements. Debug, track, and resolve complex technical issues to maintain system integrity and performance.
Champion and drive the adoption of reliability and resiliency best practices Support the adoption of site reliability engineering best practices within your team Required qualifications, capabilities, and skills Formal training or certification on Site Reliability Engineer concepts and 3+ years applied experience Proficient in site reliability culture and principles and familiarity with how to implement site reliability within an application or platform Proficient in at least one programming language such as Python, Java/Spring Boot, and Go Experience in observability such as white and black box monitoring, service level objective alerting, and telemetry collection using tools such as Grafana, Dynatrace, Prometheus, Datadog, Splunk, and others Experience with continuous integration and continuous delivery tools like Jenkins, GitLab, or Terraform Familiarity with container and container orchestration such as ECS, Kubernetes, and Docker Solid understanding of networking concepts, including TCP/IP, routing, firewalls, and DNS. In-depth knowledge of Unix/Linux, including performance tuning, process and memory management, and filesystem operations. Ability to contribute to large and collaborative teams by presenting information in a logical and timely manner with compelling language and limited supervision Ability to proactively recognize roadblocks and demonstrate interest in learning technology that facilitates innovation Ability to identify new technologies and relevant solutions to ensure design constraints are met by the software team Preferred qualifications, capabilities, and skills Practical experience in building, supporting, and troubleshooting JVM-based applications, using tools like JConsole or VisualVM, and supporting SQL and in-memory database technologies. Experience working in the financial/fin-tech industry, with knowledge of performance and chaos testing tools such as Gremlin, Chaos Mesh, and LitmusChaos.
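The SLO alerting responsibility above is often implemented as multi-window burn-rate rules; this sketch follows the widely used Google SRE pattern, with the 14.4x factor and the window error rates as illustrative inputs:

```python
def burn_rate(error_rate, slo=0.999):
    """How fast the error budget is consumed: the observed error
    rate divided by the budget the SLO allows (1 - SLO)."""
    return error_rate / (1 - slo)

def should_page(short_rate, long_rate, slo=0.999, factor=14.4):
    """Page only when both the short and long windows burn fast;
    a sustained 14.4x burn would exhaust a 30-day budget in
    about two days."""
    return (burn_rate(short_rate, slo) >= factor
            and burn_rate(long_rate, slo) >= factor)

print(should_page(0.02, 0.016))   # True: both windows at >= 14.4x
print(should_page(0.02, 0.001))   # False: long window has recovered
```

Requiring both windows to breach suppresses pages for short blips while still catching fast, sustained budget burn.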
ABOUT US JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world’s most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management. We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants’ and employees’ religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation. ABOUT THE TEAM Our professionals in our Corporate Functions cover a diverse range of areas from finance and risk to human resources and marketing. Our corporate teams are an essential part of our company, ensuring that we’re setting our businesses, clients, customers and employees up for success.
Posted 1 week ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description We’re looking for a Data Engineer II to join Procore’s Product & Technology Team. Procore software solutions aim to improve the lives of everyone in construction, and the people within Product & Technology are the driving force behind our innovative, top-rated global platform. We’re a customer-centric group that encompasses engineering, product, product design and data, security and business systems. Data engineers are responsible for implementing critical projects, including the design and operation of Procore's streaming and batch data processing pipelines and creating domain benchmarks and insights. We're looking for a motivated engineer with at least 2 years of experience. You must be comfortable operating in a high-autonomy environment and deploying technologies that are new to our organization, and you will drive solutions to wide-ranging data engineering and infrastructure challenges for product and internal operations. You will partner with world-class developers, engineers, architects, and data scientists to drive thinking, provide technical leadership, and collaborate in defining best practices around data engineering. You will also work alongside local product management, engineering, and research teams to develop innovative solutions that will influence our product line.
Examples of our projects:
- An ETL pipeline for our data lake consisting of batch processing, orchestration with Airflow, monitoring with Datadog, and alerting with Slack
- A Maven package used by all Product Dev teams for building Kafka consumers, with built-in support for configuration, error reporting, monitoring, deserialization, gRPC, Spark, Flink, and Kubernetes
- A multi-stage data lake including landing, process, and serving zones

Some Of Your Responsibilities Include:
- Partner with teams on modeling and analysis problems – from transforming problem statements into analysis problems, to working through data modeling and engineering, to analysis and communication of results
- Conduct code reviews and champion design best practices
- Use the expertise gained above to influence our product roadmap, potentially working with the prototype engineering team to add capabilities to our products that solve more of these problems

Who You Are:
- 2+ years of experience in a Data/ML Engineer role
- Degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field, or equivalent relevant experience
- Expertise building data pipelines (real-time and batch) on large, complex datasets using Spark or Flink
- Experience with AWS services including EC2, S3, Glue, EMR, RDS, Snowflake, Elasticsearch, Cassandra, and data pipeline/streaming tools (Airflow, NiFi, Kafka)
- Experience building and optimizing data pipelines, architectures, and data sets, with a successful history of manipulating, processing, and extracting value from large, disconnected datasets
- Deep knowledge of stream processing using Kafka and highly scalable ‘big data’ data stores
- Team player with experience supporting and working with cross-functional teams in a dynamic environment
- End-to-end data quality control and automated testing experience

Preferred:
- Experience with unstructured data (PDF, contract, plan, image) and data transformation (quality, extraction)
- Experience working within a team handling the full data pipeline, from extraction to the data warehouse

Additional Information
Perks & Benefits: At Procore, we invest in our employees and provide a full range of benefits and perks to help you grow and thrive. From generous paid time off and healthcare coverage to career enrichment and development programs, learn more details about what we offer and how we empower you to be your best.

About Us: Procore Technologies is building the software that builds the world. We provide cloud-based construction management software that helps clients more efficiently build skyscrapers, hospitals, retail centers, airports, housing complexes, and more. At Procore, we have worked hard to create and maintain a culture where you can own your work and are encouraged and given resources to try new ideas. Check us out on Glassdoor to see what others are saying about working at Procore. We are an equal-opportunity employer and welcome builders of all backgrounds. We thrive in a dynamic and inclusive environment. We do not tolerate discrimination against candidates or employees on the basis of gender, sex, national origin, civil status, family status, sexual orientation, religion, age, disability, race, traveler community, status as a protected veteran or any other classification protected by law. If you'd like to stay in touch and be the first to hear about new roles at Procore, join our Talent Community. Alternative methods of applying for employment are available to individuals unable to submit an application through this site because of a disability. Contact our benefits team here to discuss reasonable accommodations.
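The multi-stage data lake mentioned in the project examples above (landing, process, and serving zones) can be sketched as a simple batch step. This is an illustrative sketch only; the zone names, record fields, and validation rule are assumptions for the example, not Procore's actual implementation.

```python
# Illustrative landing -> process -> serving batch step.
# Field names and the validation rule are assumptions for the example.

def process_zone(landing_records):
    """Validate raw landing-zone records, keeping only well-formed rows."""
    processed = []
    for rec in landing_records:
        # Drop rows missing the keys downstream consumers rely on.
        if rec.get("id") is not None and rec.get("amount") is not None:
            processed.append({"id": rec["id"], "amount": float(rec["amount"])})
    return processed

def serving_zone(processed_records):
    """Aggregate processed rows into a serving-ready total per id."""
    totals = {}
    for rec in processed_records:
        totals[rec["id"]] = totals.get(rec["id"], 0.0) + rec["amount"]
    return totals

landing = [
    {"id": "a", "amount": "10.5"},
    {"id": "b", "amount": None},   # rejected in the process zone
    {"id": "a", "amount": "4.5"},
]
summary = serving_zone(process_zone(landing))
print(summary)  # {'a': 15.0}
```

In a production pipeline each zone would be a separate storage layer (e.g. raw files, validated tables, aggregates), with an orchestrator scheduling the steps and a monitor alerting on rejected-row counts.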
Posted 1 week ago
0 years
0 - 1 Lacs
Bengaluru
On-site
Get to know Okta
Okta is The World’s Identity Company. We free everyone to safely use any technology—anywhere, on any device or app. Our Workforce and Customer Identity Clouds enable secure yet flexible access, authentication, and automation that transforms how people move through the digital world, putting Identity at the heart of business security and growth. At Okta, we celebrate a variety of perspectives and experiences. We are not looking for someone who checks every single box - we’re looking for lifelong learners and people who can make us better with their unique experiences. Join our team! We’re building a world where Identity belongs to you.

Okta is seeking an experienced Software Test Engineer for our Identity Management Quality Engineering team. As part of product quality engineering, you will ensure product releases ship with the highest quality and reliability. Automation at every level is key to faster, more robust, and more secure releases. The ideal candidate has solid experience in Java automation development, has worked on scaling environments, and has shown a passion to learn.
Job Duties and Responsibilities:
- Automate API tests, end-to-end tests, and reliability/scale tests
- Review requirements and design specs to develop related test plans and test cases
- Work with engineering management to scope and plan engineering efforts
- Clearly communicate and document QE plans for scrum teams to review and comment on
- Automate all critical features to maintain a zero-debt cadence
- Release features to customers with solid quality
- Respond to production issues/alerts and customer issues during on-call rotation
- Help mentor new hires and interns

Minimum REQUIRED Knowledge, Skills, and Abilities:
- 6+ months of quality engineering, with hands-on test automation experience
- 6+ months of experience with Selenium and/or API testing using Java
- 6+ months of experience with performance testing using JMeter
- 6+ months of experience with monitoring tools (Splunk, Datadog, Grafana), SQL, and Unix
- Experience developing high-quality automation and software tests
- Ability to test software with minimal supervision and guidance
- Ability to quickly learn new technologies and provide input

Education and Training: B.S. in Computer Science or related field #LI-ASITRAY

What you can look forward to as a Full-Time Okta employee!
- Amazing Benefits
- Making Social Impact
- Developing Talent and Fostering Connection + Community at Okta

Okta cultivates a dynamic work environment, providing the best tools, technology and benefits to empower our employees to work productively in a setting that best and uniquely suits their needs. Each organization is unique in the degree of flexibility and mobility in which they work so that all employees are enabled to be their most creative and successful versions of themselves, regardless of where they live. Find your place at Okta today! https://www.okta.com/company/careers/. Some roles may require travel to one of our office locations for in-person onboarding. Okta is an Equal Opportunity Employer.
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, marital status, age, physical or mental disability, or status as a protected veteran. We also consider for employment qualified applicants with arrest and conviction records, consistent with applicable laws. If reasonable accommodation is needed to complete any part of the job application, interview process, or onboarding, please use this Form to request an accommodation. Okta is committed to complying with applicable data privacy and security laws and regulations. For more information, please see our Privacy Policy at https://www.okta.com/privacy-policy/. Okta The foundation for secure connections between people and technology Okta is the leading independent provider of identity for the enterprise. The Okta Identity Cloud enables organizations to securely connect the right people to the right technologies at the right time. With over 7,000 pre-built integrations to applications and infrastructure providers, Okta customers can easily and securely use the best technologies for their business. More than 19,300 organizations, including JetBlue, Nordstrom, Slack, T-Mobile, Takeda, Teach for America, and Twilio, trust Okta to help protect the identities of their workforces and customers.
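API test automation of the kind the Okta role describes often starts with small, framework-independent response-validation helpers. The sketch below shows that pattern in plain Python; the field names and expected status code are invented for the example and are not Okta's API.

```python
# Minimal response-validation helper of the kind used in API test suites.
# The expected fields and status code are assumptions for the example.
def validate_user_response(resp: dict) -> list[str]:
    """Return a list of validation failures; an empty list means pass."""
    failures = []
    if resp.get("status") != 200:
        failures.append(f"unexpected status: {resp.get('status')}")
    body = resp.get("body", {})
    for field in ("id", "email", "active"):
        if field not in body:
            failures.append(f"missing field: {field}")
    return failures

ok = {"status": 200, "body": {"id": "u1", "email": "a@example.com", "active": True}}
bad = {"status": 500, "body": {"id": "u2"}}
print(validate_user_response(ok))   # []
print(validate_user_response(bad))  # 3 failures: bad status, missing email, missing active
```

In a real suite the `resp` dict would come from an HTTP client, and each helper would be wrapped in a test-framework assertion so failures report clearly.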
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
India
Remote
Python JD

Role Summary:
We are seeking a skilled Python Developer with strong experience in data engineering, distributed computing, and cloud-native API development. The ideal candidate will have hands-on expertise in Apache Spark, Pandas, and workflow orchestration using Airflow or similar tools, along with deep familiarity with AWS cloud services. You’ll work with cross-functional teams to build, deploy, and manage high-performance data pipelines, APIs, and ML integrations.

Key Responsibilities:
- Develop scalable and reliable data pipelines using PySpark and Pandas.
- Orchestrate data workflows using Apache Airflow or similar tools (e.g., Prefect, Dagster, AWS Step Functions).
- Design, build, and maintain RESTful and GraphQL APIs that support backend systems and integrations.
- Collaborate with data scientists to deploy machine learning models into production.
- Build cloud-native solutions on AWS, leveraging services like S3, Glue, Lambda, EMR, RDS, and ECS.
- Support microservices architecture with containerized deployments using Docker and Kubernetes.
- Implement CI/CD pipelines and maintain version-controlled, production-ready code.

Required Qualifications:
- 3–5 years of experience in Python programming with a focus on data processing.
- Expertise in Apache Spark (PySpark) and Pandas for large-scale data transformations.
- Experience with workflow orchestration using Airflow or similar platforms.
- Solid background in API development (RESTful and GraphQL) and microservices integration.
- Proven hands-on experience with AWS cloud services and cloud-native architectures.
- Familiarity with containerization (Docker) and CI/CD tools (GitHub Actions, CodeBuild, etc.).
- Excellent communication and cross-functional collaboration skills.

Preferred Skills:
- Exposure to infrastructure as code (IaC) tools like Terraform or CloudFormation.
- Experience with data lake/warehouse technologies such as Redshift, Athena, or Snowflake.
- Knowledge of data security best practices, IAM role management, and encryption.
- Familiarity with monitoring/logging tools like Datadog, CloudWatch, or Prometheus.

PySpark, Pandas, and data transformation or workflow experience is a must – at least 2 years.

Pay: Attractive salary. Interested candidates can call or WhatsApp their resume to 9092626364.
Job Type: Full-time
Benefits: Cell phone reimbursement, work from home
Schedule: Day shift, weekend availability
Work Location: In person
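The workflow-orchestration idea behind tools like Airflow, Prefect, and Dagster comes down to declaring tasks and their upstream dependencies as a DAG, then running them in topological order. The standard-library sketch below illustrates only that ordering concept; the task names are made up for the example and this is not the API of any of those tools.

```python
# Minimal DAG-ordering sketch using only the standard library; it
# illustrates dependency ordering, not a real orchestrator's API.
from graphlib import TopologicalSorter

# task -> set of upstream tasks it depends on (illustrative names)
dag = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
    "report": {"load"},
}

order = list(TopologicalSorter(dag).static_order())
print(order)  # ['extract', 'transform', 'load', 'report']
```

A real orchestrator adds scheduling, retries, backfills, and observability on top of this ordering, which is why teams reach for Airflow or Dagster rather than hand-rolling it.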
Posted 1 week ago
10.0 years
0 Lacs
Noida
On-site
Company Summary:
DISH Network Technologies India Pvt. Ltd is a technology subsidiary of EchoStar Corporation. Our organization is at the forefront of technology, serving as a disruptive force and driving innovation and value on behalf of our customers. Our product portfolio includes Boost Mobile (consumer wireless), Boost Mobile Network (5G connectivity), DISH TV (Direct Broadcast Satellite), Sling TV (Over The Top service provider), OnTech (smart home services), Hughes (global satellite connectivity solutions) and Hughesnet (satellite internet). Our facilities in India are some of EchoStar’s largest development centers outside the U.S. As a hub for technological convergence, our engineering talent is a catalyst for innovation in multimedia network and communications development.

Summary:
Boost Mobile is our cutting-edge, standalone 5G broadband network that covers over 268 million Americans and a brand under EchoStar Corporation (NASDAQ: SATS). Our mobile carrier’s cloud-native O-RAN 5G network delivers lightning-fast speeds, reliability, and coverage on the latest 5G devices. Recently, Boost Mobile was named the #1 Network in New York City, according to umlaut’s latest study!
Job Duties and Responsibilities:
- Manage public cloud infrastructure deployments, handle Jira, and troubleshoot issues
- Lead DevOps initiatives for US customers, focusing on AWS and 5G network functions
- Develop technical documentation and support root cause analysis
- Deploy 5G network functions in AWS environments
- Provide expertise in Kubernetes and EKS for container orchestration
- Apply extensive experience with AWS services (EC2, ELB, VPC, RDS, DynamoDB, IAM, CloudFormation, S3, CloudWatch, CloudTrail, CloudFront, SNS, SQS, SWF, EBS, Route 53, Lambda)
- Orchestrate Docker containers with Kubernetes for scalable deployments
- Automate 5G application deployments using AWS CodePipeline (CodeCommit/CodeBuild/CodeDeploy)
- Implement and operate containerized cloud application platform solutions
- Focus on cloud-ready, distributed application architectures, containerization, and CI/CD pipelines
- Work on automation and configuration as code for foundational architecture related to connectivity across cloud service providers
- Design, configure, and manage cloud infrastructures using AWS services
- Apply experience with EC2, ELB, EMR, S3 CLI, and API scripting
- Use strong knowledge of Kubernetes operational building blocks (Kube API, Kube Scheduler, Kube Controller Manager, etcd)
- Provide solutions to common Kubernetes errors (CreateContainerConfigError, ImagePullBackOff, CrashLoopBackOff, Node Not Ready)
- Administer and automate Linux/UNIX systems
- Work with cloud and virtualization technologies (Docker, Azure, AWS, VMware)
- Support cloud-hosted systems 24/7, including troubleshooting and root cause analysis
- Configure Kubernetes clusters for networking, load balancing, pod security, and certificate management
- Configure monitoring tools (Datadog, Dynatrace, AppDynamics, ELK, Grafana, Prometheus)
- Participate in design reviews of architecture patterns for service/application deployment in AWS

Skills, Experience and Requirements:

Education and Experience:
- Bachelor's or Master's degree in Computer Science, Computer Engineering, or a related technical field
- 10+ years of related experience, or an equivalent combination of education and experience
- 4+ years of experience supporting public cloud platforms
- 4+ years of experience with cloud system integration, support, and automation

Skills and Qualifications:
- Excellent verbal and written communication
- Operational experience with infrastructure as code solutions and tools such as Ansible, Terraform, and CloudFormation
- Deep understanding of DevOps and agile methodologies
- Ability to work well under pressure and manage tight deadlines
- Proven track record of operational process change and improvement
- Deep understanding of distributed systems and microservices
- AWS certifications (Associate level or higher) are a plus
- Kubernetes certifications are a plus
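The common pod failure states named in the responsibilities above each point to a different first check during triage. The mapping below is an illustrative Python sketch of that triage logic; the suggested checks are rules of thumb for the example, not an official Kubernetes runbook or API.

```python
# Illustrative triage map for common Kubernetes pod failure states.
# The suggested first checks are rules of thumb, not an exhaustive runbook.
FIRST_CHECKS = {
    "CreateContainerConfigError": "verify referenced ConfigMaps/Secrets exist",
    "ImagePullBackOff": "check image name, tag, and registry credentials",
    "CrashLoopBackOff": "inspect container logs for the crashing process",
    "NodeNotReady": "check kubelet status and node resource pressure",
}

def triage(pod_state: str) -> str:
    """Return the suggested first check for a pod state."""
    return FIRST_CHECKS.get(pod_state, "describe the pod and review recent events")

print(triage("ImagePullBackOff"))  # check image name, tag, and registry credentials
```

In practice this kind of mapping lives in on-call documentation or alert annotations, so responders landing on a Datadog or Prometheus alert see the first check alongside the failing state.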
Posted 1 week ago
3.0 - 6.0 years
5 - 5 Lacs
Noida
On-site
Location: Noida
Experience: 3 to 6 years
No. of Openings: 1

Job Description
- Work on a team building cloud platform tools and solutions for HPC applications.
- Collaborate with other engineers to define the strategy and technical platform roadmap, and drive the rapid implementation of appropriate technologies.
- Encourage value-driven innovation in the current framework and processes to continuously improve the efficiency of product development.
- Partner with client teams to prepare for the timely and smooth acceptance of deliverables into a production environment.
- Evaluate new tools and technologies based on current and future feature requirements, performance, cost effectiveness, and reliability.
- Work closely with development teams to understand requirements and apply industry knowledge to recommend build/buy solutions.
- Execute all release engineering aspects of DevOps, including configuration management, build and deployment management, and continuous integration and delivery.
- Review existing solutions with a fresh perspective to suggest improvements and optimizations.

Job Specification
Technologies we use: Amazon AWS, Azure & Azure DevOps, GCP, Kubernetes, Helm, Python, Terraform, PostgreSQL, Jenkins, Ubuntu Linux, Windows Server, Splunk, PagerDuty, Grafana, Prometheus, Bicep, CloudFormation, Datadog, Elasticsearch
- BS or MS in Computer Science or related technical discipline (or equivalent experience).
- Experience with cloud delivery platforms: AWS, Azure, GCP.
- Hands-on experience with one or more programming languages such as Python.
- Working knowledge of running and tuning large-scale applications in production.
- Hands-on experience with Kubernetes.
- Hands-on experience with CI/CD tooling such as Jenkins.
- Attention to detail in code and output, and to operational excellence.
- Strong interpersonal skills to coordinate with other team members.
- Experience coaching and mentoring software engineers for technical and professional growth is a plus.
Posted 1 week ago
5.0 - 8.0 years
7 - 8 Lacs
Ahmedabad
On-site
Senior Full Stack Developer (Python, JavaScript, AWS, Cloud Services, Azure)
Ahmedabad, India; Hyderabad, India
Information Technology
315432

Job Description
About The Role: Grade Level (for internal use): 10

The Team: S&P Global is a global market leader in providing information, analytics and solutions for industries and markets that drive economies worldwide. The Market Intelligence (MI) division is the largest division within the company. This is an opportunity to join the MI Data and Research’s Data Science Team, which is dedicated to developing cutting-edge Data Science and Generative AI solutions. We are a dynamic group that thrives on innovation and collaboration, working together to push the boundaries of technology and deliver impactful solutions. Our team values inclusivity, continuous learning, and the sharing of knowledge to enhance our collective expertise.

Responsibilities and Impact:
- Develop and productionize cloud-based services and full-stack applications utilizing NLP solutions, including GenAI models.
- Implement and manage CI/CD pipelines to ensure efficient and reliable software delivery.
- Automate cloud infrastructure using Terraform.
- Write unit tests, integration tests, and performance tests.
- Work in a team environment using agile practices.
- Support administration of the data science experimentation environment, including AWS SageMaker and Nvidia GPU servers.
- Monitor and optimize application performance and infrastructure costs.
- Collaborate with data scientists and other developers to integrate and deploy data science models into production environments.
- Educate others to improve coding standards, code quality, test coverage, and documentation.
- Work closely with cross-functional teams to ensure seamless integration and operation of services.

What We’re Looking For:
Basic Required Qualifications:
- 5-8 years of experience in software engineering
- Proficiency in Python and JavaScript for full-stack development.
- Experience writing and maintaining high-quality code, using techniques like unit testing and code reviews
- Strong understanding of object-oriented design and programming concepts
- Strong experience with AWS cloud services, including EKS, Lambda, and S3
- Knowledge of Docker containers and orchestration tools, including Kubernetes
- Experience with monitoring, logging, and tracing tools (e.g., Datadog, Kibana, Grafana)
- Knowledge of message queues and event-driven architectures (e.g., AWS SQS, Kafka)
- Experience with CI/CD pipelines in Azure DevOps and GitHub Actions

Additional Preferred Qualifications:
- Experience writing front-end web applications using JavaScript and React
- Familiarity with infrastructure as code (IaC) using Terraform
- Experience with Azure or GCP cloud services
- Proficiency in C# or Java
- Experience with SQL and NoSQL databases
- Knowledge of Machine Learning concepts
- Experience with Large Language Models

About S&P Global Market Intelligence
At S&P Global Market Intelligence, a division of S&P Global, we understand the importance of accurate, deep and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction. For more information, visit www.spglobal.com/marketintelligence.

What’s In It For You?
Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.
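The event-driven architectures listed in the qualifications above share one core loop: a consumer pulls messages from a queue and dispatches each to a handler. The sketch below uses the standard library's `queue.Queue` purely as a stand-in for a real broker such as SQS or Kafka; the event payloads are invented for the example.

```python
# Tiny in-process sketch of an event-driven consumer loop; queue.Queue
# stands in for a real broker (SQS, Kafka) purely for illustration.
import queue

events = queue.Queue()
for payload in ({"type": "created", "id": 1}, {"type": "deleted", "id": 2}):
    events.put(payload)

handled = []

def handle(event: dict) -> None:
    # A real consumer would ack/commit only after successful processing.
    handled.append((event["type"], event["id"]))

while not events.empty():
    handle(events.get())

print(handled)  # [('created', 1), ('deleted', 2)]
```

With a real broker, the loop is long-running, processing is idempotent where possible, and failed messages are retried or routed to a dead-letter queue instead of being silently dropped.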
Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. 
S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. - Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf - 20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning) Job ID: 315432 Posted On: 2025-06-02 Location: Ahmedabad, Gujarat, India
Posted 1 week ago
0 years
0 Lacs
Andhra Pradesh
On-site
Key Responsibilities
- Build and maintain backend services in Python, writing clean, maintainable, and well-tested code.
- Develop and scale public APIs, ensuring high performance and reliability.
- Work with GraphQL services, contributing to schema design and implementation of queries, mutations, and resolvers.
- Collaborate cross-functionally with frontend, product, and DevOps teams to ship features end-to-end.
- Containerize services using Docker and support deployments within Kubernetes environments.
- Use GitHub Actions to manage CI/CD workflows, including test automation and deployment pipelines.
- Participate in code reviews, standups, and planning sessions as part of an agile development process.
- Take ownership of features and deliverables with guidance from senior engineers.

Required Skills
- Python expertise: Strong grasp of idiomatic Python, async patterns, type annotations, unit testing, and modern libraries.
- API development: Experience building and scaling RESTful and/or GraphQL APIs in production.
- GraphQL proficiency: Familiarity with frameworks like Strawberry, Graphene, or similar.
- Containerization: Hands-on experience with Docker and container-based development workflows.
- GitHub Actions CI/CD: Working knowledge of GitHub Actions for automating tests and deployments.
- Team collaboration: Effective communicator with a proactive, self-directed work style.

Preferred Qualifications
- Kubernetes: Experience deploying or troubleshooting applications in Kubernetes environments.
- AWS: Familiarity with AWS services such as ECS, EKS, S3, RDS, or Lambda.
- Healthcare: Background in the healthcare industry or building patient-facing applications.
- Monitoring and security: Familiarity with observability tools (e.g., Datadog, Prometheus) and secure coding practices.

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody.
When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects and opportunities, and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
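The resolver idea behind GraphQL frameworks like Strawberry or Graphene, mentioned in the skills above, can be illustrated with a stripped-down dispatch table: each schema field maps to a function that produces its value. This is a conceptual sketch only; the field names are invented and this is not the actual API of either library.

```python
# Conceptual resolver-dispatch sketch; real frameworks (Strawberry,
# Graphene) generate this plumbing from typed schema classes.
RESOLVERS = {
    "patient": lambda args: {"id": args["id"], "name": f"patient-{args['id']}"},
    "ping": lambda args: "pong",
}

def execute(field, args=None):
    """Resolve a single top-level field, mimicking a GraphQL query root."""
    resolver = RESOLVERS.get(field)
    if resolver is None:
        raise ValueError(f"unknown field: {field}")
    return resolver(args or {})

print(execute("ping"))                  # pong
print(execute("patient", {"id": "7"}))  # {'id': '7', 'name': 'patient-7'}
```

In Strawberry, for instance, the same shape is expressed as typed classes with decorated resolver methods, and the framework handles query parsing, argument coercion, and nested field resolution.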
Posted 1 week ago
12.0 years
0 Lacs
India
Remote
Spreetail propels brands to increase their ecommerce market share across the globe while improving their operational costs. Learn how we are building one of the fastest-growing ecommerce companies in history: www.spreetail.com.

As a Principal Software Engineer, you’ll lead a cross-functional team to build and scale Merch Tech’s data-driven platforms that drive decision-making for hundreds of vendors. You’ll influence product, strategy, and tech direction, and collaborate with executive stakeholders in Merchandising, Supply Chain, and Brand Management. This position is remote, based in India.

How You Will Achieve Success
- Own the BEx (Brand Experience Platform) roadmap and execution to increase adoption and reduce issues in data quality and availability.
- Build scalable backend systems and usable front-end experiences that increase adoption and drive usability.
- Improve UI/UX by reducing latency and implementing data consistency and alerting mechanisms.
- Drive measurable impact on GMV, return rates, and EBITDA by implementing scalable solutions across the merchandising organization.
- Leverage the latest AI technologies to accelerate development work and set up automation for unit testing. Lead the charge on Agentic AI deployment.
- Establish a culture of fast experimentation and tight feedback loops; manage the team to implement quick MVPs and scale solutions that work.

What Experiences Will Help You In This Role
- 8–12 years in software engineering, including experience in platform ownership or growth-stage environments.
- Full-stack experience in Python, SQL, Node.js, and React, along with experience in Datadog or similar.
- Roughly 80% hands-on development and 20% management.
- Experience in data platform engineering, front-end/backend development, and AWS-based infrastructure.
- Prior experience delivering reporting or workflow automation platforms is a plus.
- Strong ability to partner with non-tech stakeholders and drive measurable business outcomes.
- Comfortable with ambiguity and fast iteration cycles.
- Java is a nice to have.

This is a remote position and requires candidates to have an available work-from-home setup:
- Desktop/laptop system requirements: 4th generation or higher, at least an Intel i3 or equivalent processor; at least 4GB RAM; Windows 10 and above or Mac OS X operating system
- You are required to provide your own dual monitors
- A strong and stable internet connection (a DSL, cable, or fiber wired internet service with a 10 Mbps plan or higher for the primary connection)
- PC headset
- A high-definition (HD) external or integrated webcam with at least 720p resolution

$60,000 - $80,000 a year

Please be aware of scammers. Spreetail will only contact you through Lever or the spreetail.com domain. Spreetail will never ask candidates for money during the recruitment process. Please reach out to careers@spreetail.com directly if you have any concerns. Emails from @spreetailjobs.com are fraudulent.
Posted 1 week ago
12.0 years
0 Lacs
India
Remote
Spreetail propels brands to increase their ecommerce market share across the globe while improving their operational costs. Learn how we are building one of the fastest-growing ecommerce companies in history: www.spreetail.com.

As a Software Development Manager, you’ll lead a cross-functional team to build and scale Merch Tech’s data-driven platforms that drive decision-making for hundreds of vendors. You’ll influence product, strategy, and tech direction, and collaborate with executive stakeholders in Merchandising, Supply Chain, and Brand Management. This position is remote, based in India.

How You Will Achieve Success
- Own the BEx (Brand Experience Platform) roadmap and execution to increase adoption and reduce issues in data quality and availability.
- Build scalable backend systems and usable front-end experiences that increase adoption and drive usability.
- Improve UI/UX by reducing latency and implementing data consistency and alerting mechanisms.
- Drive measurable impact on GMV, return rates, and EBITDA by implementing scalable solutions across the merchandising organization.
- Leverage the latest AI technologies to accelerate development work and set up automation for unit testing. Lead the charge on Agentic AI deployment.
- Establish a culture of fast experimentation and tight feedback loops; manage the team to implement quick MVPs and scale solutions that work.

What Experiences Will Help You In This Role
- 8–12 years in software engineering, including experience in platform ownership or growth-stage environments.
- Full-stack experience in Python, SQL, Node.js, and React, along with experience in Datadog or similar.
- Roughly 80% hands-on development and 20% management.
- Experience in data platform engineering, front-end/backend development, and AWS-based infrastructure.
- Prior experience delivering reporting or workflow automation platforms is a plus.
- Strong ability to partner with non-tech stakeholders and drive measurable business outcomes.
- Comfortable with ambiguity and fast iteration cycles.
- Java is a nice-to-have.

This is a remote position and requires candidates to have an available work-from-home setup:
- Desktop/laptop system requirements: 4th-generation or higher, at least an Intel i3 or equivalent processor; at least 4GB RAM; Windows 10 or above, or Mac OS X operating system.
- You are required to provide your own dual monitors.
- A strong and stable internet connection (a DSL, cable, or fiber wired internet service with a 10 Mbps plan or higher for the primary connection).
- PC headset.
- A high-definition (HD) external or integrated webcam with at least 720p resolution.

$60,000 - $80,000 a year

Please be aware of scammers. Spreetail will only contact you through Lever or the spreetail.com domain. Spreetail will never ask candidates for money during the recruitment process. Please reach out to careers@spreetail.com directly if you have any concerns. Emails from @spreetailjobs.com are fraudulent.
Posted 1 week ago
2.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Who we are: BigID is an innovative tech startup that focuses on solutions for data security, compliance, privacy, and AI data management. We're leading the market in all things data: helping our customers reduce risk, drive business innovation, achieve compliance, build customer trust, make better decisions, and get more value from their data. We are building a global team passionate about innovation and next-gen technology.

BigID has been recognized for:
- Named Hot Company in Artificial Intelligence and Machine Learning at the 2024 Global InfoSec Awards
- Citizens JMP Cyber 66 List of Hottest Privately Held Cybersecurity Companies
- CRN 100 list named BigID one of the 20 Coolest Identity Access Management and Data Protection Companies of 2024 (2 years running)
- DUNS 100 Best Tech Companies to Work For in 2024
- 'Top 3 Big Data and AI Vendors to Watch' in the 2023 BigDATAwire Readers and Editors Choice Awards
- 2024 Inc. 5000 list for the 4th consecutive year
- Shortlisted for the 2024 AI Awards in the category of Best Use of AI in Cybersecurity

At BigID, our team is the foundation of our success. Join a people-centric culture that is fast-paced and rewarding: you'll have the opportunity to work with some of the most talented people in the industry who value innovation, diversity, integrity, and collaboration.

Who we seek: We're looking for a Site Reliability Engineer to join our Engineering team in Hyderabad. The ideal candidate will be responsible for monitoring and responding to system alerts, with experience in tools such as Datadog, and should be proficient in efficiently analyzing logs across various dashboards.

What you'll do:
- Metrics: Implement comprehensive service metrics to track and report on system reliability, performance, and efficiency
- Optimization: Monitor system performance, identify bottlenecks, and execute pipeline optimizations
- Collaboration: Work with Scrum teams and other stakeholders to identify potential risks
- Analysis: Conduct post-incident reviews to prevent recurrence and refine the system reliability framework

What you'll bring:
- A bachelor's or master's degree in computer science, information systems, or a related technical field
- 4–7 years of experience as a Site Reliability Engineer
- Proficiency in programming languages such as Python, Go, or Java
- In-depth understanding of operating systems, networking, and cloud services
- Experience with monitoring tools (for example, Datadog, ELK, Redash)
- Proven experience managing large-scale distributed systems and an understanding of the principles of scalability and reliability
- Familiarity with DevOps culture and practices, and experience with CI/CD systems
- Excellent diagnostic and problem-solving skills, with the ability to analyze complex systems and data
- Ability to exercise independent judgment with little or no oversight
- Certifications in cloud services, networking, or systems administration are an advantage

What's in it for you? Our people are the foundation of our success, and we prioritize offering a wide range of benefits that make our team happier and healthier:
- Equity participation - everyone shares in our success
- Hybrid work
- Opportunities for professional growth
- Team fun & company outings
- Statutory benefits and leave benefits
- Health insurance coverage

Our Values: We look for people who embody our values - Care, Do, Try & Shine.
- Care - We care about our customers and each other
- Do - We do what it takes to make a positive impact
- Try - We try our best and we don't give up
- Shine - We shine and make it our mission to always stand out

We're committed to creating a culture of inclusion and equality - across race, gender, sexuality, and disability - where innovation and growth thrive, every voice is heard, and everybody belongs. Learn more about us here.

CPRA Employee Privacy Notice: CA

BigID is an E-Verify Participant.
Datadog, a popular monitoring and analytics platform, has been gaining traction in the tech industry in India. With the increasing demand for professionals skilled in Datadog, job opportunities are on the rise. In this article, we will explore the Datadog job market in India and provide valuable insights for job seekers looking to pursue a career in this field.
India's major tech hubs, known for their thriving tech industries, are actively hiring for Datadog roles.
The average salary range for Datadog professionals in India varies based on experience levels. Entry-level positions can expect a salary ranging from INR 4-6 lakhs per annum, while experienced professionals can earn upwards of INR 15 lakhs per annum.
A typical career path in Datadog may include roles such as Datadog Administrator, Datadog Developer, Datadog Consultant, and Datadog Architect. Progression usually follows a path from Junior Datadog Developer to Senior Datadog Developer, eventually leading to roles like Datadog Tech Lead or Datadog Manager.
In addition to proficiency in Datadog, professionals in this field are often expected to have skills in monitoring and analytics tools, cloud computing (AWS, Azure, GCP), scripting languages (Python, Bash), and knowledge of IT infrastructure.
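One concrete place these skills intersect is Datadog's custom-metrics pipeline: applications commonly report metrics over the DogStatsD protocol, a plain-text datagram format sent over UDP to a local agent (port 8125 by default). The sketch below builds and sends such a datagram using only the Python standard library; the metric name, tags, and host are illustrative, not from any specific deployment.

```python
import socket


def dogstatsd_datagram(name, value, metric_type="g", tags=None):
    """Build a DogStatsD payload: <name>:<value>|<type>[|#tag1:v1,tag2:v2].

    metric_type is "g" (gauge), "c" (counter), "h" (histogram), etc.
    """
    payload = f"{name}:{value}|{metric_type}"
    if tags:
        payload += "|#" + ",".join(tags)
    return payload


def send_metric(payload, host="127.0.0.1", port=8125):
    """Fire-and-forget UDP send to the local Datadog agent."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload.encode("utf-8"), (host, port))


# Example: report a queue-depth gauge with environment tags.
msg = dogstatsd_datagram("app.queue.depth", 42, "g", ["env:prod", "service:ingest"])
# msg == "app.queue.depth:42|g|#env:prod,service:ingest"
```

In practice, most teams use the official `datadog` client library rather than raw sockets, but understanding the underlying datagram format helps when debugging missing metrics with tools like `tcpdump`.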
With the increasing demand for Datadog professionals in India, now is a great time to explore job opportunities in this field. By honing your skills, preparing for interviews, and showcasing your expertise, you can confidently apply for Datadog roles and advance your career in the tech industry. Good luck!