
3976 Logging Jobs - Page 3

Filters: Experience 0 - 25 years, Salary ₹0 - ₹10,000,000

Set up a Job Alert
JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


It’s not just about your career or job title… It’s about who you are and the impact you will make on the world. Because whether it’s for each other or our customers, we put People First. When our people come together, we Expand the Possible and continuously look for ways to improve what we create and how we do it. If you are constantly striving to grow, you’re in good company. We are revolutionizing the way the world moves for future generations, and we want someone who is ready to move with us.

Who are we? Wabtec Corporation is a leading global provider of equipment, systems, digital solutions, and value-added services for freight and transit rail as well as the mining, marine, and industrial markets. Drawing on nearly four centuries of collective experience across Wabtec, GE Transportation, and Faiveley Transport, the company has grown to become One Wabtec, with unmatched digital expertise, technological innovation, and world-class manufacturing and services, enabling the digital-rail-and-transit ecosystems. Wabtec is focused on performance that drives progress and unlocks our customers’ potential by delivering innovative and lasting transportation solutions that move and improve the world. We are lifelong learners obsessed with making things better to drive exceptional results. Wabtec has approximately 27K employees in facilities throughout the world. Visit our website to learn more: http://www.WabtecCorp.com

Engineer – DevOps
Location: Bengaluru

About us: To strengthen our WITEC team in Bengaluru, we are now looking for a Lead/Engineer – DevOps.

Role Summary & Essential Responsibilities:
The DevOps Engineer is responsible for performing CI/CD and automation design and validation activities under the project responsibility of the Technical Project Manager and under the technical responsibility of the software architect.
- Respect internal processes, including coding rules.
- Write documentation in accordance with the implementation made.
- Meet the Quality, Cost, and Time objectives set by the Technical Project Manager.

Qualification / Requirement:
- Bachelor's or Master's degree in engineering in Computer Science, IT, or a related field
- 6 to 10 years of hands-on experience as a DevOps Engineer

Profile:
- Good understanding of Linux systems and networking
- Good knowledge of CI/CD tools, GitLab
- Good knowledge of containerization technologies such as Docker
- Experience with scripting languages such as Bash and Python
- Hands-on experience setting up CI/CD pipelines and configuring virtual machines
- Experience with C/C++ build tools like CMake and Conan is a must
- Experience setting up pipelines in GitLab for build, unit testing, and static analysis
- Experience with infrastructure-as-code tools like Terraform or Ansible is a plus
- Experience with monitoring and logging tools such as the ELK Stack or Prometheus/Grafana
- Strong problem-solving skills and ability to troubleshoot production issues
- A passion for continuously learning and staying up to date with modern technologies and trends in the DevOps field

Tools & Process:
- Project management and workflow tools: Jira, SPIRA, Teams Planner, Polarion
- Source control/configuration management: SVN, VSS, Git, and Bitbucket
- Development methodology: Agile (Scrum/Kanban)

Soft skills:
- Good level of English
- Autonomous
- Good interpersonal and communication skills
- Good synthesis skills
- Solid team player, able to handle multiple tasks and manage time efficiently

Our Commitment to Embrace Diversity:
Wabtec is a global company that invests not just in our products, but also our people by embracing diversity and inclusion. We care about our relationships with our employees and take pride in celebrating the variety of experiences, expertise, and backgrounds that bring us together. At Wabtec, we aspire to create a place where we all belong and where diversity is welcomed and appreciated. To fulfill that commitment, we rely on a culture of leadership, diversity, and inclusion. We aim to employ the world’s brightest minds to help us create a limitless source of ideas and opportunities. We have created a space where everyone is given the opportunity to contribute based on their individual experiences and perspectives and recognize that these differences and diverse perspectives make us better. We believe in hiring talented people of varied backgrounds, experiences, and styles… People like you! Wabtec Corporation is committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or expression, or protected Veteran status. If you have a disability or special need that requires accommodation, please let us know.
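For context on the GitLab/CMake/Conan pipeline work this role describes, here is a minimal, hypothetical Python sketch of the build-and-test step such a pipeline might script. It assumes Conan 2.x and CMake 3.20+; the directory layout and options are illustrative assumptions, not details from the posting.

```python
#!/usr/bin/env python3
"""Sketch of a CI build step for a C/C++ project using Conan + CMake.

Assumes Conan 2.x (which generates conan_toolchain.cmake) and CMake 3.20+
(for `ctest --test-dir`). Paths and options are illustrative only.
"""
import subprocess
import sys
from pathlib import Path

BUILD_DIR = Path("build")  # assumed out-of-source build directory

def run(cmd: list[str]) -> None:
    """Run a command, echoing it first; abort the stage on failure."""
    print("+", " ".join(cmd), flush=True)
    subprocess.run(cmd, check=True)

def main() -> None:
    BUILD_DIR.mkdir(exist_ok=True)
    # Resolve C/C++ dependencies declared in conanfile.txt / conanfile.py.
    run(["conan", "install", ".", "--output-folder", str(BUILD_DIR),
         "--build", "missing"])
    # Configure with the toolchain Conan generated, then build.
    run(["cmake", "-S", ".", "-B", str(BUILD_DIR),
         f"-DCMAKE_TOOLCHAIN_FILE={BUILD_DIR}/conan_toolchain.cmake",
         "-DCMAKE_BUILD_TYPE=Release"])
    run(["cmake", "--build", str(BUILD_DIR), "--parallel"])
    # Run unit tests so the GitLab job fails fast on regressions.
    run(["ctest", "--test-dir", str(BUILD_DIR), "--output-on-failure"])

if __name__ == "__main__":
    try:
        main()
    except subprocess.CalledProcessError as exc:
        sys.exit(exc.returncode)
```

A GitLab job would typically invoke a script like this in its build stage, with static analysis added as a parallel stage.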

Posted 16 hours ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


About Us: People Tech Group is a leading Enterprise Solutions, Digital Transformation, Data Intelligence, and Modern Operations services provider. Founded in 2006 in Redmond, Washington, USA, we have since expanded to India, where we are based out of Hyderabad, Bangalore, Pune, and Chennai with an overall strength of 1,500+ employees. We have a presence in four countries: the US, Canada, India, and Costa Rica. In a recent development, the company was acquired by Quest Global, one of the world's largest engineering solutions providers, with 20,000+ employees, 70+ global delivery service centers, and headquarters in Singapore. Going forward, we are all part of Quest Global.

Position: DevOps Engineer
Company: People Tech Group
Experience: 5 yrs
Location: Bengaluru

Key Responsibilities:
- Provision and secure cloud infrastructure using Terraform/AWS CloudFormation
- Fully automate GitLab CI/CD pipelines for application builds, tests, and deployment, integrated with Docker containers and AWS ECS/EKS
- Build continuous integration workflows with automated security checks, testing, and performance validation
- Provide a self-service developer portal giving access to system health, deployment status, logs, and documentation for a seamless developer experience
- Build AWS CloudWatch dashboards and CloudWatch alarms for real-time monitoring of system health, performance, and availability
- Centralize logging via CloudWatch Logs for application performance and troubleshooting
- Maintain complete documentation for all automated systems, infrastructure code, CI/CD pipelines, and monitoring setups
- Monitoring with Splunk: ability to create dashboards and alerts, integrating with tools like MS Teams

Required Skills:
- Master's or Bachelor's degree in Computer Science/IT or equivalent
- Expertise in shell scripting
- Familiarity with operating systems: Windows and Linux
- Experience with Git version control
- Ansible: good to have
- Familiarity with CI/CD pipelines: GitLab
- Docker, Kubernetes, OpenShift: strong in Kubernetes administration
- Experience with infrastructure as code: Terraform and AWS CloudFormation
- Familiarity with AWS services like EC2, Lambda, Fargate, VPC, S3, ECS, EKS
- Nice to have: familiarity with observability and monitoring tools like OpenTelemetry setup, Grafana, ELK Stack, Prometheus
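As a rough illustration of the CloudWatch alarm work listed above, here is a minimal, hypothetical boto3 sketch; the alarm name, thresholds, dimension values, and SNS topic ARN are assumptions for illustration, not values from the posting.

```python
"""Sketch: create a CloudWatch alarm on ECS service CPU usage.

All names, thresholds, and the SNS topic ARN are illustrative
assumptions. Requires AWS credentials with CloudWatch permissions.
"""
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

cloudwatch.put_metric_alarm(
    AlarmName="ecs-service-cpu-high",  # hypothetical alarm name
    Namespace="AWS/ECS",
    MetricName="CPUUtilization",
    Dimensions=[
        {"Name": "ClusterName", "Value": "demo-cluster"},  # assumed
        {"Name": "ServiceName", "Value": "demo-service"},  # assumed
    ],
    Statistic="Average",
    Period=300,              # evaluate over 5-minute windows
    EvaluationPeriods=2,     # two consecutive breaches before alarming
    Threshold=80.0,          # alarm when average CPU exceeds 80%
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],  # assumed topic
)
```

The SNS topic in AlarmActions could feed an MS Teams webhook, matching the alert-routing responsibility described above.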

Posted 16 hours ago

Apply

1.3 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Description:
- 3+ years of MuleSoft-relevant experience
- Design and develop solutions using Mule 4 ESB
- Identify, analyze, and develop interfaces and integration flows using the Mule ESB Anypoint Platform, including Mule Runtime, Connectors, Design Center, and API management
- Experience using SOAP UI and Postman, as well as first-hand knowledge of XML/XSLT, JSON, CSV, and HTTP standards
- MuleSoft RAML; development of System, Process, and Experience APIs
- Experience using HTTP, Database, JMS, SFTP, and File connectors
- Hands-on experience with unit testing of MuleSoft implementations
- Must understand synchronous/asynchronous communication patterns
- Hands-on experience with Batch scope
- Hands-on experience with DataWeave
- Hands-on experience with exception handling in MuleSoft
- Experience writing MUnits
- Good exposure to integration projects
- Deep experience with Anypoint Platform, Flow Design, API Design, DataWeave (1.0/2.0), and CloudHub
- Understanding of MuleSoft deployment: on-prem, on cloud, and hybrid
- Certifications: MuleSoft certifications
- Understanding of encryption, decryption, security, logging, rate limiting, throttling, scalability, and securing solutions
- Experience with GitLab commands
- Knowledge of hosting external APIs
- Hands-on experience with MongoDB connectivity and its functions
- Knowledge of the banking domain
- Configuring TLS and Key Store configuration; encrypting and decrypting messages using Java/Mule components
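The Postman/SOAP UI experience this posting asks for comes down to request-level checks against deployed APIs. Here is a minimal, hypothetical Python sketch of such a smoke test; the URL, payload schema, and expected fields are invented for illustration.

```python
"""Sketch: request-level smoke test of a JSON API endpoint.

The URL, payload, and expected fields are illustrative assumptions;
this mirrors the kind of check one scripts in Postman or SOAP UI.
Requires `pip install requests`.
"""
import requests

BASE_URL = "https://api.example.com"  # hypothetical integration endpoint

def test_create_order_roundtrip() -> None:
    payload = {"orderId": "A-1001", "amount": 250.0}  # assumed schema
    resp = requests.post(f"{BASE_URL}/process/orders", json=payload, timeout=10)
    assert resp.status_code == 201, resp.text
    body = resp.json()
    # A Process API would typically echo a correlation id for tracing.
    assert "correlationId" in body

if __name__ == "__main__":
    test_create_order_roundtrip()
    print("smoke test passed")
```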

Posted 16 hours ago

Apply

7.0 - 15.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Role: Senior Cloud DevOps Engineer
Experience: 7-15 years
Notice Period: Immediate to 15 days
Location: Hyderabad

We are seeking a highly skilled GCP DevOps Engineer to join our dynamic team.

Job Description:
- Deep GCP Services Mastery: Profound understanding and hands-on experience with core GCP services (Compute Engine, Cloud Run, Cloud Storage, VPC, IAM, Cloud SQL, BigQuery, Cloud Operations Suite).
- Infrastructure as Code (IaC) & Configuration Management: Expertise in Terraform for GCP, and proficiency with tools like Ansible for automating infrastructure provisioning and management.
- CI/CD Pipeline Design & Automation: Skill in building and managing sophisticated CI/CD pipelines (e.g., using Cloud Build, Jenkins, GitLab CI) for applications and infrastructure on GCP.
- Containerisation & Orchestration: Advanced knowledge of Docker and extensive experience deploying, managing, and scaling applications on Cloud Run and/or Google Kubernetes Engine (GKE).
- API Management & Gateway Proficiency: Experience with API design, security, and lifecycle management, utilizing tools like Google Cloud API Gateway or Apigee for robust API delivery.
- Advanced Monitoring, Logging & Observability: Expertise in implementing and utilizing comprehensive monitoring solutions (e.g., Google Cloud Operations Suite, Prometheus, Grafana) for proactive issue detection and system insight.
- DevSecOps & GCP Security Best Practices: Strong ability to integrate security into all stages of the DevOps lifecycle, implement GCP security best practices (IAM, network security, data protection), and ensure compliance.
- Scripting & Programming for Automation: Proficient in scripting languages (Python, Bash, Go) to automate operational tasks, build custom tools, and manage infrastructure programmatically.
- GCP Networking Design & Management: In-depth understanding of GCP networking (VPC, Load Balancing, DNS, firewalls) and the ability to design secure and scalable network architectures.
- Application Deployment Strategies & Microservices on GCP: Knowledge of various deployment techniques (blue/green, canary) and experience deploying and managing microservices architectures within the GCP ecosystem.
- Leadership, Mentorship & Cross-Functional Collaboration: Proven ability to lead and mentor DevOps teams, drive technical vision, and effectively collaborate with development, operations, and security teams.
- System Architecture, Performance Optimization & Troubleshooting: Strong skills in designing scalable and resilient systems on GCP, identifying and resolving performance bottlenecks, and complex troubleshooting across the stack.

Regards,
ValueLabs
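One concrete slice of the canary-deployment knowledge listed above: before shifting full traffic, a rollout script can poll the canary's health endpoint and gate promotion on a success rate. A minimal, hypothetical Python sketch follows; the URL, threshold, and attempt count are assumptions.

```python
"""Sketch: gate a canary rollout on repeated health-check success.

The endpoint URL, success-rate threshold, and attempt count are
illustrative assumptions. Requires `pip install requests`.
"""
import time
import requests

CANARY_URL = "https://canary.example.com/healthz"  # hypothetical endpoint

def canary_is_healthy(attempts: int = 20, required_rate: float = 0.95) -> bool:
    """Probe the canary repeatedly and require a minimum success rate."""
    ok = 0
    for _ in range(attempts):
        try:
            if requests.get(CANARY_URL, timeout=2).status_code == 200:
                ok += 1
        except requests.RequestException:
            pass  # count network errors as failures
        time.sleep(3)
    return ok / attempts >= required_rate

if __name__ == "__main__":
    # In a real pipeline this decides whether to promote or roll back.
    print("promote" if canary_is_healthy() else "rollback")
```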

Posted 16 hours ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Fortanix is a dynamic start-up solving some of the world's most demanding data protection challenges for companies and governments around the world. Our disruptive technology maintains data privacy across its entire lifecycle -- at rest, in motion, and in use across any enterprise IT infrastructure -- public cloud, on-premises, hybrid cloud, and SaaS. With key strategic partners like Microsoft, Intel, ServiceNow, and Snowflake, Fortanix customers like PayPal, Google & Adidas are reaping the benefits. Recognized by Gartner as a "Cool Vendor", Fortanix is revolutionizing cyber security. Join the revolution!

At Fortanix we are redefining what cloud security means. Our customers use our software platform to build and run software much more securely than was previously possible. We are seeking software engineers to extend the capability and performance of our cloud security solutions. As a Software Engineer at Fortanix, you will play a critical role in designing, building, and maintaining our observability platform. You will work closely with cross-functional teams to enhance and optimize the performance and scalability of our cloud security solutions.

In this role, you will:
- Collaborate with product managers and other engineers to determine customer requirements and translate them into technical solutions
- Design, develop, and deploy observability features and functionality for our cloud security platform
- Optimize and scale our observability infrastructure to handle large volumes of data efficiently
- Participate in code reviews and provide constructive feedback to ensure the overall quality and stability of the codebase
- Contribute to the continuous improvement of software development processes and practices

We are looking for someone who:
- Has a deep understanding of observability concepts, tools, and techniques, including monitoring, logging, and distributed tracing
- Has strong software engineering skills and experience with backend development
- Is proficient in at least one programming language, such as Rust, Go, Java, or C++
- Has experience with cloud-based technologies, preferably AWS, Azure, and GCP
- Is proficient with database architecture, scaling, and optimization
- Has competence with CI/CD procedures and microservice architecture
- Is familiar with containerization technologies like Docker and Kubernetes
- Has excellent problem-solving and analytical skills
- Is self-motivated and can work effectively both independently and as part of a team
- Communicates effectively and enjoys collaborating with others

If you are passionate about observability and want to make a meaningful impact in the field of cloud security, we would love to hear from you. Join us at Fortanix and be part of our mission to redefine what cloud security means.

Requirements:
- Minimum of 5 years of professional experience as a software engineer
- Bachelor's degree in Computer Science, Engineering, or a related field
- Strong experience in backend development and building distributed systems
- Proficiency in at least one programming language, such as Rust, Go, Java, or C++
- Experience with cloud-based technologies, preferably AWS, Azure, and GCP
- Familiarity with containerization technologies like Docker and Kubernetes
- Strong problem-solving and analytical skills
- Excellent communication and collaboration skills

Benefits:
- Mediclaim Insurance - employees and their eligible dependents, including dental coverage
- Personal Accident Insurance
- Internet Reimbursement
- Employee Stock Options

Fortanix is an equal opportunity employer that celebrates diversity and is committed to creating an inclusive workplace with equal opportunity for all applicants and teammates. Our goal is to recruit the most talented people from a diverse candidate pool regardless of race, color, religion, age, gender, gender identity, sexual orientation, or any other status. If you're interested in working in a fast-growing, exciting working environment - we encourage you to apply!
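Since the role centers on distributed tracing, here is a minimal Python sketch of emitting trace spans with the OpenTelemetry SDK. Python is used for brevity only (the posting's stack is Rust/Go/Java/C++), and the service and span names are assumptions.

```python
"""Sketch: emit trace spans with the OpenTelemetry SDK.

Requires `pip install opentelemetry-sdk`. Service and span names are
illustrative; a real deployment would export to a collector rather
than the console.
"""
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Wire a tracer provider that prints finished spans to stdout.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("observability-demo")  # hypothetical service name

def handle_request(user_id: str) -> None:
    # Parent span for the request; child span for the downstream call.
    with tracer.start_as_current_span("handle_request") as span:
        span.set_attribute("user.id", user_id)
        with tracer.start_as_current_span("fetch_policy"):
            pass  # stand-in for a database or RPC call

if __name__ == "__main__":
    handle_request("u-42")
    provider.shutdown()  # flush buffered spans before exit
```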

Posted 16 hours ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Job Description: Senior Data Scientist

Role Overview:
We are seeking a highly skilled and experienced Senior Data Scientist with a minimum of 4 years of experience in Data Science and Machine Learning, preferably with experience in NLP, Generative AI, LLMs, MLOps, optimization techniques, and AI solution architecture. In this role, you will play a key part in the development and implementation of AI solutions, leveraging your technical expertise. The ideal candidate should have a deep understanding of AI technologies and experience in designing and implementing cutting-edge AI models and systems. Additionally, expertise in data engineering, DevOps, and MLOps practices will be valuable in this role.

Responsibilities:
- Contribute to the design and implementation of state-of-the-art AI solutions.
- Assist in the development and implementation of AI models and systems, leveraging techniques such as Large Language Models (LLMs) and generative AI.
- Collaborate with stakeholders to identify business opportunities and define AI project goals.
- Stay updated with the latest advancements in generative AI techniques, such as LLMs, and evaluate their potential applications in solving enterprise challenges.
- Utilize generative AI techniques, such as LLMs and agentic frameworks, to develop innovative solutions for enterprise industry use cases.
- Integrate with relevant APIs and libraries, such as Azure OpenAI GPT models and Hugging Face Transformers, to leverage pre-trained models and enhance generative AI capabilities.
- Implement and optimize end-to-end pipelines for generative AI projects, ensuring seamless data processing and model deployment.
- Utilize vector databases, such as Redis, and NoSQL databases to efficiently handle large-scale generative AI datasets and outputs.
- Implement similarity search algorithms and techniques to enable efficient and accurate retrieval of relevant information from generative AI outputs.
- Collaborate with domain experts, stakeholders, and clients to understand specific business requirements and tailor generative AI solutions accordingly.
- Conduct research and evaluation of advanced AI techniques, including transfer learning, domain adaptation, and model compression, to enhance performance and efficiency.
- Establish evaluation metrics and methodologies to assess the quality, coherence, and relevance of generative AI outputs for enterprise industry use cases.
- Ensure compliance with data privacy, security, and ethical considerations in AI applications.
- Leverage data engineering skills to curate, clean, and preprocess large-scale datasets for generative AI applications.

Requirements:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field. A Ph.D. is a plus.
- Minimum 4 years of experience in Data Science and Machine Learning.
- In-depth knowledge of machine learning, deep learning, and generative AI techniques.
- Proficiency in programming languages such as Python and R, and frameworks like TensorFlow or PyTorch.
- Strong understanding of NLP techniques and frameworks such as BERT, GPT, or Transformer models.
- Familiarity with computer vision techniques for image recognition, object detection, or image generation.
- Experience with cloud platforms such as Azure, AWS, or GCP and deploying AI solutions in a cloud environment.
- Expertise in data engineering, including data curation, cleaning, and preprocessing.
- Knowledge of trusted AI practices, ensuring fairness, transparency, and accountability in AI models and systems.
- Strong collaboration with software engineering and operations teams to ensure seamless integration and deployment of AI models.
- Excellent problem-solving and analytical skills, with the ability to translate business requirements into technical solutions.
- Strong communication and interpersonal skills, with the ability to collaborate effectively with stakeholders at various levels.
- Understanding of data privacy, security, and ethical considerations in AI applications.
- Track record of driving innovation and staying updated with the latest AI research and advancements.

Good to Have Skills:
- Apply trusted AI practices to ensure fairness, transparency, and accountability in AI models.
- Utilize optimization tools and techniques, including MIP (Mixed Integer Programming).
- Deep knowledge of classical AI/ML (regression, classification, time series, clustering).
- Drive DevOps and MLOps practices, covering CI/CD and monitoring of AI models.
- Implement CI/CD pipelines for streamlined model deployment and scaling processes.
- Utilize tools such as Docker, Kubernetes, and Git to build and manage AI pipelines.
- Apply infrastructure-as-code (IaC) principles, employing tools like Terraform or CloudFormation.
- Implement monitoring and logging tools to ensure AI model performance and reliability.
- Collaborate seamlessly with software engineering and operations teams for efficient AI model integration and deployment.
- Familiarity with DevOps and MLOps practices, including continuous integration, deployment, and monitoring of AI models.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
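To make the similarity-search responsibility above concrete, here is a minimal, hypothetical Python sketch of brute-force cosine retrieval over embedding vectors. The toy vectors stand in for real model embeddings; a production system would index them in a vector store such as Redis instead.

```python
"""Sketch: brute-force cosine-similarity retrieval over embeddings.

The random vectors stand in for real embeddings; production systems
would use an indexed vector database. Requires `pip install numpy`.
"""
import numpy as np

def top_k_cosine(query: np.ndarray, corpus: np.ndarray, k: int = 3) -> list[int]:
    """Return indices of the k corpus rows most similar to the query."""
    # Normalize so the dot product equals cosine similarity.
    q = query / np.linalg.norm(query)
    c = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    scores = c @ q
    return list(np.argsort(scores)[::-1][:k])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    corpus = rng.normal(size=(1000, 384))  # 1000 fake 384-dim embeddings
    query = corpus[42] + 0.01 * rng.normal(size=384)  # near-duplicate of row 42
    print(top_k_cosine(query, corpus))  # row 42 should rank first
```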

Posted 16 hours ago

Apply

3.0 - 8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Backend Developer

Experience: 3 - 8 Years
Salary: Competitive
Preferred Notice Period: Within 30 Days
Opportunity Type: Office (Noida)
Placement Type: Permanent
(*Note: This is a requirement for one of Uplers' clients.)
Must-have skills: Node.js, Java

TatvaCare (one of Uplers' clients) is looking for:

About TatvaCare
TatvaCare is transforming care practices to deliver positive health outcomes. TatvaCare, a startup in the Indian health tech landscape, is catalyzing the transformation of care practices through digitisation. Our product portfolio includes TatvaPractice, an advanced EMR and knowledge platform for healthcare professionals, and MyTatva, a digital therapeutics application designed to manage chronic diseases like fatty liver, COPD, and asthma. Through these initial solutions and more to come, we aim to bridge the gap in healthcare, connecting professionals and patients. We are committed to revolutionizing healthcare in India, promoting efficient, patient-centric care, and optimizing outcomes across the healthcare spectrum.

MyTatva: A DTx app that aids adherence to doctor-recommended lifestyle changes.
TatvaPractice: An ABDM-certified EMR platform to enhance a doctor's practice.

Our vision is not just about digitizing records; it's about fostering a healthcare ecosystem where efficiency and empathy converge, ultimately leading to a health continuum.

Job Description:
TatvaCare is seeking a dedicated Backend Developer to join our innovative team. If you are passionate about creating scalable and efficient systems and possess a strong proficiency in backend technologies, we would love to meet you. In this role, you will work with cutting-edge technologies to support our backend services and ensure seamless data flow and integration across various platforms.

Responsibilities:

System Design and Development
- Design, implement, and maintain robust backend systems and APIs.
- Collaborate with front-end developers to integrate user-facing elements with server-side logic.
- Participate in architecture and design discussions to improve system performance and scalability.

Code Quality and Best Practices
- Write clean, maintainable, and well-documented code.
- Conduct code reviews to ensure adherence to best coding practices and standards.
- Debug and troubleshoot existing applications to optimize performance.

Collaboration and Communication
- Work closely with cross-functional teams to gather requirements and transform them into technical solutions.
- Participate in Agile/Scrum ceremonies to contribute to project planning and progress tracking.
- Communicate technical concepts effectively to non-technical stakeholders.

Cloud Infrastructure Management
- Manage and optimize applications hosted on cloud platforms (AWS, GCP).
- Implement monitoring and logging tools to ensure high availability and performance of applications.
- Assist in designing and enforcing security protocols for cloud infrastructure.

Qualifications:
- Bachelor's degree in Computer Science or a related field.
- 3-8 years of proven experience as a Backend Developer or similar role.
- Strong proficiency in Node.js and any of the following languages: Python, Golang, Java, Ruby on Rails.
- Experience with frameworks like Spring Boot for Java development.
- Hands-on experience with cloud services such as AWS, GCP, and Azure is a plus.
- Familiarity with system design principles and best practices.
- Understanding of RESTful APIs and microservices architecture.
- Proficiency in version control systems such as Git.
- Excellent problem-solving abilities and attention to detail.
- Ability to work independently as well as in a team setting.

At TatvaCare, we embrace diversity and are committed to creating an inclusive environment for all employees. If you are excited about this opportunity and have the required skills, we encourage you to apply. Together, let's build better technology solutions!

How to apply for this opportunity (easy 3-step process):
1. Click on Apply and register or log in on our portal.
2. Upload your updated resume and complete the screening form.
3. Increase your chances of getting shortlisted and meet the client for the interview!

About Uplers:
Our goal is to make hiring and getting hired reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant product and engineering job opportunities and progress in their career. (Note: There are many more opportunities apart from this on the portal.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
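As a flavor of the RESTful-API work described above, here is a minimal, hypothetical endpoint sketch. It is shown in Python/Flask for brevity, although the role leads with Node.js; the route and payload fields are invented.

```python
"""Sketch: a REST endpoint with input validation and JSON errors.

Requires `pip install flask`. The route and payload fields are invented
for illustration; the role's primary stack is Node.js.
"""
from flask import Flask, jsonify, request

app = Flask(__name__)
PATIENTS = {}  # in-memory stand-in for a real datastore

@app.post("/api/v1/patients")
def create_patient():
    data = request.get_json(silent=True) or {}
    name = data.get("name")
    if not name:
        # Consistent, machine-readable error shape for API consumers.
        return jsonify(error="'name' is required"), 400
    patient_id = len(PATIENTS) + 1
    PATIENTS[patient_id] = {"id": patient_id, "name": name}
    return jsonify(PATIENTS[patient_id]), 201

@app.get("/api/v1/patients/<int:patient_id>")
def get_patient(patient_id: int):
    patient = PATIENTS.get(patient_id)
    if patient is None:
        return jsonify(error="not found"), 404
    return jsonify(patient)

if __name__ == "__main__":
    app.run(debug=True)
```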

Posted 16 hours ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Role Description

Role Proficiency: Leverage specialist testing knowledge to define and implement test practices and strategies for a portfolio or program.

Outcomes:
- Analyse, recommend, and implement testing practices in projects
- Assess existing tools/frameworks and propose enhancements
- Customize and implement testing tools and frameworks
- Define the test strategy for a program
- Conduct technical reviews of projects and identify risks/issues
- Drive the capture and delivery of test metrics

Measures of Outcomes:
- Test coverage
- Test automation coverage
- Savings from optimization
- Savings from automation
- Defect removal efficiency
- No. of Kaizens initiated and implemented
- Cost savings for the customer
- Cost savings from carrying out test optimization exercises
- Continuous service improvements

Outputs Expected:
- Test Strategy: Define and implement the test strategy; define and baseline tool strategies; define and document test environment and test data strategies; perform feasibility studies for recommended solutions
- Knowledge Management: Publish best practices, guidelines, standards, white papers, etc.
- Test Reporting: Conduct cost-benefit analysis
- Test Design, Development, Execution: Create strategies for test optimization; carry out gap analysis and identify improvement areas; identify program goals and define KPIs and SLAs; identify and implement industry-wide best practices
- Test Planning: Perform test maturity assessments, provide recommendations, and define roadmaps; identify candidates for automation by prioritization

Skill Examples:
- Ability to define test strategies
- Ability to manage and evaluate test tools and frameworks
- Ability to create re-usable assets
- Ability to identify test practice gaps and provide recommendations
- Ability to perform test maturity assessments
- Ability to define service improvement roadmaps

Knowledge Examples:
- Knowledge of automation techniques
- Knowledge of testing methodologies
- Knowledge of test automation tools and frameworks
- Knowledge of automation ROI analysis
- Knowledge of industry-wide KPIs
- Knowledge of test data and test environment requirement identification

Additional Comments

Job Title: Quality Analyst Lead – ServiceNow
Experience Required: 10+ Years

Job Summary:
We are looking for a seasoned Quality Analyst Lead with 10+ years of experience in software testing and quality assurance, including at least 3+ years in a leadership or delivery management role. The ideal candidate will have a strong hands-on testing background, a deep understanding of testing methodologies, and experience in managing QA delivery across complex projects. Knowledge of the ServiceNow platform is essential, including testing across ITSM, ITOM, CSM, or other modules.

Key Responsibilities:

Quality Assurance & Testing:
- Design and implement comprehensive test strategies, plans, and scripts (manual and automated) for various application modules, especially in ServiceNow.
- Ensure end-to-end test coverage including functional, integration, regression, performance, and user acceptance testing.
- Work closely with business analysts and developers to understand requirements and translate them into test scenarios.
- Lead defect management processes: logging, tracking, triaging, and ensuring timely resolution.
- Perform hands-on testing activities when required, especially during critical releases or complex modules.

Leadership & Delivery Management:
- Lead and manage a team of QA analysts, providing guidance, mentorship, and performance feedback.
- Drive QA delivery planning, execution, and resource allocation to meet project timelines and quality goals.
- Collaborate with cross-functional teams including project managers, developers, architects, and product owners.
- Participate in Agile/Scrum ceremonies and ensure QA involvement across all sprints and releases.
- Monitor and report key QA metrics like test progress, defect leakage, coverage, and quality trends.

ServiceNow QA Expertise:
- Ensure thorough testing of ServiceNow modules, including custom workflows, UI policies, business rules, and integrations.
- Validate configuration, workflows, and forms within ServiceNow instances as per user stories and requirements.
- Familiarity with the Automated Test Framework (ATF) in ServiceNow is a plus.
- Support testing of ServiceNow integrations with third-party applications and external systems.

Required Skills & Qualifications:
- 10+ years of experience in software testing/quality assurance.
- 3+ years in a QA Lead or delivery management role.
- Solid understanding of QA best practices, test planning, execution, and reporting.
- Strong experience with testing tools like JIRA, ALM, Selenium, Postman, SoapUI, etc.
- Hands-on experience with Agile/Scrum methodologies.
- Proficient in the ServiceNow platform and module-level testing.
- Experience in API testing, data validation, and integration testing.
- Strong communication, problem-solving, and stakeholder management skills.
- Certification in ServiceNow Fundamentals or QA tools (preferred).

Preferred Qualifications:
- Experience working with international clients or distributed Agile teams.
- Familiarity with automated testing frameworks, especially in ServiceNow (e.g., ATF).
- ISTQB or similar QA certification is a plus.
- Exposure to ITSM, CSM, ITOM, or other ServiceNow modules is highly desirable.

Skills: ServiceNow, ATF, Software Testing, Delivery Management
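For the API-testing requirement above, here is a minimal, hypothetical Python sketch of a data-validation check against the ServiceNow Table API. The instance URL and credentials are placeholders; /api/now/table is ServiceNow's standard REST interface for table records.

```python
"""Sketch: data-validation check against the ServiceNow Table API.

The instance URL and credentials are placeholders; /api/now/table/incident
is ServiceNow's standard REST Table API endpoint.
Requires `pip install requests`.
"""
import requests

INSTANCE = "https://dev00000.service-now.com"  # placeholder instance
AUTH = ("admin", "password")                   # placeholder credentials

def fetch_open_p1_incidents() -> list[dict]:
    resp = requests.get(
        f"{INSTANCE}/api/now/table/incident",
        params={"sysparm_query": "priority=1^active=true",
                "sysparm_limit": 10},
        auth=AUTH,
        headers={"Accept": "application/json"},
        timeout=15,
    )
    resp.raise_for_status()
    return resp.json()["result"]

if __name__ == "__main__":
    incidents = fetch_open_p1_incidents()
    # Simple validation: every returned record must honor the query filter.
    assert all(rec["priority"] == "1" for rec in incidents)
    print(f"validated {len(incidents)} open P1 incidents")
```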

Posted 16 hours ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Title: Dynamics 365 .NET (.NET 8 + Azure AD B2C + SQL + Power Platform)
Location: Hyderabad
Experience Required: 8–10 years
Engagement Type: Full-time / Contract

Core Responsibilities:

Backend (.NET 8 APIs):
- Develop RESTful services using .NET 8 with role-based access.
- Secure endpoints with OAuth2 and implement RBAC across APIs.
- Integrate with Dynamics 365, Azure SQL, and Power Automate.

Identity Management (Azure AD B2C):
- Configure sign-up/sign-in policies and federated identity with Google, Apple, LinkedIn, etc.
- Handle access tokens, session handling, and claims transformation.
- Coordinate identity flows across mobile and web apps.

Data Layer (Azure SQL):
- Design normalized schemas for transactional data.
- Write performant SQL queries, stored procedures, and indexing strategies.
- Ensure data encryption at rest and in transit.
- Support audit logging and access logging via SQL telemetry.

Integration (Power Automate + Dynamics 365):
- Build workflows for financial tracking, CRM entity updates, and external integrations.
- Use custom connectors or HTTP actions to integrate with .NET APIs.
- Customize Dynamics 365 entities and business rules as required by workflows.
- Manage data consistency and flow across Power Platform and backend.

Technical Skills Required:
- Backend: .NET Core / .NET 8, REST APIs, RBAC
- Identity: Azure AD B2C, OAuth2, OpenID Connect, external IdP federation
- Database: Azure SQL, T-SQL, schema design, query optimization
- CRM/Workflow: Power Automate, Dynamics 365 customization, custom flows
- DevOps: Git, Azure DevOps (pipelines, releases), CI/CD familiarity
- Security: SSL/TLS, token-based access, encrypted data handling

Minimum Requirements:
- Experience building scalable .NET Core / .NET 8 applications.
- Hands-on with Azure AD B2C setup and federation scenarios.
- Proficiency in Azure SQL database development and optimization.
- Experience with Power Platform (especially Power Automate) and Dynamics 365.
- Ability to work independently across components with clean integration boundaries.

Contact: Shanmukh, Shanmukh.siva@navasoftware.com
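To illustrate the token-handling work in the identity section, here is a minimal, hypothetical Python sketch that decodes a JWT's claims for debugging. It uses only the standard library and performs no signature verification, which a real B2C-protected API must do against the tenant's published keys; the token and claim names are invented.

```python
"""Sketch: decode JWT claims for debugging access-token flows.

Standard library only. This does NOT verify the token signature; a real
API protected by Azure AD B2C must validate signatures before trusting
any claim. The toy token below is self-made for demonstration.
"""
import base64
import json

def decode_claims(token: str) -> dict:
    """Return the (unverified) payload claims of a JWT."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

if __name__ == "__main__":
    # Build a toy header.payload.signature token purely for demonstration.
    header = base64.urlsafe_b64encode(b'{"alg":"none"}').rstrip(b"=")
    payload = base64.urlsafe_b64encode(
        b'{"sub":"user-1","roles":["Finance.Read"]}').rstrip(b"=")
    token = b".".join([header, payload, b"sig"]).decode()
    print(decode_claims(token))  # {'sub': 'user-1', 'roles': ['Finance.Read']}
```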

Posted 16 hours ago

Apply

6.0 years

0 Lacs

Kochi, Kerala, India

On-site


Experience: 6+ years

- Mandatory to have working experience as an SRE Lead in the retail domain, or as a Site Reliability Engineer (SRE) at a customer work location in the e-commerce domain.
- Should be able to work on rotational shifts.
- Must have experience in production application support of an OMS (Blue Yonder preferred).
- Must know how retail platforms' upstream and downstream integrations work across tracks such as dot-com, warehouse management, stores, etc.
- Must have skills in any of the automation languages like Python, shell, or Java to automate periodic OMS/SRE tasks.
- Should know how to gather SRE requirements from the customer, covering both technical and non-technical aspects.
- Must have experience interacting with Level 2 and Level 3 dev support in e-commerce platforms.
- Hands-on experience in monitoring, logging, alerting, dashboarding, and report generation in monitoring tools such as AppDynamics, Splunk, Dynatrace, Datadog, CloudWatch, ELK, Prometheus, or New Relic.
- Must have knowledge of the ITIL framework, specifically alerts, incident and change management, CAB, production deployments, risk and mitigation plans, SLAs, and SLIs.
- Should be able to lead P1 calls, brief the customer about the P1, and be proactive in bringing leads/customers onto P1 calls through to RCA.
- Experience working with Postman.
- Should have knowledge of building and executing SOPs and runbooks, and handling ITSM platforms (JIRA/ServiceNow/BMC Remedy).
- Must know how to work with dev teams and cross-functional teams across time zones.
- Should be able to generate WSR/MSR reports by extracting tickets from ITSM platforms.
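As a small example of the Python task automation this posting asks for, here is a hypothetical sketch of a periodic check for stuck orders in an OMS. The API endpoint, threshold, and response shape are invented; a real check would target the OMS's actual API or database.

```python
"""Sketch: periodic SRE check for stuck orders in an OMS.

The endpoint URL, threshold, and response shape are invented for
illustration. Requires `pip install requests`.
"""
import logging
import requests

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("oms-check")

OMS_API = "https://oms.example.com/api/orders"  # hypothetical endpoint
STUCK_THRESHOLD = 50                            # assumed alerting threshold

def count_stuck_orders() -> int:
    """Ask the OMS for orders sitting in a processing state too long."""
    resp = requests.get(OMS_API,
                        params={"status": "PROCESSING",
                                "older_than_minutes": 30},
                        timeout=10)
    resp.raise_for_status()
    return len(resp.json()["orders"])

def main() -> None:
    stuck = count_stuck_orders()
    if stuck > STUCK_THRESHOLD:
        # In production this would page via the alerting tool instead.
        log.error("ALERT: %d orders stuck beyond 30 minutes", stuck)
    else:
        log.info("OK: %d stuck orders (threshold %d)", stuck, STUCK_THRESHOLD)

if __name__ == "__main__":
    main()  # typically scheduled via cron or a pipeline scheduler
```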

Posted 17 hours ago

Apply

3.0 - 8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Backend Developer

Experience: 3 - 8 Years
Salary: Competitive
Preferred Notice Period: Within 30 Days
Opportunity Type: Office (Noida)
Placement Type: Permanent
(*Note: This is a requirement for one of Uplers' clients.)
Must-have skills: Node.js, Java

TatvaCare (one of Uplers' clients) is looking for:

About TatvaCare
TatvaCare is transforming care practices to deliver positive health outcomes. TatvaCare, a startup in the Indian health tech landscape, is catalyzing the transformation of care practices through digitisation. Our product portfolio includes TatvaPractice, an advanced EMR and knowledge platform for healthcare professionals, and MyTatva, a digital therapeutics application designed to manage chronic diseases like fatty liver, COPD, and asthma. Through these initial solutions and more to come, we aim to bridge the gap in healthcare, connecting professionals and patients. We are committed to revolutionizing healthcare in India, promoting efficient, patient-centric care, and optimizing outcomes across the healthcare spectrum.

MyTatva: A DTx app that aids adherence to doctor-recommended lifestyle changes.
TatvaPractice: An ABDM-certified EMR platform to enhance a doctor's practice.

Our vision is not just about digitizing records; it's about fostering a healthcare ecosystem where efficiency and empathy converge, ultimately leading to a health continuum.

Job Description:
TatvaCare is seeking a dedicated Backend Developer to join our innovative team. If you are passionate about creating scalable and efficient systems and possess a strong proficiency in backend technologies, we would love to meet you. In this role, you will work with cutting-edge technologies to support our backend services and ensure seamless data flow and integration across various platforms.

Responsibilities:

System Design and Development
- Design, implement, and maintain robust backend systems and APIs.
- Collaborate with front-end developers to integrate user-facing elements with server-side logic.
- Participate in architecture and design discussions to improve system performance and scalability.

Code Quality and Best Practices
- Write clean, maintainable, and well-documented code.
- Conduct code reviews to ensure adherence to best coding practices and standards.
- Debug and troubleshoot existing applications to optimize performance.

Collaboration and Communication
- Work closely with cross-functional teams to gather requirements and transform them into technical solutions.
- Participate in Agile/Scrum ceremonies to contribute to project planning and progress tracking.
- Communicate technical concepts effectively to non-technical stakeholders.

Cloud Infrastructure Management
- Manage and optimize applications hosted on cloud platforms (AWS, GCP).
- Implement monitoring and logging tools to ensure high availability and performance of applications.
- Assist in designing and enforcing security protocols for cloud infrastructure.

Qualifications:
- Bachelor's degree in Computer Science or a related field.
- 3-8 years of proven experience as a Backend Developer or similar role.
- Strong proficiency in Node.js and any of the following languages: Python, Golang, Java, Ruby on Rails.
- Experience with frameworks like Spring Boot for Java development.
- Hands-on experience with cloud services such as AWS, GCP, and Azure is a plus.
- Familiarity with system design principles and best practices.
- Understanding of RESTful APIs and microservices architecture.
- Proficiency in version control systems such as Git.
- Excellent problem-solving abilities and attention to detail.
- Ability to work independently as well as in a team setting.

At TatvaCare, we embrace diversity and are committed to creating an inclusive environment for all employees. If you are excited about this opportunity and have the required skills, we encourage you to apply. Together, let's build better technology solutions!

How to apply for this opportunity (easy 3-step process):
1. Click on Apply and register or log in on our portal.
2. Upload your updated resume and complete the screening form.
3. Increase your chances of getting shortlisted and meet the client for the interview!

About Uplers:
Our goal is to make hiring and getting hired reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant product and engineering job opportunities and progress in their career. (Note: There are many more opportunities apart from this on the portal.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 17 hours ago

Apply

3.0 - 5.0 years

0 Lacs

Kanayannur, Kerala, India

Remote


At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity
We are looking for a skilled Cloud DevOps Engineer with expertise in both AWS and Azure platforms. This role is responsible for end-to-end DevOps support, infrastructure automation, CI/CD pipeline troubleshooting, and incident resolution across cloud environments. The role will handle escalations, lead root cause analysis, and collaborate with engineering and infrastructure teams to deliver high-availability services. You will also contribute to enhancing runbooks and SOPs, and to mentoring junior engineers.

Your Key Responsibilities
- Act as a primary escalation point for DevOps-related and infrastructure-related incidents across AWS and Azure.
- Provide troubleshooting support for CI/CD pipeline issues, infrastructure provisioning, and automation failures.
- Support containerized application environments using Kubernetes (EKS/AKS), Docker, and Helm.
- Create and refine SOPs, automation scripts, and runbooks for efficient issue handling.
- Perform deep-dive analysis and RCA for recurring issues and implement long-term solutions.
- Handle access management, IAM policies, VNet/VPC setup, security group configurations, and load balancers.
- Monitor and analyze logs using AWS CloudWatch, Azure Monitor, and other tools to ensure system health.
- Collaborate with engineering, cloud platform, and security teams to maintain stable and secure environments.
- Mentor junior team members and contribute to continuous process improvements.

Skills And Attributes For Success
- Hands-on experience with CI/CD tools like GitHub Actions, Azure DevOps Pipelines, and AWS CodePipeline.
- Expertise in Infrastructure as Code (IaC) using Terraform; good understanding of CloudFormation and ARM Templates.
- Familiarity with scripting languages such as Bash and Python.
- Deep understanding of AWS (EC2, S3, IAM, EKS) and Azure (VMs, Blob Storage, AKS, AAD).
- Container orchestration and management using Kubernetes, Helm, and Docker.
- Experience with configuration management and automation tools such as Ansible.
- Strong understanding of cloud security best practices, IAM policies, and compliance standards.
- Experience with ITSM tools like ServiceNow for incident and change management.
- Strong documentation and communication skills.

To qualify for the role, you must have
- 3 to 5 years of experience in DevOps, cloud infrastructure operations, and automation.
- Hands-on expertise in AWS and Azure environments.
- Proficiency in Kubernetes, Terraform, CI/CD tooling, and automation scripting.
- Experience in a 24x7 rotational support model.
- Relevant certifications in AWS and Azure (e.g., AWS DevOps Engineer, Azure Administrator Associate).

Technologies and Tools

Must haves:
- Cloud Platforms: AWS, Azure
- CI/CD & Deployment: GitHub Actions, Azure DevOps Pipelines, AWS CodePipeline
- Infrastructure as Code: Terraform
- Containerization: Kubernetes (EKS/AKS), Docker, Helm
- Logging & Monitoring: AWS CloudWatch, Azure Monitor
- Configuration & Automation: Ansible, Bash
- Incident & ITSM: ServiceNow or equivalent
- Certification: relevant AWS and Azure certifications

Good to have:
- Cloud Infrastructure: CloudFormation, ARM Templates
- Security: IAM Policies, Role-Based Access Control (RBAC), Security Hub
- Networking: VPC, Subnets, Load Balancers, Security Groups (AWS/Azure)
- Scripting: Python/Bash
- Observability: OpenTelemetry, Datadog, Splunk
- Compliance: AWS Well-Architected Framework, Azure Security Center

What We Look For
- Enthusiastic learners with a passion for cloud technologies and DevOps practices.
- Problem solvers with a proactive approach to troubleshooting and optimization.
- Team players who can collaborate effectively in a remote or hybrid work environment.
- Detail-oriented professionals with strong documentation skills.

What We Offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.
- Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next.
- Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way.
- Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs.
- Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
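One concrete form of the log-analysis responsibility above is pulling recent error events out of CloudWatch Logs. Here is a minimal, hypothetical boto3 sketch; the log group name and filter pattern are assumptions.

```python
"""Sketch: pull recent ERROR events from a CloudWatch log group.

The log group name and filter pattern are illustrative assumptions.
Requires AWS credentials with logs:FilterLogEvents permission.
"""
import time
import boto3

logs = boto3.client("logs", region_name="ap-south-1")

def recent_errors(log_group: str, minutes: int = 15) -> list[str]:
    """Return ERROR log lines from the last `minutes` minutes."""
    start = int((time.time() - minutes * 60) * 1000)  # epoch millis
    events, token = [], None
    while True:
        kwargs = {"logGroupName": log_group,
                  "filterPattern": "ERROR",
                  "startTime": start}
        if token:
            kwargs["nextToken"] = token  # continue paginated results
        page = logs.filter_log_events(**kwargs)
        events.extend(e["message"] for e in page["events"])
        token = page.get("nextToken")
        if not token:
            return events

if __name__ == "__main__":
    for line in recent_errors("/aws/eks/demo-cluster/application"):  # assumed group
        print(line.rstrip())
```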

Posted 17 hours ago

Apply

3.0 - 5.0 years

0 Lacs

Trivandrum, Kerala, India

Remote


At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity
We are looking for a skilled Cloud DevOps Engineer with expertise in both AWS and Azure platforms. This role is responsible for end-to-end DevOps support, infrastructure automation, CI/CD pipeline troubleshooting, and incident resolution across cloud environments. The role will handle escalations, lead root cause analysis, and collaborate with engineering and infrastructure teams to deliver high-availability services. You will also contribute to enhancing runbooks and SOPs, and to mentoring junior engineers.

Your Key Responsibilities
- Act as a primary escalation point for DevOps-related and infrastructure-related incidents across AWS and Azure.
- Provide troubleshooting support for CI/CD pipeline issues, infrastructure provisioning, and automation failures.
- Support containerized application environments using Kubernetes (EKS/AKS), Docker, and Helm.
- Create and refine SOPs, automation scripts, and runbooks for efficient issue handling.
- Perform deep-dive analysis and RCA for recurring issues and implement long-term solutions.
- Handle access management, IAM policies, VNet/VPC setup, security group configurations, and load balancers.
- Monitor and analyze logs using AWS CloudWatch, Azure Monitor, and other tools to ensure system health.
- Collaborate with engineering, cloud platform, and security teams to maintain stable and secure environments.
- Mentor junior team members and contribute to continuous process improvements.

Skills And Attributes For Success
- Hands-on experience with CI/CD tools like GitHub Actions, Azure DevOps Pipelines, and AWS CodePipeline.
- Expertise in Infrastructure as Code (IaC) using Terraform; good understanding of CloudFormation and ARM Templates.
- Familiarity with scripting languages such as Bash and Python.
- Deep understanding of AWS (EC2, S3, IAM, EKS) and Azure (VMs, Blob Storage, AKS, AAD).
- Container orchestration and management using Kubernetes, Helm, and Docker.
- Experience with configuration management and automation tools such as Ansible.
- Strong understanding of cloud security best practices, IAM policies, and compliance standards.
- Experience with ITSM tools like ServiceNow for incident and change management.
- Strong documentation and communication skills.

To qualify for the role, you must have
- 3 to 5 years of experience in DevOps, cloud infrastructure operations, and automation.
- Hands-on expertise in AWS and Azure environments.
- Proficiency in Kubernetes, Terraform, CI/CD tooling, and automation scripting.
- Experience in a 24x7 rotational support model.
- Relevant certifications in AWS and Azure (e.g., AWS DevOps Engineer, Azure Administrator Associate).

Technologies and Tools

Must haves:
- Cloud Platforms: AWS, Azure
- CI/CD & Deployment: GitHub Actions, Azure DevOps Pipelines, AWS CodePipeline
- Infrastructure as Code: Terraform
- Containerization: Kubernetes (EKS/AKS), Docker, Helm
- Logging & Monitoring: AWS CloudWatch, Azure Monitor
- Configuration & Automation: Ansible, Bash
- Incident & ITSM: ServiceNow or equivalent
- Certification: relevant AWS and Azure certifications

Good to have:
- Cloud Infrastructure: CloudFormation, ARM Templates
- Security: IAM Policies, Role-Based Access Control (RBAC), Security Hub
- Networking: VPC, Subnets, Load Balancers, Security Groups (AWS/Azure)
- Scripting: Python/Bash
- Observability: OpenTelemetry, Datadog, Splunk
- Compliance: AWS Well-Architected Framework, Azure Security Center

What We Look For
- Enthusiastic learners with a passion for cloud technologies and DevOps practices.
- Problem solvers with a proactive approach to troubleshooting and optimization.
- Team players who can collaborate effectively in a remote or hybrid work environment.
- Detail-oriented professionals with strong documentation skills.

What We Offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.
- Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next.
- Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way.
- Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs.
- Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 17 hours ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site


Job Description
The Delivery Risk Manager serves as the central escalation point between Customer Delivery Operations and Supply Chain teams, ensuring consistent risk management processes across regions. This role leads delivery risk reviews, manages the CDO tool, drives resolution of supply issues, and provides executive-level reporting to support timely decision-making and business continuity.

How You Will Contribute And What You Will Learn
- Act as the single point of contact for managing Customer Delivery Operations (CDO) escalations, ensuring smooth information flow between Delivery, Supply Chain, and Market Operations teams.
- Own and drive consistent execution of the CDO and Delivery Risk Management process across regions, ensuring process adherence and alignment with governance standards.
- Administer and maintain the CDO tool; support local users and ensure timely, accurate updates, escalation logging, and ticket-level adjustments.
- Lead weekly Risk & Opportunity (R&O) calls and executive-level delivery risk reviews, coordinating with stakeholders to track risks, opportunities, and recovery actions.
- Interface with Supply Chain contacts to escalate delivery risks, align on recovery plans, and arbitrate cross-BL component allocation when needed.
- Facilitate stakeholder discussions (PMs, SC, Market Ops, Logistics) to define and drive mitigation plans and validate Rest Value/Risk assessments.
- Ensure accurate reporting of CDO data in business reviews and deliver weekly executive-level R&O reports across markets and business groups.
- Track and report the net value of sales impacted by hardware delivery issues post-quarter, enabling insight into lost opportunities and operational impact.

Key Skills And Experience
You have:
- A strong understanding of supply chain management and delivery operations.
- Excellent communication and interpersonal skills to effectively interface with various teams and stakeholders.
- Proficiency in using reporting tools like Power BI and managing data consistency.
- The ability to lead and facilitate discussions and meetings with diverse groups.
- Strong analytical skills to assess risks and opportunities and make informed decisions.
- Experience in managing escalation processes and coordinating recovery plans.
- The ability to work collaboratively in a fast-paced environment and adapt to changing priorities.
- A detail-oriented focus on achieving consistent and updated information in management reports.

About Us
Come create the technology that helps the world act together. Nokia is committed to innovation and technology leadership across mobile, fixed and cloud networks. Your career here will have a positive impact on people’s lives and will help us build the capabilities needed for a more productive, sustainable, and inclusive world. We challenge ourselves to create an inclusive way of working where we are open to new ideas, empowered to take risks and fearless to bring our authentic selves to work.

What we offer
Nokia offers continuous learning opportunities, well-being programs to support you mentally and physically, opportunities to join and get supported by employee resource groups, mentoring programs and highly diverse teams with an inclusive culture where people thrive and are empowered.

Nokia is committed to inclusion and is an equal opportunity employer. Nokia has received the following recognitions for its commitment to inclusion & equality:
- One of the World’s Most Ethical Companies by Ethisphere
- Gender-Equality Index by Bloomberg
- Workplace Pride Global Benchmark

At Nokia, we act inclusively and respect the uniqueness of people. Nokia’s employment decisions are made regardless of race, color, national or ethnic origin, religion, gender, sexual orientation, gender identity or expression, age, marital status, disability, protected veteran status or other characteristics protected by law. We are committed to a culture of inclusion built upon our core value of respect. Join us and be part of a company where you will feel included and empowered to succeed.

Posted 17 hours ago

Apply

0 years

0 Lacs

Mumbai Metropolitan Region

Remote

Are you ready to make your mark with a true industry disruptor? ZineOne, a subsidiary of Session AI, the pioneer of in-session marketing, is looking to add talented team members to help us grow into the premier revenue tool for e-commerce. We work with some of the leading brands nationwide and we innovate how brands connect with and convert customers.

Job Description
This position offers a hands-on, technical opportunity as a vital member of the Site Reliability Engineering Group. Our SRE team is dedicated to ensuring that our Cloud platform operates seamlessly, efficiently, and reliably at scale. The ideal candidate will bring over five years of experience managing cloud-based Big Data solutions, with a strong commitment to resolving operational challenges through automation and sophisticated software tools. Candidates must uphold a high standard of excellence and possess robust communication skills, both written and verbal. A strong customer focus and deep technical expertise in areas such as Linux, automation, application performance, databases, load balancers, networks, and storage systems are essential.

Key Responsibilities
As a Session AI SRE, you will:
Design and implement solutions that enhance the availability, performance, and stability of our systems, services, and products.
Develop, automate, and maintain infrastructure as code for provisioning environments in AWS, Azure, and GCP.
Deploy modern automated solutions that enable automatic scaling of the core platform and features in the cloud.
Apply cybersecurity best practices to safeguard our production infrastructure.
Collaborate on DevOps automation, continuous integration, test automation, and continuous delivery for the Session AI platform and its new features.
Manage data engineering tasks to ensure accurate and efficient data integration into our platform and outbound systems.
Utilize expertise in DevOps best practices, shell scripting, Python, Java, and other programming languages, while continually exploring new technologies for automation solutions.
Design and implement monitoring tools for service health, including fault detection, alerting, and recovery systems (a minimal instrumentation sketch follows this listing).
Oversee business continuity and disaster recovery operations.
Create and maintain operational documentation, focusing on reducing operational costs and enhancing procedures.
Demonstrate a continuous learning attitude with a commitment to exploring emerging technologies.

Preferred Skills
Experience with cloud platforms like AWS, Azure, and GCP, including their management consoles and CLI.
Proficiency in building and maintaining infrastructure on:
AWS, using services such as EC2, S3, ELB, VPC, CloudFront, Glue, Athena, etc.
Azure, using services such as Azure VMs, Blob Storage, Azure Functions, Virtual Networks, Azure Active Directory, Azure SQL Database, etc.
GCP, using services such as Compute Engine, Cloud Storage, Cloud Functions, VPC, Cloud IAM, BigQuery, etc.
Expertise in Linux system administration and performance tuning.
Strong programming skills in Python, Bash, and NodeJS.
In-depth knowledge of container technologies like Docker and Kubernetes.
Experience with real-time, big data platforms including architectures like HDFS/HBase, Zookeeper, and Kafka.
Familiarity with central logging systems such as ELK (Elasticsearch, Logstash, Kibana).
Competence in implementing monitoring solutions using tools like Grafana, Telegraf, and InfluxDB.

Benefits
Competitive salary package and stock options
Opportunity for continuous learning
Fully sponsored EAP services
Excellent work culture
Opportunity to be an integral part of our growth story and grow with our company
Health insurance for employees and dependents
Flexible work hours
Remote-friendly company
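The responsibilities above mention designing monitoring tools for service health with fault detection and alerting. The following minimal sketch shows what such instrumentation can look like with the Python prometheus_client library; the metric names, port, and simulated probes are assumptions for illustration only.

# Hedged sketch of service-health instrumentation with prometheus_client.
import random
import time

from prometheus_client import Counter, Gauge, start_http_server

REQUEST_ERRORS = Counter("app_request_errors_total", "Total failed requests")
QUEUE_DEPTH = Gauge("app_queue_depth", "Items waiting in the work queue")

if __name__ == "__main__":
    start_http_server(9100)  # Prometheus scrapes http://host:9100/metrics
    while True:
        QUEUE_DEPTH.set(random.randint(0, 50))  # stand-in for a real probe
        if random.random() < 0.05:
            REQUEST_ERRORS.inc()                # stand-in for a real failure signal
        time.sleep(5)

A Grafana dashboard and alert rules (for example, alerting when the error counter's rate exceeds a threshold) would typically sit on top of metrics exposed this way.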

Posted 17 hours ago

Apply

12.0 - 15.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Objective:
Directly supervises all front office personnel and ensures proper completion of all front office duties. Directs and coordinates the activities of the front desk, reservations, guest services, and telephone areas. Prepares monthly reports and the budget for the front office department. Mumbai-based candidates only.

Duties & Responsibilities:
Trains, cross-trains, and retrains all front office personnel.
Participates in the selection of front office personnel.
Maintains working relationships and communicates with all departments.
Maintains master key control.
Verifies that accurate room status information is maintained and properly communicated.
Resolves guest problems quickly, efficiently, and courteously.
Updates group information; maintains, monitors, and prepares group requirements, and relays information to appropriate personnel.
Works within the allocated budget for the front office.
Receives information from the previous shift manager and passes on pertinent details to the incoming manager.
Checks cashiers in and out and verifies banks and deposits at the end of each shift.
Enforces all cash handling, check-cashing, and credit policies.
Upholds the hotel’s commitment to hospitality.
Prepares performance reports related to the front office.
Maximizes room revenue and occupancy by reviewing status daily; analyzes rate variance, monitors credit reports, and maintains close observation of the daily room count.
Monitors the selling status of the rooms daily (i.e., flash report, allowance, etc.).
Monitors high-balance guests and takes appropriate action.
Ensures implementation of all hotel policies.
Operates all aspects of the Front Office computer system, including software maintenance, report generation and analysis, and simple configuration changes.
Prepares revenue and occupancy forecasting.
Ensures logging and delivery of all messages, packages, and mail in a timely and professional manner.
Ensures that Front Office staff are, at all times, attentive, friendly, helpful and courteous to all guests, managers and other employees.
Monitors all V.I.P.s, special guests, and requests.
Maintains the required par levels of all front office and stationery supplies.
Reviews daily front office work and activity reports generated by Night Audit.
Reviews the Front Office log book and guest feedback forms on a daily basis.
Performs other duties as requested by management.

EDUCATION: A Bachelor’s degree in Hotel Management is a must (minimum two-year college degree). IDS software knowledge is mandatory.

EXPERIENCE: 12 to 15 years of hotel front desk managerial and supervisory experience, including experience in handling cash, accounting procedures, and general administrative tasks of the Front Office.

Posted 17 hours ago

Apply

0 years

0 Lacs

Raipur, Chhattisgarh, India

On-site

Role Summary
We are seeking a highly motivated and skilled Data Engineer to join our data and analytics team. This role is ideal for someone with strong experience in building scalable data pipelines, working with modern lakehouse architectures, and deploying data solutions on Microsoft Azure. You’ll be instrumental in developing, orchestrating, and maintaining our real-time and batch data infrastructure using tools like Apache Spark, Apache Kafka, Apache Airflow, Azure Data Services, and modern DevOps practices.

Key Responsibilities
Design and implement ETL/ELT data pipelines for structured and unstructured data using Azure Data Factory, Databricks, or Apache Spark.
Work with Azure Blob Storage, Data Lake, and Synapse Analytics to build scalable data lakes and warehouses.
Develop real-time data ingestion pipelines using Apache Kafka, Apache Flink, or Apache Beam.
Build and schedule jobs using orchestration tools like Apache Airflow or Dagster (see the orchestration sketch after this listing).
Perform data modeling using the Kimball methodology for building dimensional models in Snowflake or other data warehouses.
Implement data versioning and transformation using dbt and Apache Iceberg or Delta Lake.
Manage data cataloging and lineage using tools like Marquez or Collibra.
Collaborate with DevOps teams to containerize solutions using Docker, manage infrastructure with Terraform, and deploy on Kubernetes.
Set up and maintain monitoring and alerting systems using Prometheus and Grafana for performance and reliability.

Required Skills & Qualifications
Programming & Scripting: Proficiency in Python, with strong knowledge of OOP and data structures & algorithms. Comfortable working in Linux environments for development and deployment.
Database Technologies: Strong command of SQL and understanding of relational (DBMS) and NoSQL databases.
Big Data & Real-Time Processing: Solid experience with Apache Spark (PySpark/Scala). Familiarity with real-time processing tools like Kafka, Flink, or Beam.
Orchestration & Scheduling: Hands-on experience with Airflow, Dagster, or similar orchestration tools.
Cloud Platform: Deep experience with Microsoft Azure, especially Azure Data Factory, Blob Storage, Synapse, Azure Functions, etc. AZ-900 or other Azure certifications are a plus.
Lakehouse & Warehousing: Knowledge of dimensional modeling, Snowflake, Apache Iceberg, and Delta Lake. Understanding of modern lakehouse architecture and related best practices.
Data Cataloging & Governance: Familiarity with Marquez, Collibra, or other cataloging tools.
DevOps & CI/CD: Experience with Terraform, Docker, Kubernetes, and Jenkins or equivalent CI/CD tools.
Monitoring & Logging: Proficiency in setting up dashboards and alerts with Prometheus and Grafana.

Note: Immediate joiners will be preferred.
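For the orchestration bullet above, a minimal Apache Airflow DAG (assuming Airflow 2.4+, where the schedule parameter replaced schedule_interval) might look like the sketch below. The DAG id, schedule, and task bodies are hypothetical stand-ins for real pipeline steps.

# Hedged sketch of a two-step daily pipeline in Airflow 2.4+.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull raw files from Azure Blob Storage")  # placeholder step


def transform():
    print("run Spark/dbt transformations")  # placeholder step


with DAG(
    dag_id="daily_lakehouse_refresh",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # extract must finish before transform starts
    PythonOperator(task_id="extract", python_callable=extract) >> \
        PythonOperator(task_id="transform", python_callable=transform)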

Posted 17 hours ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

WHAT YOU’LL DO:
Hands-on quality assurance testing of user-focused web and responsive web applications, as well as the backend systems and infrastructure that support these experiences; advocating for clean code, testing, process refinements, and continuous improvements.
Collaborate with a team of engineers to plan, develop and maintain robust testing strategies, covering both manual and automation testing.
Define and communicate an accurate web test strategy in order to provide both qualitative and quantitative status/defect reports in a timely fashion.
Perform feature-based requirements gathering, test execution, regression testing, and functional system testing.
Be actively involved in release cycles by contributing to scope planning, test effort estimation and release sign-off.
Complete assigned tasks in a timely manner within project constraints.
Cultivate a collaborative working environment and a culture of technical ownership.

WHAT YOU’LL NEED:
A minimum of 5 years’ experience in manual and automation quality assurance testing, specializing in responsive web, mobile, and custom applications.
Strong in test scenarios, test case development, testing methodologies, and planning and execution of stories in an agile environment.
Experience in building and designing test cases, with examples of strong creative problem-solving, documentation, and communication.
Passion for web and mobile accessibility and security, as well as knowledge of the latest guidelines and standards of WCAG 2.1 AA+.
Hands-on experience using project management tools (JIRA & Confluence) for logging defects, creating test plans, creating test cases and reports.
Hands-on experience in mobile application testing across iOS and Android platforms, including functional, UI/UX, regression, and compatibility testing on real devices and emulators.
Proficiency in using mobile automation tools such as Appium and Selenium, with the ability to write and maintain robust, scalable test scripts integrated into CI/CD pipelines (see the sketch after this listing).
Strong experience leveraging test automation tools such as Selenium, Playwright, Cypress, Katalon, and Robot Framework.
Experience with unit testing frameworks (Jest, Mocha, PHPUnit, JUnit), load testing tools (JMeter, BlazeMeter, ab, siege, etc.), API testing tools (Postman) and continuous integration tools (Jenkins, CircleCI, Travis, GitHub Actions, etc.).
Familiarity with reading and understanding HTML, CSS, JavaScript, TypeScript, PHP, Python, and Java.
High awareness of modern optimization and performance techniques: detecting and correcting memory usage issues, as well as optimizing code for application performance.
Act as a product evangelist with a deep curiosity about technology trends.
Clear and articulate communication, positive attitude, and commitment to delivering quality work.
Self-motivated and focused on achieving excellence as a team.
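As referenced in the automation bullet above, a small pytest-plus-Selenium check of this kind could serve as a starting point for a CI-integrated suite. The URL and element locators are hypothetical; Selenium 4’s bundled driver manager is assumed.

# Hedged sketch: a headless-Chrome smoke test runnable with pytest.
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By


@pytest.fixture
def driver():
    options = webdriver.ChromeOptions()
    options.add_argument("--headless=new")   # run without a display, e.g. in CI
    drv = webdriver.Chrome(options=options)
    yield drv
    drv.quit()


def test_login_page_renders(driver):
    driver.get("https://example.com/login")          # placeholder URL
    # find_element raises NoSuchElementException, failing the test fast
    assert driver.find_element(By.ID, "username")    # placeholder locator
    assert driver.find_element(By.ID, "password")    # placeholder locator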

Posted 17 hours ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Summary
We are seeking an experienced Data Architect with expertise in Snowflake, dbt, Apache Airflow, and AWS to design, implement, and optimize scalable data solutions. The ideal candidate will play a critical role in defining data architecture, governance, and best practices while collaborating with cross-functional teams to drive data-driven decision-making.

Key Responsibilities
Data Architecture & Strategy: Design and implement scalable, high-performance cloud-based data architectures on AWS. Define data modeling standards for structured and semi-structured data in Snowflake. Establish data governance, security, and compliance best practices.
Data Warehousing & ETL/ELT Pipelines: Develop, maintain, and optimize Snowflake-based data warehouses. Implement dbt (Data Build Tool) for data transformation and modeling. Design and schedule data pipelines using Apache Airflow for orchestration.
Cloud & Infrastructure Management: Architect and optimize data pipelines using AWS services like S3, Glue, Lambda, and Redshift. Ensure cost-effective, highly available, and scalable cloud data solutions.
Collaboration & Leadership: Work closely with data engineers, analysts, and business stakeholders to align data solutions with business goals. Provide technical guidance and mentoring to the data engineering team.
Performance Optimization & Monitoring: Optimize query performance and data processing within Snowflake (see the sketch after this listing). Implement logging, monitoring, and alerting for pipeline reliability.

Required Skills & Qualifications
10+ years of experience in data architecture, engineering, or related roles.
Strong expertise in Snowflake, including data modeling, performance tuning, and security best practices.
Hands-on experience with dbt for data transformations and modeling.
Proficiency in Apache Airflow for workflow orchestration.
Strong knowledge of AWS services (S3, Glue, Lambda, Redshift, IAM, EC2, etc.).
Experience with SQL, Python, or Spark for data processing.
Familiarity with CI/CD pipelines and Infrastructure-as-Code (Terraform/CloudFormation) is a plus.
Strong understanding of data governance, security, and compliance (GDPR, HIPAA, etc.).

Preferred Qualifications
Certifications: AWS Certified Data Analytics – Specialty, Snowflake SnowPro Certification, or dbt Certification.
Experience with streaming technologies (Kafka, Kinesis) is a plus.
Knowledge of modern data stack tools (Looker, Power BI, etc.).
Experience in OTT streaming would be an added advantage.
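To make the Snowflake performance-monitoring idea concrete, here is a hedged sketch using the snowflake-connector-python package to surface long-running queries via the documented INFORMATION_SCHEMA.QUERY_HISTORY table function. The account, credentials, and object names are placeholders, and a real deployment would use key-pair or SSO auth rather than a literal password.

# Hedged sketch: list queries that ran longer than 60 seconds.
import snowflake.connector

conn = snowflake.connector.connect(
    user="SVC_ANALYTICS",          # hypothetical service account
    password="***",                # placeholder; prefer key-pair auth
    account="myorg-myaccount",     # placeholder account locator
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
)

try:
    cur = conn.cursor()
    cur.execute(
        "SELECT query_id, total_elapsed_time "
        "FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY()) "
        "WHERE total_elapsed_time > 60000 "   # elapsed time is in milliseconds
        "ORDER BY total_elapsed_time DESC"
    )
    for query_id, elapsed_ms in cur.fetchall():
        print(query_id, elapsed_ms)
finally:
    conn.close()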

Posted 17 hours ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

This role is for one of Weekday’s clients.
Min Experience: 5 years
Location: Hyderabad
Job Type: Full-time

We are seeking a highly skilled and motivated Azure DevOps Engineer with 5 to 8 years of hands-on experience to join our growing engineering team. In this role, you will be responsible for designing, implementing, and maintaining scalable and reliable DevOps solutions within the Microsoft Azure ecosystem. You will play a key role in enabling seamless development, testing, and deployment pipelines that empower our development teams to deliver high-quality software efficiently. This role demands deep expertise in Azure DevOps, GitHub, Infrastructure as Code (IaC), CI/CD pipelines, Docker, and Kubernetes. You will work closely with software engineers, architects, and product managers to streamline development workflows, ensure system reliability, and uphold industry-leading DevOps practices.

Key Responsibilities:
DevOps Implementation: Design, develop, and maintain end-to-end DevOps solutions within the Azure DevOps ecosystem, ensuring seamless integration with existing tools and environments.
CI/CD Pipeline Management: Build and manage scalable CI/CD pipelines using Azure DevOps and GitHub Actions to enable rapid and secure delivery of applications across multiple environments (see the sketch after this listing).
Infrastructure as Code (IaC): Implement and maintain infrastructure using tools like ARM templates, Terraform, or Bicep to ensure repeatability and consistency across environments.
Containerization & Orchestration: Develop and manage Docker containers and orchestrate them using Kubernetes in Azure Kubernetes Service (AKS) to support microservices architecture.
Source Control & Repository Management: Oversee and manage Git repositories, branching strategies, and access controls on GitHub and Azure Repos.
Monitoring & Security: Implement monitoring, logging, and security best practices across CI/CD pipelines and infrastructure to ensure observability and compliance.
Collaboration & Support: Collaborate with development and QA teams to troubleshoot build and deployment issues, provide DevOps expertise, and ensure high system availability.

Required Skills & Qualifications:
5-8 years of experience in DevOps, with at least 3 years focused on Azure DevOps and Azure cloud infrastructure.
Strong proficiency in the Azure DevOps ecosystem, including Boards, Repos, Pipelines, Test Plans, and Artifacts.
Solid experience with GitHub, Git workflows, and GitHub Actions.
Deep understanding of CI/CD pipeline design, automation, and implementation.
Proven experience in Infrastructure as Code (IaC) using tools such as Terraform, ARM templates, or Bicep.
Strong knowledge of Docker and container orchestration tools like Kubernetes (preferably Azure Kubernetes Service).
Familiarity with agile methodologies and DevSecOps principles.
Excellent problem-solving, communication, and collaboration skills.

Nice to Have:
Azure certifications (e.g., AZ-400, AZ-104).
Experience with monitoring tools such as Azure Monitor, Prometheus, or Grafana.
Knowledge of scripting languages (PowerShell, Bash, or Python).
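As a small illustration of the pipeline-management work described above, the sketch below lists a project’s pipelines through the Azure DevOps REST API using a Personal Access Token supplied as the basic-auth password. The organization, project, and token values are placeholders.

# Hedged sketch: enumerate Azure DevOps pipelines via the REST API.
import requests

ORG, PROJECT, PAT = "my-org", "my-project", "***"  # hypothetical values

resp = requests.get(
    f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/pipelines?api-version=7.1",
    auth=("", PAT),   # empty username; the PAT goes in the password slot
    timeout=30,
)
resp.raise_for_status()
for pipeline in resp.json().get("value", []):
    print(pipeline["id"], pipeline["name"])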

Posted 17 hours ago

Apply

7.0 years

40 Lacs

India

Remote

Experience: 7.00+ years
Salary: INR 4000000.00 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full Time Permanent position (Payroll and Compliance to be managed by MatchMove)
(*Note: This is a requirement for one of Uplers' clients - MatchMove)

What do you need for this opportunity?
Must have skills required: AWS Q, CodeWhisperer, Gen AI, CI/CD, containerization, Go, microservices, RESTful API, MySQL, PHP, PostgreSQL

MatchMove is looking for:
As a Technical Lead (Backend), you will play a pivotal role in shaping the engineering foundation for a robust, real-time, cross-border payment platform. You’ll be writing clean, secure, and scalable Go services powering billions in financial flows, while championing engineering excellence and thoughtful platform design.

You will contribute to:
Developing and scaling distributed payment transaction systems for cross-border and domestic remittance use cases.
Designing resilient microservices in Go for high-volume, low-latency transaction flows with regional compliance and localization.
Owning service-level metrics such as SLA adherence, latency (p95/p99), throughput, and availability.
Building API-first products with strong documentation, mocks, and observability from day one.
Enabling faster, safer development by leveraging Generative AI for test generation, documentation, and repetitive coding tasks — while maintaining engineering hygiene.
Mentoring a high-performing, globally distributed engineering team and contributing to code reviews, design sessions, and cross-team collaboration.

Responsibilities
Lead design and development of backend services in Go with concurrency, memory safety, and observability in mind.
Manage service uptime and reliability across multi-region deployments via dashboards, tracing, and alerting.
Maintain strict SLAs for mission-critical payment operations and support incident response during SLA violations.
Profile and optimize Go services using tools like pprof, benchstat, and the Go race detector.
Drive code quality through test-driven development, code reviews, and API-first workflows (OpenAPI / Swagger).
Collaborate cross-functionally with Product, QA, DevOps, Compliance, and Business to ensure production-readiness.
Maintain well-documented service boundaries and internal libraries for scalable engineering velocity.
Encourage strategic use of Generative AI for API mocking, test data generation, schema validation, and static analysis.
Advocate for clean architecture, technical debt remediation, and security best practices (e.g., rate limiting, mTLS, context timeouts).

Requirements
At least 7 years of engineering experience with deep expertise in Go (Golang).
Expert-level understanding of concurrency, goroutines, channels, synchronization primitives, and distributed coordination patterns.
Strong grasp of profiling and debugging Go applications, memory management, and performance tuning.
Proven experience in instrumenting production systems for SLAs/SLIs with tools like Prometheus, Grafana, or OpenTelemetry.
Solid experience with PostgreSQL / MySQL, schema design for high-consistency systems, and the transaction lifecycle in financial services.
Experience building, documenting, and scaling RESTful APIs in an API-first platform environment.
Comfort with cloud-native tooling, containerization, and DevOps workflows (CI/CD, blue-green deployment, rollback strategies).
Demonstrated understanding of observability practices: structured logging, distributed tracing, and alerting workflows.

Brownie Points
Experience in payments, card issuance, or remittance infrastructure.
Working knowledge of PHP (for legacy systems).
Contributions to Go open-source projects or public technical content.
Experience with GenAI development tools like AWS Q and CodeWhisperer in a team setting.
Track record of delivering high-quality services in regulated environments with audit, compliance, and security mandates.

Engagement Model: Direct placement with the client
This is a remote role.
Shift timings: 10 AM to 7 PM

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 18 hours ago

Apply

7.0 years

40 Lacs

India

Remote

Experience: 7.00+ years
Salary: INR 4000000.00 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full Time Permanent position (Payroll and Compliance to be managed by MatchMove)
(*Note: This is a requirement for one of Uplers' clients - MatchMove)

What do you need for this opportunity?
Must have skills required: Gen AI, AWS data stack, Kinesis, open table format, PySpark, stream processing, Kafka, MySQL, Python

MatchMove is looking for:
As a Technical Lead - Data Platform, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.

You will contribute to:
Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark (see the sketch after this listing).
Structuring and evolving data into OTF formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services.
Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
Using Generative AI tools to enhance developer productivity — including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights.
Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.

Responsibilities:
Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR.
Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication.
Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation.
Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g. fraud engines) and human-readable interfaces (e.g. dashboards).
Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.

Requirements
At least 7 years of experience in data engineering.
Deep hands-on experience with the AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains — with strong engineering hygiene.
Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.

Brownie Points:
Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements.
Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
Familiarity with data contracts, data mesh patterns, and data-as-a-product principles.
Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3.
Experience building data platforms for ML/AI teams or integrating with model feature stores.

Engagement Model: Direct placement with the client
This is a remote role.
Shift timings: 10 AM to 7 PM

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
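To ground the PySpark pipeline work referenced above, here is a minimal batch-curation sketch. The bucket paths, column names, and filter are assumptions rather than the client’s actual schema, and the s3a paths presume a Spark build with the Hadoop S3 connector configured.

# Hedged sketch: curate raw JSON transactions into partitioned Parquet.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("txn-batch-curation").getOrCreate()

raw = spark.read.json("s3a://raw-zone/transactions/")  # placeholder path
curated = (
    raw.withColumn("txn_date", F.to_date("created_at"))   # hypothetical column
       .filter(F.col("status") == "SETTLED")              # hypothetical filter
)

# Partitioning by date keeps Athena/Glue scans cheap for time-bounded queries.
curated.write.mode("overwrite").partitionBy("txn_date") \
       .parquet("s3a://curated-zone/transactions/")       # placeholder path

The same transformation could be registered as an AWS Glue job and orchestrated from Step Functions or Airflow, matching the tooling named in the listing.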

Posted 18 hours ago

Apply

0 years

0 Lacs

India

On-site

Back-End Engineer – Go + PostgreSQL (Contract)

Core Skills (“Must-Have”)
Golang expertise: Idiomatic Go 1.21+, goroutines / channels, std-lib HTTP & sql packages, context-aware code.
Relational-data mastery: Hands-on with PostgreSQL 13+ — schema design, indexes, migrations (Flyway, Goose, or pg-migrate). Comfortable writing performant SQL and debugging query plans.
API craftsmanship: Design and version REST/JSON (or gRPC) endpoints; enforce contract tests and backward compatibility.
Quality & DevOps hygiene: Unit + integration tests (go test / Testcontainers), GitHub Actions or similar CI, Dockerised local setup. Observability hooks (Prometheus metrics, structured logging, Sentry).
Collaboration fluency: Pair daily with React front-end engineers & designers; discuss payloads, edge cases, and rollout plans up front.

Day-to-Day Responsibilities
Ship incremental data-model and API updates — e.g., add a column with default values, write safe up/down migrations, expose the field in existing endpoints, and coordinate UI changes.
Design small new features such as derived “metric-health” tables or aggregated views that power dashboards.
Guard performance & reliability — run load tests, add indexes, set query timeouts, and handle graceful fallbacks behind feature flags.
Keep the codebase clean — review PRs, refactor shared helpers, and prune dead code as the product evolves.

Nice-to-Have Extras
Production experience with a feature-flag SDK (LaunchDarkly, Split, etc.) to stage database changes safely.
Familiarity with event streaming (Kafka / NATS) or background job runners (Go workers, Sidekiq-like queues).
Exposure to container orchestration (Kubernetes, ECS) and infrastructure-as-code (Terraform, Pulumi).

Sample Mini-Projects You Might Tackle
Scenario: Add a property to an existing entity — write a migration to add a source_type column to metrics, backfill it with a default, update the GET/POST /metrics handlers & Swagger docs, and unit-test both happy and error paths.
Scenario: New aggregated view — create a new table metric_health that rolls up pass/fail counts per metric, expose a /metrics/{id}/health endpoint returning red/amber/green status with pagination, and instrument it with Prometheus counters.

Posted 18 hours ago

Apply

7.0 years

40 Lacs

Kochi, Kerala, India

Remote

Experience: 7.00+ years
Salary: INR 4000000.00 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full Time Permanent position (Payroll and Compliance to be managed by MatchMove)
(*Note: This is a requirement for one of Uplers' clients - MatchMove)

What do you need for this opportunity?
Must have skills required: AWS Q, CodeWhisperer, Gen AI, CI/CD, containerization, Go, microservices, RESTful API, MySQL, PHP, PostgreSQL

MatchMove is looking for:
As a Technical Lead (Backend), you will play a pivotal role in shaping the engineering foundation for a robust, real-time, cross-border payment platform. You’ll be writing clean, secure, and scalable Go services powering billions in financial flows, while championing engineering excellence and thoughtful platform design.

You will contribute to:
Developing and scaling distributed payment transaction systems for cross-border and domestic remittance use cases.
Designing resilient microservices in Go for high-volume, low-latency transaction flows with regional compliance and localization.
Owning service-level metrics such as SLA adherence, latency (p95/p99), throughput, and availability.
Building API-first products with strong documentation, mocks, and observability from day one.
Enabling faster, safer development by leveraging Generative AI for test generation, documentation, and repetitive coding tasks — while maintaining engineering hygiene.
Mentoring a high-performing, globally distributed engineering team and contributing to code reviews, design sessions, and cross-team collaboration.

Responsibilities
Lead design and development of backend services in Go with concurrency, memory safety, and observability in mind.
Manage service uptime and reliability across multi-region deployments via dashboards, tracing, and alerting.
Maintain strict SLAs for mission-critical payment operations and support incident response during SLA violations.
Profile and optimize Go services using tools like pprof, benchstat, and the Go race detector.
Drive code quality through test-driven development, code reviews, and API-first workflows (OpenAPI / Swagger).
Collaborate cross-functionally with Product, QA, DevOps, Compliance, and Business to ensure production-readiness.
Maintain well-documented service boundaries and internal libraries for scalable engineering velocity.
Encourage strategic use of Generative AI for API mocking, test data generation, schema validation, and static analysis.
Advocate for clean architecture, technical debt remediation, and security best practices (e.g., rate limiting, mTLS, context timeouts).

Requirements
At least 7 years of engineering experience with deep expertise in Go (Golang).
Expert-level understanding of concurrency, goroutines, channels, synchronization primitives, and distributed coordination patterns.
Strong grasp of profiling and debugging Go applications, memory management, and performance tuning.
Proven experience in instrumenting production systems for SLAs/SLIs with tools like Prometheus, Grafana, or OpenTelemetry.
Solid experience with PostgreSQL / MySQL, schema design for high-consistency systems, and the transaction lifecycle in financial services.
Experience building, documenting, and scaling RESTful APIs in an API-first platform environment.
Comfort with cloud-native tooling, containerization, and DevOps workflows (CI/CD, blue-green deployment, rollback strategies).
Demonstrated understanding of observability practices: structured logging, distributed tracing, and alerting workflows.

Brownie Points
Experience in payments, card issuance, or remittance infrastructure.
Working knowledge of PHP (for legacy systems).
Contributions to Go open-source projects or public technical content.
Experience with GenAI development tools like AWS Q and CodeWhisperer in a team setting.
Track record of delivering high-quality services in regulated environments with audit, compliance, and security mandates.

Engagement Model: Direct placement with the client
This is a remote role.
Shift timings: 10 AM to 7 PM

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 18 hours ago

Apply

7.0 years

40 Lacs

Kochi, Kerala, India

Remote

Experience: 7.00+ years
Salary: INR 4000000.00 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full Time Permanent position (Payroll and Compliance to be managed by MatchMove)
(*Note: This is a requirement for one of Uplers' clients - MatchMove)

What do you need for this opportunity?
Must have skills required: Gen AI, AWS data stack, Kinesis, open table format, PySpark, stream processing, Kafka, MySQL, Python

MatchMove is looking for:
As a Technical Lead - Data Platform, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.

You will contribute to:
Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
Structuring and evolving data into OTF formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services.
Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
Using Generative AI tools to enhance developer productivity — including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights.
Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.

Responsibilities:
Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR.
Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication.
Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation.
Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g. fraud engines) and human-readable interfaces (e.g. dashboards).
Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.

Requirements
At least 7 years of experience in data engineering.
Deep hands-on experience with the AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains — with strong engineering hygiene.
Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.

Brownie Points:
Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements.
Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
Familiarity with data contracts, data mesh patterns, and data-as-a-product principles.
Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3.
Experience building data platforms for ML/AI teams or integrating with model feature stores.

Engagement Model: Direct placement with the client
This is a remote role.
Shift timings: 10 AM to 7 PM

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 18 hours ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies