
1730 Aggregation Jobs - Page 15

Set up a job alert
JobPe aggregates results for easy application access, but you apply directly on the original job portal.

3.0 years

3 - 3 Lacs

India

On-site

MIS Executive - Job Description
The MIS function is critical to getting a realistic view of the operational quality of the Centres, including the Bharosa Centres. The MIS function will be responsible for data collection with respect to attendance, expenditure, feedback-gathering exercises, data dumps from various software applications, etc., for review and analytics. Hands-on experience with MS Excel is an absolute must for this function. Exposure to web forms and business analytics tools will be a great advantage.

Criteria:
- Qualification: Any graduate with 3 years of experience in an MIS function (not Data Entry), with experience in the collection and aggregation of large-volume data from multiple stakeholders.
- Gender: Any
- Experience: report preparation in Excel; making PPTs using various charts and tables; supporting the accounts team in claims and reimbursements management, etc.

Skills:
- MS Excel: Pivot Table, Pivot Chart, Text to Columns, advanced Find and Replace, HLOOKUP, VLOOKUP, data cleaning
- Web forms: creating online forms for data collection (e.g. Google Forms); understanding of various kinds of fields and their usages; live report generation through Google Sheets, etc.
- MS Word: Mail Merge function
- Accounting: helping in passing accounting entries in Tally or Focus; maintaining books of records, including month-wise and year-wise payment vouchers, petty cash register, etc.
- Office basics: folder management; version management of important files; auto backup to cloud; advanced search and find of files; preparing periodic statements

Languages: Should be fluent in Telugu & English. Should have working knowledge of Hindi.

Location: The MIS Executive will work out of the Bharosa CDEW PMU (Bharosa Society) located in the SHE Teams headquarters, Women Safety Wing, Lakdi Ka Pul, Hyderabad. There will be intra-city and occasional inter-city travel within the state for project implementation and monitoring purposes, for which commute will be provided as and when the need arises.

Compensation: The remuneration payable is INR 30,000 per month, with an annual appreciation of not less than 5% based on performance and continuance of the project.

Tenure of the contract: Co-terminus with project funding. The initial contract will be for 1 year. Annual performance appraisals will determine continuity of service, year on year.

Application Process: All applications will be accepted online ONLY. Link to application form (mandatory to submit): https://womensafetywing.telangana.gov.in/careers/mis-executive/

Selection Process: Online Application -> Application Scrutiny -> Short Assignment -> Technical Assessments -> Personal Interview -> Reference Check -> Offer -> Joining. [Note: All steps up to Reference Check are eliminatory in nature.] The position will report to the designated senior officer in the Women Safety Wing, with dotted-line reporting to the three commissioners under whose jurisdiction the new centres are coming up.

Job Type: Full-time
Pay: ₹30,000.00 - ₹31,000.00 per month
Benefits: Provident Fund
Schedule: Fixed shift
Supplemental Pay: Yearly bonus
Ability to commute/relocate: Lakdi ka pul, Hyderabad - 500004, Telangana: Reliably commute or planning to relocate before starting work (Required)
Application Question(s): Did you fill the application form provided in the Job Description?
Education: Bachelor's (Required)
Experience: total work: 3 years (Required)
Application Deadline: 01/08/2025
Expected Start Date: 16/07/2025

Posted 2 weeks ago

Apply

0 years

0 Lacs

Delhi

On-site

Make an impact with NTT DATA
Join a company that is pushing the boundaries of what is possible. We are renowned for our technical excellence and leading innovations, and for making a difference to our clients and society. Our workplace embraces diversity and inclusion – it’s a place where you can grow, belong and thrive.

- 24x7 monitoring and detection of known security threats and attacks
- Real-time email notifications for non-investigated alerts; notifications will include details of the incident and response measures
- Opening of incident tickets in the customer ITSM tool for non-investigated alerts
- Standard daily automated reports to distribution lists and weekly summary reports; daily and weekly reports will be provided
- Regular updates to existing use cases
- Addition of new use cases based on new global threats and inputs from the customer
- Documentation of use cases, including conditions, detection logic and analysis run-books
- Response to service requests for additional logs, filtering and aggregation of log data (an illustrative sketch follows this listing)
- The client's change management process will be followed for SIEM changes
- Quarterly sessions for fine-tuning use cases and reports

Workplace type: On-site Working

About NTT DATA
NTT DATA is a $30+ billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. We invest over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure, and connectivity. We are also one of the leading providers of digital and AI infrastructure in the world. NTT DATA is part of NTT Group and headquartered in Tokyo.

Equal Opportunity Employer
NTT DATA is proud to be an Equal Opportunity Employer with a global culture that embraces diversity. We are committed to providing an environment free of unfair discrimination and harassment. We do not discriminate based on age, race, colour, gender, sexual orientation, religion, nationality, disability, pregnancy, marital status, veteran status, or any other protected category. Join our growing global team and accelerate your career with us. Apply today.
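The service scope above includes filtering and aggregating log data and sending email summaries for non-investigated alerts. The following is a minimal, illustrative Python sketch of that kind of alert roll-up; the field names and sample alert records are assumptions, not any specific SIEM product's schema.

```python
# Illustrative only: filter and aggregate SIEM alerts into a notification
# summary. Field names (rule, severity, status) are hypothetical.
from collections import Counter

alerts = [
    {"rule": "Brute force login", "severity": "high", "status": "non-investigated"},
    {"rule": "Malware beacon", "severity": "critical", "status": "non-investigated"},
    {"rule": "Brute force login", "severity": "high", "status": "closed"},
]

# Keep only alerts that nobody has investigated yet.
open_alerts = [a for a in alerts if a["status"] == "non-investigated"]

# Aggregate by detection rule so the notification is a short summary, not a dump.
by_rule = Counter(a["rule"] for a in open_alerts)

lines = [f"{rule}: {count} open alert(s)" for rule, count in by_rule.most_common()]
email_body = "Non-investigated alerts in the last interval:\n" + "\n".join(lines)
print(email_body)  # in practice this summary would be emailed / pushed to the ITSM tool
```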

Posted 2 weeks ago

Apply

0 years

0 Lacs

Delhi Cantonment, Delhi, India

On-site

Make an impact with NTT DATA
Join a company that is pushing the boundaries of what is possible. We are renowned for our technical excellence and leading innovations, and for making a difference to our clients and society. Our workplace embraces diversity and inclusion – it’s a place where you can grow, belong and thrive.

- 24x7 monitoring and detection of known security threats and attacks
- Real-time email notifications for non-investigated alerts; notifications will include details of the incident and response measures
- Opening of incident tickets in the customer ITSM tool for non-investigated alerts
- Standard daily automated reports to distribution lists and weekly summary reports; daily and weekly reports will be provided
- Regular updates to existing use cases
- Addition of new use cases based on new global threats and inputs from the customer
- Documentation of use cases, including conditions, detection logic and analysis run-books
- Response to service requests for additional logs, filtering and aggregation of log data
- The client's change management process will be followed for SIEM changes
- Quarterly sessions for fine-tuning use cases and reports

Workplace type: On-site Working

About NTT DATA
NTT DATA is a $30+ billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. We invest over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure, and connectivity. We are also one of the leading providers of digital and AI infrastructure in the world. NTT DATA is part of NTT Group and headquartered in Tokyo.

Equal Opportunity Employer
NTT DATA is proud to be an Equal Opportunity Employer with a global culture that embraces diversity. We are committed to providing an environment free of unfair discrimination and harassment. We do not discriminate based on age, race, colour, gender, sexual orientation, religion, nationality, disability, pregnancy, marital status, veteran status, or any other protected category. Join our growing global team and accelerate your career with us. Apply today.

Posted 2 weeks ago

Apply

6.0 years

6 - 6 Lacs

Noida

On-site

Data Engineering – Technical Lead

About Us: Paytm is India’s leading digital payments and financial services company, which is focused on driving consumers and merchants to its platform by offering them a variety of payment use cases. Paytm provides consumers with services like utility payments and money transfers, while empowering them to pay via Paytm Payment Instruments (PPI) like Paytm Wallet, Paytm UPI, Paytm Payments Bank Netbanking, Paytm FASTag and Paytm Postpaid - Buy Now, Pay Later. To merchants, Paytm offers acquiring devices like Soundbox, EDC, QR and Payment Gateway, where payment aggregation is done through PPI and also other banks’ financial instruments. To further enhance merchants’ business, Paytm offers merchants commerce services through advertising and the Paytm Mini app store. Operating on this platform leverage, the company then offers credit services such as merchant loans, personal loans and BNPL, sourced by its financial partners.

About the Role: This position requires someone to work on complex technical projects and to work closely with peers in an innovative and fast-paced environment. For this role, we require someone with a strong product design sense who is specialized in Hadoop and Spark technologies.

Requirements: Minimum 6+ years of experience in Big Data technologies.

The position:
- Grow our analytics capabilities with faster, more reliable tools, handling petabytes of data every day.
- Brainstorm and create new platforms that can help in our quest to make data available to cluster users in all shapes and forms, with low latency and horizontal scalability.
- Diagnose problems across the entire technical stack and make the changes needed to fix them.
- Design and develop a real-time events pipeline for data ingestion for real-time dashboarding (an illustrative sketch follows this listing).
- Develop complex and efficient functions to transform raw data sources into powerful, reliable components of our data lake.
- Design and implement new components and various emerging technologies in the Hadoop ecosystem, and successfully execute various projects.
- Be a brand ambassador for Paytm – Stay Hungry, Stay Humble, Stay Relevant!

Skills that will help you succeed in this role:
- Strong hands-on experience with Hadoop, MapReduce, Hive, Spark, PySpark, etc.
- Excellent programming/debugging skills in Python/Java/Scala.
- Experience with any scripting language such as Python, Bash, etc.
- Good to have: experience working with NoSQL databases like HBase and Cassandra.
- Hands-on programming experience with multithreaded applications.
- Good to have: experience with databases, SQL, and messaging queues like Kafka.
- Good to have: experience developing streaming applications, e.g. Spark Streaming, Flink, Storm, etc.
- Good to have: experience with AWS and cloud technologies such as S3.
- Experience with caching architectures like Redis, etc.

Why join us:
- You get an opportunity to make a difference, and have a great time doing that.
- You are challenged and encouraged here to do stuff that is meaningful for you and for those we serve.
- You should work with us if you think seriously about what technology can do for people.
- We are successful, and our successes are rooted in our people's collective energy and unwavering focus on the customer, and that's how it will always be.

Compensation: If you are the right fit, we believe in creating wealth for you. With 500 mn+ registered users, 21 mn+ merchants and the depth of data in our ecosystem, we are in a unique position to democratize credit for deserving consumers & merchants – and we are committed to it.
India’s largest digital lending story is brewing here. It’s your opportunity to be a part of the story!
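The role above calls for a real-time events pipeline feeding dashboards on the Hadoop/Spark stack. Below is a minimal PySpark Structured Streaming sketch of that pattern; the Kafka broker, topic name, schema and output paths are hypothetical placeholders, not Paytm's actual configuration, and the Kafka connector package must be available on the cluster.

```python
# Minimal sketch, assuming a Kafka source and a Parquet sink on the data lake.
# Broker, topic, schema and paths are placeholders for illustration only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("events-ingestion").getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")   # placeholder
       .option("subscribe", "payment_events")              # placeholder topic
       .load())

events = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(F.from_json("json", event_schema).alias("e"))
          .select("e.*"))

# One-minute aggregates that a real-time dashboard could read.
per_minute = (events
              .withWatermark("event_time", "5 minutes")
              .groupBy(F.window("event_time", "1 minute"), "event_type")
              .agg(F.count("*").alias("events"), F.sum("amount").alias("gmv")))

query = (per_minute.writeStream
         .outputMode("append")
         .format("parquet")
         .option("path", "/datalake/events_per_minute")        # placeholder path
         .option("checkpointLocation", "/checkpoints/events")  # placeholder path
         .start())
query.awaitTermination()
```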

Posted 2 weeks ago

Apply

6.0 years

2 - 6 Lacs

Noida

On-site

Foxit is remaking the way the world interacts with documents through advanced PDF technology and tools. We are a leading global software provider of fast, affordable, and secure PDF solutions that are used by millions of people worldwide. Winner of numerous awards, Foxit has customers in more than 200 countries and global operations. We have a complete product line and an exciting and aggressive development schedule. Our proven PDF technology is disrupting the established status quo and has accelerated our company growth. We are proud to list as customers Google, Amazon, and NASDAQ, and with your skills and help, we plan to add many more. Foxit has offices all over the world, including locations in the US, Asia, Europe, and Australia. For more information, please visit https://www.foxit.com.

Role Overview: As a DevOps Engineer, you will serve as a key technical liaison between Foxit’s global production environments and our China-based development teams. Your mission is to ensure seamless cross-border collaboration by investigating complex issues, facilitating secure and compliant debugging workflows, and enabling efficient delivery through modern DevOps and cloud infrastructure practices. This is a hands-on, hybrid role requiring deep expertise in application development, cloud operations, and diagnostic tooling. You'll work across production environments to maintain business continuity, support rapid issue resolution, and empower teams working under data access and sovereignty constraints.

Key Responsibilities:
Cross-Border Development Support
- Investigate complex, high-priority production issues inaccessible to China-based developers.
- Build sanitized diagnostic packages and test environments to enable effective offshore debugging (an illustrative sketch follows this listing).
- Lead root cause analysis for customer-impacting issues across our Java and PHP-based application stack.
- Document recurring patterns and technical solutions to improve incident response efficiency.
- Partner closely with China-based developers to maintain architectural alignment and system understanding.
Cloud Infrastructure & DevOps
- Manage containerized workloads (Docker/Kubernetes) in AWS and Azure; optimize performance and cost.
- Support deployment strategies (blue-green, canary, rolling) and troubleshoot CI/CD pipelines (Jenkins, GitHub Actions, GitLab CI).
- Implement and manage Infrastructure as Code using Terraform (multi-cloud), with CloudFormation or ARM Templates as a plus.
- Support observability through tools like New Relic, CloudWatch, Azure Monitor, and log aggregation systems.
- Automate environment provisioning, monitoring, and diagnostics using Python, Bash, and PowerShell.
Collaboration & Communication
- Translate production symptoms into actionable debugging tasks for teams without access to global environments.
- Work closely with database, QA, and SRE teams to resolve infrastructure or architectural issues.
- Ensure alignment with global data compliance policies (SOC 2, NSD-104, GDPR) when sharing data across borders.
- Communicate technical issues and resolutions clearly to both technical and non-technical stakeholders.

Qualifications:
Technical Skills:
- Languages: Advanced in Java and PHP (Spring Boot, Yii); familiarity with JavaScript a plus.
- Architecture: Experience designing and optimizing backend microservices and APIs.
- Cloud Platforms: Hands-on with AWS (EC2, Lambda, RDS) and Azure (VMs, Functions, SQL DB).
- Containerization: Docker & Kubernetes (EKS/AKS); Helm experience a plus.
- IaC & Automation: Proficient in Terraform; scripting with Python/Bash.
- DevOps: Familiar with modern CI/CD pipelines; automated testing (Cypress, Playwright).
- Databases & Messaging: MySQL, MongoDB, Redis, RabbitMQ.

Professional Experience Required:
- Minimum 6+ years of full-stack or backend development experience in high-concurrency systems.
- Strong understanding of system design, with hands-on experience building and maintaining cloud infrastructure using AWS and/or Azure, and of global software deployment practices.
- Experience designing and deploying microservices architectures using Docker and Kubernetes (EKS/AKS).
- Strong background in Infrastructure as Code (IaC) using Terraform, with scripting in Python/Bash for automation.
- Experience working with databases (MySQL, MongoDB, Redis) in high-concurrency environments.
- Experience working in global, distributed engineering teams with data privacy or access restrictions.

Preferred:
- Exposure to compliance frameworks (SOC 2, GDPR, NSD-104, ISO 27001, HIPAA).
- Familiarity with cloud networking, CDN configuration, and cost optimization strategies.
- Tools experience with Postman, REST Assured, or security testing frameworks.

Why Foxit?
- Work at the intersection of development and operations on a global scale.
- Be a trusted technical enabler for distributed teams facing real-world constraints.
- Join a high-impact team modernizing cloud infrastructure for enterprise-grade document solutions.
- Competitive compensation, professional development programs, and a collaborative culture.
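The responsibilities above include building sanitized diagnostic packages so offshore teams can debug without seeing restricted data. The following is a minimal, illustrative Python sketch of that idea; the redaction patterns and file names are assumptions, not Foxit's actual sanitization or compliance policy.

```python
# Illustrative sketch only: scrub obvious PII (emails, long digit runs) from a
# log excerpt before bundling it into a diagnostic package. Patterns and file
# names are assumptions, not a real sanitization policy.
import re
import zipfile

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
DIGITS = re.compile(r"\b\d{8,}\b")  # account numbers, phone numbers, etc.

def sanitize(text: str) -> str:
    text = EMAIL.sub("<redacted-email>", text)
    return DIGITS.sub("<redacted-number>", text)

def build_package(log_path: str, out_path: str) -> None:
    with open(log_path, "r", encoding="utf-8", errors="replace") as f:
        cleaned = sanitize(f.read())
    with zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as z:
        z.writestr("sanitized.log", cleaned)

# Example usage (paths are placeholders):
# build_package("app.log", "diagnostic_package.zip")
```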

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Noida

On-site

About Us: Paytm is India’s leading digital payments and financial services company, which is focused on driving consumers and merchants to its platform by offering them a variety of payment use cases. Paytm provides consumers with services like utility payments and money transfers, while empowering them to pay via Paytm Payment Instruments (PPI) like Paytm Wallet, Paytm UPI, Paytm Payments Bank Netbanking, Paytm FASTag and Paytm Postpaid - Buy Now, Pay Later. To merchants, Paytm offers acquiring devices like Soundbox, EDC, QR and Payment Gateway, where payment aggregation is done through PPI and also other banks’ financial instruments. To further enhance merchants’ business, Paytm offers merchants commerce services through advertising and the Paytm Mini app store. Operating on this platform leverage, the company then offers credit services such as merchant loans, personal loans and BNPL, sourced by its financial partners.

About the team: Paytm Ads is the digital advertising vertical that offers innovative ad solutions to clients across industries. It offers advertisers the opportunity to engage with 300 Mn+ users who interact with over 200 payment and retail services, online and offline, offered on the Paytm app. Paytm Ads maps user transactions to their lifestyle choices and creates customized segmentation cohorts for sharply targeted ad campaigns aimed at the most relevant TG.

Expectations/Requirements:
1. Proficient in SQL/Hive and deep expertise in building scalable business reporting solutions
2. Past experience in optimizing business strategy, product or process using data & analytics
3. Working knowledge of at least one programming language like Scala, Java or Python
4. Working knowledge of dashboard visualization; ability to execute cross-functional initiatives
5. Maintaining product & funnel dashboards and metrics on Pulse, Looker, Superset
6. Campaign analytics and debugging (an illustrative sketch follows this listing)
7. Data reporting for business asks, MBR, Lucky Wheel revenue, growth experiments

Superpowers/Skills that will help you succeed in this role:
1. 5 to 9 years of work experience in a business intelligence and analytics role in the financial services, e-commerce, consulting or technology domain
2. Demonstrated ability to directly partner with business owners to understand product requirements
3. Effective spoken and written communication to senior audiences, including strong data presentation and visualization skills
4. Prior success in working with extremely large datasets using big data technologies
5. Detail-oriented, with an aptitude for solving unstructured problems

Why join us:
- A collaborative, output-driven program that brings cohesiveness across businesses through technology
- Solid 360-degree feedback from your peer teams on your support of their goals
- With 500 mn+ registered users, 21 mn+ merchants and the depth of data in our ecosystem, we are in a unique position to democratize credit for deserving consumers & merchants – and we are committed to it.

India’s largest digital lending story is brewing here. It’s your opportunity to be a part of the story.
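The expectations above mention funnel dashboards and campaign analytics. Below is a minimal, hypothetical pandas sketch of a campaign funnel roll-up; the campaign names, stages and numbers are invented purely for illustration, and real reporting would run on SQL/Hive or the dashboarding stack named in the listing.

```python
# Hypothetical campaign funnel report: aggregate stage-level user counts per
# campaign and derive simple conversion ratios. All data here is made up.
import pandas as pd

events = pd.DataFrame({
    "campaign": ["diwali_offer"] * 4 + ["upi_cashback"] * 3,
    "stage":    ["impression", "click", "add_to_cart", "payment",
                 "impression", "click", "payment"],
    "users":    [100000, 8000, 1200, 400, 50000, 2500, 300],
})

funnel = events.pivot_table(index="campaign", columns="stage", values="users", aggfunc="sum")
funnel["click_through_rate"] = funnel["click"] / funnel["impression"]
funnel["click_to_payment"] = funnel["payment"] / funnel["click"]
print(funnel[["impression", "click", "payment", "click_through_rate", "click_to_payment"]])
```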

Posted 2 weeks ago

Apply

0 years

0 Lacs

Delhi, India

On-site

Make an impact with NTT DATA
Join a company that is pushing the boundaries of what is possible. We are renowned for our technical excellence and leading innovations, and for making a difference to our clients and society. Our workplace embraces diversity and inclusion – it’s a place where you can grow, belong and thrive.

- 24x7 monitoring and detection of known security threats and attacks
- Real-time email notifications for non-investigated alerts; notifications will include details of the incident and response measures
- Opening of incident tickets in the customer ITSM tool for non-investigated alerts
- Standard daily automated reports to distribution lists and weekly summary reports; daily and weekly reports will be provided
- Regular updates to existing use cases
- Addition of new use cases based on new global threats and inputs from the customer
- Documentation of use cases, including conditions, detection logic and analysis run-books
- Response to service requests for additional logs, filtering and aggregation of log data
- The client's change management process will be followed for SIEM changes
- Quarterly sessions for fine-tuning use cases and reports

Workplace type: On-site Working

About NTT DATA
NTT DATA is a $30+ billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. We invest over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure, and connectivity. We are also one of the leading providers of digital and AI infrastructure in the world. NTT DATA is part of NTT Group and headquartered in Tokyo.

Equal Opportunity Employer
NTT DATA is proud to be an Equal Opportunity Employer with a global culture that embraces diversity. We are committed to providing an environment free of unfair discrimination and harassment. We do not discriminate based on age, race, colour, gender, sexual orientation, religion, nationality, disability, pregnancy, marital status, veteran status, or any other protected category. Join our growing global team and accelerate your career with us. Apply today.

Posted 2 weeks ago

Apply

5.0 - 8.0 years

4 - 6 Lacs

Indore

On-site

Company Overview
Alphanext is a global talent solutions company with offices in London, Pune, and Indore. We connect top-tier technical talent with forward-thinking organizations to drive innovation and transformation through technology.

Role Overview
We are looking for an experienced Sr. Azure DevOps Engineer with 5–8 years of hands-on experience. The ideal candidate will have deep expertise in Azure, Kubernetes, Terraform, and OpenShift, with a strong foundation in CI/CD automation, infrastructure engineering, and cloud operations. This role is pivotal to designing scalable systems and streamlining application deployment processes across cloud environments. You will collaborate closely with software, data, and security teams to ensure infrastructure scalability, system reliability, and secure operations.

Key Responsibilities
Infrastructure & Automation
- Design, implement, and manage Infrastructure as Code (IaC) using Terraform with modular and reusable practices.
- Automate end-to-end build, test, and deployment pipelines using Azure DevOps.
- Maintain scalable, resilient, and cost-optimized cloud infrastructure in Azure.
Containerization & Orchestration
- Manage Kubernetes clusters and deploy containerized applications using Helm charts and Azure DevOps Pipelines.
- Support OpenShift platforms for enterprise-grade container orchestration and hybrid cloud scenarios.
- Work with Docker for container build, runtime, and lifecycle management.
Monitoring & Observability
- Set up and maintain monitoring, alerting, and log aggregation using Elastic Stack (Metricbeat, Filebeat, Kibana) or equivalent tools.
- Ensure proactive infrastructure health checks and performance optimization (an illustrative sketch follows this listing).
Scripting & Workflow Automation
- Automate tasks using Python, Bash, PowerShell, or Ansible to reduce manual effort and improve reliability.
- Develop reusable scripts and playbooks to streamline operational workflows.
Security & Compliance
- Work with cybersecurity teams to enforce encryption, access control, and secrets management.
- Ensure infrastructure meets compliance and audit requirements through automation and controls.
Troubleshooting & Support
- Provide L2/L3 support for infrastructure, deployments, and application performance issues.
- Lead root cause analysis and incident response efforts.
Documentation & Collaboration
- Maintain up-to-date documentation for infrastructure architecture, CI/CD workflows, and operational procedures.
- Collaborate across cross-functional teams to ensure alignment and knowledge sharing.

Must-Have Skills
- 5–8 years of hands-on experience in DevOps engineering roles.
- Deep proficiency in Azure services and Azure DevOps for CI/CD.
- Hands-on experience with Kubernetes and Docker in production environments.
- Strong experience with Terraform and Ansible for IaC and automation.
- Proficiency in scripting languages such as Python, Bash, or PowerShell.
- Working knowledge of Elastic Stack or similar observability tools.
- Experience with Git, Git Flow, and managing code repositories in CI/CD pipelines.

Nice-to-Have Skills
- Experience with OpenShift and hybrid cloud environments.
- Familiarity with data operations: ETL/ELT workflows, data modeling, and data lakes.
- Azure DevOps or relevant cloud certifications (preferred).
- Exposure to DevSecOps practices and integrating security into the CI/CD lifecycle.

Education & Certifications
- Bachelor's degree in Computer Science, Engineering, or a related technical field.
- Azure DevOps Engineer certification or equivalent cloud credentials (preferred).
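The monitoring and automation responsibilities above call for scripted, proactive health checks. A minimal standard-library Python sketch is shown below; the endpoint URLs, timeout and output handling are assumptions for illustration only, not Alphanext's actual setup.

```python
# Minimal health-check sketch using only the standard library. Endpoints and
# thresholds are placeholders, not real infrastructure.
import time
import urllib.request

ENDPOINTS = {
    "api": "https://example.com/health",      # placeholder URL
    "portal": "https://example.com/status",   # placeholder URL
}
TIMEOUT_SECONDS = 5

def check(name: str, url: str) -> dict:
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=TIMEOUT_SECONDS) as resp:
            ok = 200 <= resp.status < 300
    except Exception as exc:  # DNS failure, timeout, TLS error, etc.
        return {"name": name, "ok": False, "detail": str(exc)}
    return {"name": name, "ok": ok, "latency_ms": round((time.monotonic() - start) * 1000)}

if __name__ == "__main__":
    for name, url in ENDPOINTS.items():
        print(check(name, url))  # in practice, ship results to Elastic/alerting instead of stdout
```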

Posted 2 weeks ago

Apply

2.0 - 3.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Support an equity research analyst at a US-based sell-side firm tracking the US insurance sector:
- Build end-to-end financial models for initiating coverage; update models for earnings and events
- Contribute to initiation notes, earnings notes and other notes
- Search and aggregation related to the sector and companies
- Provide data and analysis based on various client requests
- Build and update sector databases
- Prepare and update marketing presentations
- Work on wall-cross assignments
- Use data sources such as FactSet and Bloomberg

The role requires:
- Keen understanding of financial analysis
- Excellent knowledge of accounting and valuation concepts
- Excellent MS Excel skills
- Ability to write research reports
- Excellent client management and communication
- Ability to think and work independently
- Strong time management skills

Experience: Candidates with at least 2-3 years of experience in equity research, especially those who have covered the insurance/insurance-tech sector, would be preferred. Suitable candidates from other sectors will also be considered.

Posted 2 weeks ago

Apply

15.0 years

0 Lacs

Greater Bengaluru Area

On-site

Key Responsibilities:
- Manage a QA team of 6 engineers focused on validating data pipelines, APIs, and front-end applications.
- Define, implement, and maintain test automation strategies for: ETL workflows (Airflow DAGs); API contracts, performance, and data sync; UI automation for internal and external portals.
- Collaborate with Data Engineering and Product teams to ensure accurate data ingestion from third-party systems such as Workday, ADP, Greenhouse, Lever, etc.
- Build and maintain robust automated regression suites for API and UI layers using industry-standard tools.
- Implement data validation checks, including row-level comparisons, schema evolution testing, null/missing value checks, and referential integrity checks (an illustrative sketch follows this listing).
- Own and evolve CI/CD quality gates and integrate automated tests into GitHub Actions, Jenkins, or equivalent.
- Ensure test environments are reproducible, version-controlled, and equipped for parallel test execution.
- Mentor QA team members in advanced scripting, debugging, and root-cause analysis practices.
- Develop monitoring/alerting frameworks for data freshness, job failures, and drift detection using Airflow and observability tools.

Technical Skills:
Core QA & Automation:
- Strong hands-on experience with Selenium, Playwright, or Cypress for UI automation.
- Deep expertise in API testing using Postman, REST-assured, Karate, or similar frameworks.
- Familiarity with contract testing using Pact or similar tools.
- Strong understanding of BDD/TDD frameworks (e.g., Pytest-BDD, Cucumber).
ETL / Data Quality:
- Experience testing ETL pipelines, preferably using Apache Airflow.
- Hands-on experience with SQL and data validation tools such as Great Expectations and custom Python data validators.
- Understanding of data modeling, schema versioning, and data lineage.
Languages & Scripting:
- Strong programming/scripting skills in Python (required), with experience using it for test automation and data validations.
- Familiarity with Bash, YAML, and JSON for pipeline/test configurations.
DevOps & CI/CD:
- Experience integrating tests into pipelines using tools like GitHub Actions, Jenkins, CircleCI, or GitLab CI.
- Familiarity with containerized environments using Docker and possibly Kubernetes.
Monitoring & Observability:
- Working knowledge of log aggregation and monitoring tools like Datadog, Grafana, Prometheus, or Splunk.
- Experience with Airflow monitoring, job-level metrics, and alerts for test/data failures.

Qualifications:
- 15+ years in QA/QE roles with 3+ years in a leadership or management capacity.
- Strong foundation in testing data-centric and distributed systems.
- Proven ability to define and evolve automation strategies in agile environments.
- Excellent analytical, communication, and organizational skills.

Preferred:
- Experience with data graphs, knowledge graphs, or employee graph modeling.
- Exposure to cloud platforms (AWS/GCP) and data services (e.g., S3, BigQuery, Redshift).
- Familiarity with the HR tech domain and integration challenges.
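The responsibilities above list row-level comparisons, null/missing value checks, and referential integrity checks. The following is a minimal pytest-style sketch of such checks using pandas; the file and column names are hypothetical, and a real suite would read from the warehouse rather than local CSV extracts.

```python
# Hypothetical pytest-style data validation checks: row-count comparison,
# null checks, and referential integrity. Table/column names are made up.
import pandas as pd

def load_source_and_target():
    source = pd.read_csv("source_employees.csv")      # placeholder extract
    target = pd.read_csv("warehouse_employees.csv")   # placeholder extract
    return source, target

def test_row_counts_match():
    source, target = load_source_and_target()
    assert len(source) == len(target), "row count drift between source and warehouse"

def test_required_fields_not_null():
    _, target = load_source_and_target()
    for col in ("employee_id", "hire_date"):
        assert target[col].notna().all(), f"null values found in {col}"

def test_referential_integrity():
    _, target = load_source_and_target()
    departments = pd.read_csv("warehouse_departments.csv")  # placeholder dimension
    orphans = set(target["department_id"]) - set(departments["department_id"])
    assert not orphans, f"department_ids missing from dimension table: {orphans}"
```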

Posted 2 weeks ago

Apply

0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Primary responsibility is to provide support for and manage network infrastructure, including the design and implementation of new sites for the business. This role requires in-depth knowledge of Aruba Central, Aruba switches, wireless deployment, and advanced wireless and switching.

Responsibilities:
- Solid experience configuring, deploying, installing, and troubleshooting Aruba/Cisco switches, wireless APs, and Aruba Central.
- Configuration and deployment of new Aruba switches, wireless controllers, and other network infrastructure devices.
- Perform analysis and troubleshooting of networking, wireless, and ISE-related problems.
- Provide support to customers on various issues reported by users, clients, and other connecting application integrations.
- Perform upgrades and patching of existing deployments such as switches, wireless clusters or access points.
- Working knowledge of Ethernet LAN switching protocols and standards, including VLANs, VLAN aggregation, EtherChannel, PVLANs, Spanning Tree & Rapid Spanning Tree, and 802.1Q.
- Experience with Infoblox (DHCP/DNS/IPAM).
- Experience with network monitoring tools like HPE IMC and Zabbix.
- Good knowledge of SNMP.
- Understanding and basic knowledge of 802.1X, RADIUS, TACACS+, different EAP methods, and PKI.
- Should understand Palo Alto and routing basics and how they work.
- Knowledge of ITIL standards: Incident, Change & Problem Management; exposure to problem solving and handling problem tickets.
- Experience with supporting, engineering, and maintaining large networks, providing highly professional second-level support.
- Should be ready to work in 24x7 shifts and able to work on multiple concurrent tasks.

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Senior MLM Web Developer for MLM Web Application (Fintech) - MERN Full-Stack Developer
Compensation: ₹45,000 - ₹50,000 per month salary
Start Date: June 24, 2025
Location: Work from office

About ProGain
ProGain is an e-commerce rental platform with a built-in MLM referral system and fintech integrations. We’re launching an MVP targeting 2,000 agents who will buy, rent, and earn referral incentives.

Role Overview:
You will own the full-stack build of our MVP: designing database schemas, referral logic, payment flows, and agent dashboards. You’ll collaborate with another MERN developer, a UI/UX designer, a QA engineer, and a PM to deliver and maintain a production-ready system.

Key Responsibilities:
Backend (Node.js/Express/MongoDB):
* Model Agents, Transactions, Config, and Contracts
* Build referral-tree logic (Levels 1–3) using materialized-path or DFS (an illustrative sketch follows this listing)
* Integrate Razorpay (primary) and Stripe (fallback) with idempotency
* Schedule quarterly interest payouts and top-up reminders
* Send WhatsApp (Twilio/Meta) and email (SendGrid/SES) notifications
Frontend (React.js):
* Build the Agent dashboard: referral tree, passbook (CSV), earnings calculator, KYC upload
* Create Admin/Super-Admin panels: agent CRUD, bulk migration (CSV), contract templates, WhatsApp broadcasts
* Implement auth flows (signup, login, OTP) and global state management (Context or Redux)
DevOps & CI/CD:
* Dockerize the backend; set up GitHub Actions for automated linting, testing, building, and deployment
* Monitor health checks; maintain a MongoDB replica set with daily backups
Testing & Quality:
* Write unit tests (Jest/Mocha), API tests (Postman/Newman), and end-to-end tests (Cypress)
* Run load tests (k6/Artillery) for 2,000 concurrent users
Security:
* Implement OWASP best practices (helmet, input validation, HTTPS)
* Securely store KYC documents (S3 or equivalent) with proper access controls

Requirements:
* 5+ years building MERN-stack applications
* Expertise in Node.js, Express, and MongoDB (replica sets, aggregation, transactions)
* Proven Razorpay and Stripe integration experience with idempotency
* Strong React.js skills (Hooks, Context/Redux, React Router)
* Experience with MLM/referral systems or hierarchical data models
* Familiarity with WhatsApp Business API (Twilio or Meta), SendGrid, and AWS SES
* Docker and GitHub Actions (or similar CI/CD) experience
* Experience scheduling background tasks (cron, AWS EventBridge, or Kubernetes CronJobs)
* Excellent problem-solving, communication, and documentation skills

Apply ASAP; roles will fill quickly.
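The backend work above includes referral-tree logic (Levels 1–3) using a materialized path. The posting's stack is Node.js/Express/MongoDB; purely for illustration, and to stay consistent with the other sketches in this document, the data-model idea is shown below in Python with pymongo. The collection name, field names and sample IDs are assumptions.

```python
# Illustrative sketch of materialized-path referral queries. The production
# stack in this posting is Node.js/Express; Python + pymongo is used here only
# to show the data-model idea. Names and sample documents are assumptions.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
agents = client["progain"]["agents"]

# Each agent stores the chain of ancestor IDs as a path string, e.g. the root
# agent "A1" -> "A1", her recruit "A7" -> "A1/A7", and so on.
agents.insert_many([            # sample data; a real collection is already populated
    {"_id": "A1", "path": "A1"},
    {"_id": "A7", "path": "A1/A7"},
    {"_id": "A9", "path": "A1/A7/A9"},
])

def downline(agent_id: str, max_levels: int = 3):
    """Return referrals up to `max_levels` below the given agent."""
    sponsor = agents.find_one({"_id": agent_id})
    depth = sponsor["path"].count("/")
    cursor = agents.find({"path": {"$regex": f"^{sponsor['path']}/"}})
    return [a for a in cursor
            if 0 < a["path"].count("/") - depth <= max_levels]

print([a["_id"] for a in downline("A1")])  # -> ['A7', 'A9']
```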

Posted 2 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Description
As Manager of the A&M GCC PI Supply Chain team, you will contribute subject matter expertise (SME) across the areas mentioned below:

S&OP
- Perform current state assessments and develop future state S&OP designs
- Manage and lead monthly S&OP cycles – Demand, Supply, Pre-S&OP, and S&OP
- Create MIS and insight decks for regional/global S&OP reviews
- Monitor gaps vs. plan, raise escalations, and manage resolutions through collaboration with cross-functional teams (commercial, supply, finance, strategy)

Product/Portfolio Planning
- Coordinate phase-in/phase-out activities with marketing, supply, and manufacturing teams
- Lead portfolio rationalization and health analysis; implement segmentation-based forecasting and fulfilment strategies
- Analyse KPIs related to revenue, margin per SKU, and stock reduction opportunities
- Work across the product development lifecycle process

Demand Planning
- Strong experience in end-to-end demand planning, including process design, data analysis, statistical forecast modelling, consensus forecasting, etc.
- Perform demand pattern analysis and segmentation to develop the right forecasting strategy
- Deep understanding and hands-on experience of building statistical and machine learning forecasting models, algorithms and drivers for generating forecasts
- Experience in forecast aggregation/disaggregation at different levels, consensus forecast planning, and calculating and monitoring key forecast metrics

Supply Planning
- Drive end-to-end fulfilment and replenishment execution, including master data validation, BOM setup, and planning parameter alignment
- Execute capacity planning (RCCP), master scheduling (MPS), material planning (MRP), and production scheduling across sites; coordinate for inputs and escalation management
- Optimize mid- and near-term supply plans through what-if simulations, volume allocation, changeover reduction, and sequencing strategies
- Track and report key supply KPIs such as plan adherence, capacity utilization, and changeover performance for continuous improvement

Inventory Planning
- Establish the inventory baseline, and perform assortment, product classification and ageing analysis
- Calculate inventory targets based on different stock components (e.g. safety stock, cycle stock, reorder point) and key associated drivers (e.g. lead time, forecast accuracy, etc.); an illustrative sketch follows this listing
- Perform current state analysis and develop future state process design
- Identify short-term and long-term opportunities based on calculated targets vs. baseline
- Hands-on experience in implementing identified opportunities to draw down inventory and realize value

Note: The expectation is not extensive hands-on experience across all areas, but a blend of process understanding and functional exposure/hands-on work aligned with the role's focus.

Qualifications
- Minimum of 6-10 years of experience in supply chain planning, consulting, or process improvement
- Previous advisory experience from a top-tier strategy firm, leading specialist, niche advisory firm, or Big-4 consultancy preferred
- Bachelor's degree in engineering or a related field
- MBA / master's degree in Supply Chain & Operations Management, Logistics, Business Administration, or a related field
- Strong storyboarding skills to translate operational insights into impactful client presentations
- Excellent communication and interpersonal skills with high motivation to learn and grow
- Ability to simultaneously work on several projects and effectively manage deadlines
- Experience in KPI tracking (forecast accuracy, bias, capacity utilization, plan adherence, changeover optimization, inventory turns, etc.) and performance dashboards
- Strong cross-functional collaboration and stakeholder engagement experience (product, demand, manufacturing, logistics, etc.)
- Hands-on experience with planning optimization tools (e.g., Kinaxis, o9, BY, etc.) is preferred
- Detail-oriented with strong organizational skills to excel in a deadline-driven environment
- Operational experience in running end-to-end S&OP cycles, exposure to business KPIs and leadership reviews
- Hands-on with cross-functional cadence & stakeholder management
- Excellent fact-gathering and analytical skills, including business process mapping and quantitative analysis
- Tools knowledge of Alteryx, Power BI, Python, linear programming platforms, etc. would be a big plus
- APICS CSCP, CPIM, CLTD certifications would be a plus

Inclusive Diversity
A&M’s entrepreneurial culture celebrates independent thinkers and doers who can positively impact our clients and shape our industry. The collaborative environment and engaging work—guided by A&M’s core values of Integrity, Quality, Objectivity, Fun, Personal Reward, and Inclusive Diversity—are the main reasons our people love working at A&M. Inclusive Diversity means we embrace diversity, and we foster inclusiveness, encouraging everyone to bring their whole self to work each day. It runs through how we recruit, develop employees, conduct business, support clients, and partner with vendors. It is the A&M way.

Equal Opportunity Employer
It is Alvarez & Marsal’s practice to provide and promote equal opportunity in employment, compensation, and other terms and conditions of employment without discrimination because of race, color, creed, religion, national origin, ancestry, citizenship status, sex or gender, gender identity or gender expression (including transgender status), sexual orientation, marital status, military service and veteran status, physical or mental disability, family medical history, genetic information or other protected medical condition, political affiliation, or any other characteristic protected by and in accordance with applicable laws. Employees and Applicants can find A&M policy statements and additional information by region here.
Unsolicited Resumes from Third-Party Recruiters Please note that as per A&M policy, we do not accept unsolicited resumes from third-party recruiters unless such recruiters are engaged to provide candidates for a specified opening. Any employment agency, person or entity that submits an unsolicited resume does so with the understanding that A&M will have the right to hire that applicant at its discretion without any fee owed to the submitting employment agency, person or entity.
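The inventory planning bullets in this listing mention calculating targets from safety stock, cycle stock and reorder point drivers such as lead time and forecast accuracy. Below is a minimal Python sketch of the standard textbook formulas; all input figures are made-up assumptions, not client data.

```python
# Textbook-style inventory target sketch: safety stock from demand variability
# and lead time, then reorder point and an average cycle-stock estimate.
# All input numbers are illustrative assumptions.
from statistics import NormalDist

avg_daily_demand = 120.0      # units/day
std_daily_demand = 35.0       # units/day
lead_time_days = 14
order_quantity = 2000         # units per replenishment order
service_level = 0.95          # target cycle service level

z = NormalDist().inv_cdf(service_level)                 # ~1.645 for 95%
safety_stock = z * std_daily_demand * lead_time_days ** 0.5
reorder_point = avg_daily_demand * lead_time_days + safety_stock
avg_cycle_stock = order_quantity / 2

print(f"safety stock  ~ {safety_stock:.0f} units")
print(f"reorder point ~ {reorder_point:.0f} units")
print(f"cycle stock   ~ {avg_cycle_stock:.0f} units (average)")
```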

Posted 2 weeks ago

Apply

4.0 - 7.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Description
As Senior Associate of the A&M GCC PI Supply Chain team, you will drive the activities across the key areas below:

S&OP
- Support current state assessment exercises and assist in designing future state S&OP processes
- Participate in monthly S&OP cycles – Demand, Supply, Pre-S&OP, and S&OP – by preparing inputs, coordinating data, and supporting meeting cadence
- Create MIS reports and insights decks for internal reviews and client discussions
- Track plan vs. actuals, highlight key gaps, and support resolution efforts in collaboration with cross-functional teams (commercial, supply, finance, strategy)
- Performed current state assessment and developed future state S&OP design
- Managed/led the monthly S&OP cycle – Demand, Supply, Pre-S&OP and S&OP
- Created MIS and insights decks for regional/global S&OP reviews
- Monitored gaps vs. plan, highlighted escalations, and managed resolutions through collaboration with cross-functional teams (commercial, supply, finance, strategy)

Product/Portfolio Planning
- Coordinate with marketing, supply, and manufacturing teams on phase-ins/phase-outs/NPIs
- Assist in portfolio rationalization and segmentation for forecasting and fulfilment strategies
- Monitor KPIs related to SKU performance, stock levels, and contribution margins
- Support product lifecycle data analysis
- Coordinated phase-in/phase-out with marketing, supply, and manufacturing
- Led portfolio rationalization and health analysis; segmentation-based forecasting and fulfilment strategy implementation
- Analysis of KPIs related to revenue, margin/SKU, stock reduction opportunities, etc.
- Experience in the product development lifecycle process

Demand Planning
- Support end-to-end demand planning activities including data cleaning, statistical modelling, and scenario simulation
- Perform demand trend analysis under guidance to support forecast development
- Work with models, algorithms, and tools to generate baseline statistical forecasts and refine them using business inputs
- Assist in forecast aggregation/disaggregation and preparation of key forecast performance metrics (an illustrative sketch follows this listing)
- Strong experience in end-to-end demand planning including process design, data analysis, statistical forecast modelling, consensus forecasting, etc.
- Performed demand pattern analysis and segmentation to develop the right forecasting strategy
- Deep understanding and hands-on experience of building statistical and machine learning forecasting models, algorithms and drivers for generating forecasts
- Experience in forecast aggregation/disaggregation at different levels, consensus forecast planning, and calculating and monitoring key forecast metrics

Supply Planning
- Support fulfilment and replenishment activities by validating planning master data (e.g., BOMs, lead times) and coordinating parameter setups
- Assist in executing RCCP, MPS, MRP, and production scheduling activities across sites
- Run simulations and optimization scenarios to support mid- and short-term supply planning
- Track and report supply-side KPIs (plan adherence, utilization, changeover metrics) to enable continuous improvement
- Drive end-to-end fulfilment and replenishment execution, including master data validation, BOM setup, and planning parameter alignment
- Execute capacity planning (RCCP), master scheduling (MPS), material planning (MRP), and production scheduling across sites; coordinate for inputs and escalation management
- Optimize mid- and near-term supply plans through what-if simulations, volume allocation, changeover reduction, and sequencing strategies
- Track and report key supply KPIs such as plan adherence, capacity utilization, and changeover performance for continuous improvement

Inventory Planning
- Support inventory baseline creation through data extraction and analysis of SKU classification and ageing
- Assist in setting and reviewing inventory targets (e.g., safety stock, reorder point) using lead time and forecast accuracy inputs
- Conduct data analysis of current inventory performance and process mapping for future state recommendations
- Contribute to opportunity identification and action tracking for inventory optimization initiatives
- Establish the inventory baseline, and perform assortment, product classification and ageing analysis
- Calculate inventory targets based on different stock components (e.g. safety stock, cycle stock, reorder point) and key associated drivers (e.g. lead time, forecast accuracy, etc.)
- Perform current state analysis and develop future state process design
- Identify short-term and long-term opportunities based on calculated targets vs. baseline
- Hands-on experience in implementing identified opportunities to draw down inventory and realize value

Note: The expectation is not extensive hands-on experience across all areas, but a blend of process understanding and functional exposure/hands-on work aligned with the role's focus.

Qualifications
- Minimum of 4-7 years of experience in supply chain planning, consulting, or process improvement
- Previous advisory experience from a top-tier strategy firm, leading specialist, niche advisory firm, or Big-4 consultancy preferred
- Bachelor's degree in engineering or a related field
- MBA / master's degree in Supply Chain & Operations Management, Logistics, Business Administration, or a related field
- Excellent communication and interpersonal skills with high motivation to learn and grow
- Ability to simultaneously work on several projects and effectively manage deadlines
- Experience in KPI tracking (NPI/phase-in and phase-out management, forecast accuracy, bias, capacity utilization, plan adherence, changeover optimization, days cover, inventory turns, etc.) and performance dashboards
- Strong cross-functional collaboration and stakeholder engagement experience (product, demand, manufacturing, logistics, etc.)
- Hands-on experience with planning optimization tools (e.g., Kinaxis, o9, BY, etc.) is preferred
- Detail-oriented with strong organizational skills to excel in a deadline-driven environment
- Operational experience in running end-to-end S&OP cycles, exposure to business KPIs and leadership reviews
- Hands-on with cross-functional cadence & stakeholder management
- Excellent fact-gathering and analytical skills, including business process mapping and quantitative analysis
- Tools knowledge of Alteryx, Power BI, Python, linear programming platforms, etc. would be a big plus
- APICS CSCP, CPIM, CLTD certifications would be a plus

Inclusive Diversity
A&M’s entrepreneurial culture celebrates independent thinkers and doers who can positively impact our clients and shape our industry. The collaborative environment and engaging work—guided by A&M’s core values of Integrity, Quality, Objectivity, Fun, Personal Reward, and Inclusive Diversity—are the main reasons our people love working at A&M. Inclusive Diversity means we embrace diversity, and we foster inclusiveness, encouraging everyone to bring their whole self to work each day. It runs through how we recruit, develop employees, conduct business, support clients, and partner with vendors. It is the A&M way.

Equal Opportunity Employer
It is Alvarez & Marsal’s practice to provide and promote equal opportunity in employment, compensation, and other terms and conditions of employment without discrimination because of race, color, creed, religion, national origin, ancestry, citizenship status, sex or gender, gender identity or gender expression (including transgender status), sexual orientation, marital status, military service and veteran status, physical or mental disability, family medical history, genetic information or other protected medical condition, political affiliation, or any other characteristic protected by and in accordance with applicable laws. Employees and Applicants can find A&M policy statements and additional information by region here.

Unsolicited Resumes from Third-Party Recruiters
Please note that as per A&M policy, we do not accept unsolicited resumes from third-party recruiters unless such recruiters are engaged to provide candidates for a specified opening. Any employment agency, person or entity that submits an unsolicited resume does so with the understanding that A&M will have the right to hire that applicant at its discretion without any fee owed to the submitting employment agency, person or entity.
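The demand planning items in this listing mention calculating and monitoring key forecast metrics. Below is a minimal Python sketch of two common KPIs (MAPE-based forecast accuracy and bias) using made-up figures; real calculations would run at the agreed aggregation level (SKU, family, region) from planning-system extracts.

```python
# Minimal sketch of common forecast KPIs with made-up monthly figures.
actuals  = [1200, 950, 1100, 1300, 1250, 990]
forecast = [1100, 1000, 1150, 1200, 1400, 950]

abs_pct_errors = [abs(f - a) / a for a, f in zip(actuals, forecast)]
mape = sum(abs_pct_errors) / len(abs_pct_errors)
forecast_accuracy = 1 - mape                      # common "1 - MAPE" convention
bias = sum(f - a for a, f in zip(actuals, forecast)) / sum(actuals)

print(f"MAPE: {mape:.1%}  accuracy: {forecast_accuracy:.1%}  bias: {bias:+.1%}")
```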

Posted 2 weeks ago

Apply

0.0 - 3.0 years

0 - 0 Lacs

Lakdi ka pul, Hyderabad, Telangana

On-site

MIS Executive - Job Description
The MIS function is critical to getting a realistic view of the operational quality of the Centres, including the Bharosa Centres. The MIS function will be responsible for data collection with respect to attendance, expenditure, feedback-gathering exercises, data dumps from various software applications, etc., for review and analytics. Hands-on experience with MS Excel is an absolute must for this function. Exposure to web forms and business analytics tools will be a great advantage.

Criteria:
- Qualification: Any graduate with 3 years of experience in an MIS function (not Data Entry), with experience in the collection and aggregation of large-volume data from multiple stakeholders.
- Gender: Any
- Experience: report preparation in Excel; making PPTs using various charts and tables; supporting the accounts team in claims and reimbursements management, etc.

Skills:
- MS Excel: Pivot Table, Pivot Chart, Text to Columns, advanced Find and Replace, HLOOKUP, VLOOKUP, data cleaning
- Web forms: creating online forms for data collection (e.g. Google Forms); understanding of various kinds of fields and their usages; live report generation through Google Sheets, etc.
- MS Word: Mail Merge function
- Accounting: helping in passing accounting entries in Tally or Focus; maintaining books of records, including month-wise and year-wise payment vouchers, petty cash register, etc.
- Office basics: folder management; version management of important files; auto backup to cloud; advanced search and find of files; preparing periodic statements

Languages: Should be fluent in Telugu & English. Should have working knowledge of Hindi.

Location: The MIS Executive will work out of the Bharosa CDEW PMU (Bharosa Society) located in the SHE Teams headquarters, Women Safety Wing, Lakdi Ka Pul, Hyderabad. There will be intra-city and occasional inter-city travel within the state for project implementation and monitoring purposes, for which commute will be provided as and when the need arises.

Compensation: The remuneration payable is INR 30,000 per month, with an annual appreciation of not less than 5% based on performance and continuance of the project.

Tenure of the contract: Co-terminus with project funding. The initial contract will be for 1 year. Annual performance appraisals will determine continuity of service, year on year.

Application Process: All applications will be accepted online ONLY. Link to application form (mandatory to submit): https://womensafetywing.telangana.gov.in/careers/mis-executive/

Selection Process: Online Application -> Application Scrutiny -> Short Assignment -> Technical Assessments -> Personal Interview -> Reference Check -> Offer -> Joining. [Note: All steps up to Reference Check are eliminatory in nature.] The position will report to the designated senior officer in the Women Safety Wing, with dotted-line reporting to the three commissioners under whose jurisdiction the new centres are coming up.

Job Type: Full-time
Pay: ₹30,000.00 - ₹31,000.00 per month
Benefits: Provident Fund
Schedule: Fixed shift
Supplemental Pay: Yearly bonus
Ability to commute/relocate: Lakdi ka pul, Hyderabad - 500004, Telangana: Reliably commute or planning to relocate before starting work (Required)
Application Question(s): Did you fill the application form provided in the Job Description?
Education: Bachelor's (Required)
Experience: total work: 3 years (Required)
Application Deadline: 01/08/2025
Expected Start Date: 16/07/2025

Posted 2 weeks ago

Apply

0.0 - 7.0 years

0 Lacs

Bengaluru, Karnataka

On-site

About Us At Thoucentric, we offer end-to-end consulting solutions designed to address the most pressing business challenges across industries. Leveraging deep domain expertise, cutting-edge technology, and a results-driven approach, we help organizations streamline operations, enhance decision-making, and accelerate growth. We are headquartered in Bangalore with presence across multiple locations in India, US, UK, Singapore & Australia Globally. We help clients with Business Consulting, Program & Project Management, Digital Transformation, Product Management, Process & Technology Solutioning and Execution including Analytics & Emerging Tech areas cutting across functional areas such as Supply Chain, Finance & HR, Sales & Distribution across US, UK, Singapore and Australia. Our unique consulting framework allows us to focus on execution rather than pure advisory. We are working closely with marquee names in the global consumer & packaged goods (CPG) industry, new age tech and start-up ecosystem. We have been certified as "Great Place to Work" by AIM and have been ranked as "50 Best Firms for Data Scientists to Work For". We have an experienced consulting team of over 500+ world-class business and technology consultants based across six global locations, supporting clients through their expert insights, entrepreneurial approach and focus on delivery excellence. We have also built point solutions and products through Thoucentric labs using AI/ML in the supply chain space. Job Description We are looking for an experienced Cloud Engineer with a strong foundation in cloud infrastructure, DevOps, monitoring, and cost optimization. The ideal candidate will be responsible for designing scalable architectures, implementing CI/CD pipelines, and managing secure and efficient cloud environments using AWS, GCP, or Azure. Key Responsibilities: Design and deploy scalable, secure, and cost-optimized infrastructure across cloud platforms (AWS, GCP, or Azure) Implement and maintain CI/CD pipelines using tools like Jenkins, GitLab CI/CD, or GitHub Actions Set up infrastructure monitoring, alerting, and logging systems (e.g., CloudWatch, Prometheus, Grafana) Collaborate with development and architecture teams to implement cloud-native solutions Manage infrastructure security, IAM policies, backups, and disaster recovery strategies Drive cloud cost control initiatives and resource optimization Troubleshoot production and staging issues related to infrastructure and deployments Requirements Must-Have Skills: 5-7 years of experience working with cloud platforms (AWS, GCP, or Azure) Strong hands-on experience in infrastructure provisioning and automation Expertise in DevOps tools and practices, especially CI/CD pipelines Good understanding of network configurations, VPCs, firewalls, IAM, and security best practices Experience with monitoring and log aggregation tools Solid knowledge of Linux system administration Familiarity with Git and version control workflows Good to Have: Experience with Infrastructure as Code tools (Terraform, CloudFormation, Pulumi) Working knowledge of Kubernetes or other container orchestration platforms (EKS, GKE, AKS) Exposure to scripting languages like Python, Bash, or PowerShell Familiarity with serverless architecture and event-driven designs Awareness of cloud compliance and governance frameworks Benefits What a Consulting role at Thoucentric will offer you? 
Opportunity to define your own career path, rather than one enforced by a manager. A great consulting environment with a chance to work with Fortune 500 companies and startups alike. A dynamic but relaxed and supportive working environment that encourages personal development. Be part of one extended family: we bond beyond work through sports, get-togethers, common interests, etc. Work in a very enriching environment with an open culture, flat organization and excellent peer group. Be part of the exciting growth story of Thoucentric! Required Skills: DevOps tools, CI/CD pipelines. Practice Name: Labs. Date Opened: 07/15/2025. Work Mode: Hybrid. Job Type: Full time. Industry: Consulting. Corporate Office: Thoucentric, The Hive, Mahadevapura. Zip/Postal Code: 560048. City: Bengaluru. Country: India. State/Province: Karnataka.
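For a concrete flavour of the cloud cost-control and governance work this role describes, here is a minimal Python sketch using boto3 that flags running EC2 instances missing a cost-allocation tag. The region and the "CostCenter" tag key are assumptions for illustration only, not details from the posting.

```python
# A minimal cost-governance sketch, assuming AWS credentials are already
# configured and that "CostCenter" is the (hypothetical) required tag key.
import boto3

REQUIRED_TAG = "CostCenter"

def untagged_instances(region="ap-south-1"):
    """Return IDs of running EC2 instances missing the required cost tag."""
    ec2 = boto3.client("ec2", region_name=region)
    missing = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                if REQUIRED_TAG not in tags:
                    missing.append(instance["InstanceId"])
    return missing

if __name__ == "__main__":
    print("Instances missing cost tag:", untagged_instances())
```

A report like this would typically feed a tagging or rightsizing workflow rather than act on instances directly.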

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

India

Remote

Engineers at Weave play a critical role in building and maintaining robust backend services. This position is central to a key project, requiring strong expertise in production-level Java. While the immediate focus is on Java, engineers will evolve to work with Go (GoLang) microservices as the project progresses. Our team collaborates across departments to coordinate efforts and is responsible for the long-term quality of the code we write and maintain, crafting reliable web services that are deployed with containers on Kubernetes. As integral members of an autonomous, cross-functional team, engineers contribute wherever needed and thrive in a high-trust environment. At Weave, engineers truly enjoy great days, almost every day! This position will be remote in India. Reports to: Senior Director of Engineering.
What You Will Need To Accomplish The Job: 5+ years of experience with back-end languages (e.g. Go, Java, Ruby, Python, C#), with a strong emphasis on production-level Java. Willingness to learn and adapt to Go in the future. Experience building SaaS products at scale. Willingness to participate in an on-call rotation with the rest of your team. Experience working with distributed systems and with inter-service communication protocols and APIs, e.g. REST, protobufs/gRPC, Kafka, NSQ, etc. Experience working with relational databases and SQL. Develop and review design, functional, technical, and/or user documentation, as needed. Contribute to the design, implementation, and architecture of new or re-engineered software. Develop, test, and integrate code for new or existing software of significant complexity. Solid understanding of distributed systems and of building scalable, redundant services.
What Will Make Us Love You (preferred qualifications, including personality traits): Deploying into a public cloud service (especially GCP). Experience with containerization (Docker/Kubernetes). Experience with protobufs/gRPC. Experience with deployments using CI/CD, Jenkins, etc. Experience with pipeline monitoring, metrics, alerting, log aggregation, and tracing. Experience with Prometheus, Grafana, DataDog, etc.
Weave is an equal opportunity employer that is committed to fostering an inclusive workplace where all individuals are valued and supported. We welcome anyone who is hungry to learn, problem-solve and progress regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, or other applicable legally protected characteristics. If you have a disability or special need that requires accommodation, please let us know. All official correspondence will occur through Weave-branded email. We will never ask you to share bank account information, cash a check from us, or purchase software or equipment as part of your interview or hiring process.
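The role itself centres on Java and Go, but the inter-service REST communication it mentions can be illustrated briefly; the sketch below, in Python for compactness, shows a client session that retries transient failures. The service URL is a placeholder, not a Weave API.

```python
# A rough sketch of resilient inter-service REST calls. The endpoint is a
# made-up placeholder; the pattern (bounded retries with backoff) is the point.
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

def make_session() -> requests.Session:
    """Session that retries transient failures with exponential backoff."""
    retry = Retry(
        total=3,
        backoff_factor=0.5,
        status_forcelist=[502, 503, 504],
        allowed_methods=["GET"],
    )
    session = requests.Session()
    session.mount("https://", HTTPAdapter(max_retries=retry))
    return session

if __name__ == "__main__":
    session = make_session()
    resp = session.get("https://example.internal/healthz", timeout=2)
    print(resp.status_code)
```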

Posted 2 weeks ago

Apply

2.0 years

0 Lacs

Delhi, India

On-site

JOB_POSTING-3-72457-2 Job Description Role Title: Manager, Risk Data Governance, Credit Analytics (L09) Company Overview Synchrony (NYSE: SYF) is a premier consumer financial services company delivering one of the industry’s most complete digitally enabled product suites. Our experience, expertise and scale encompass a broad spectrum of industries including digital, health and wellness, retail, telecommunications, home, auto, outdoors, pet and more. We have recently been ranked #2 among India’s Best Companies to Work for by Great Place to Work. We were among the Top 50 India’s Best Workplaces in Building a Culture of Innovation by All by GPTW and Top 25 among Best Workplaces in BFSI by GPTW. We have also been recognized by AmbitionBox Employee Choice Awards among the Top 20 Mid-Sized Companies, ranked #3 among Top Rated Companies for Women, and Top-Rated Financial Services Companies. Synchrony celebrates ~52% women talent. We offer Flexibility and Choice for all employees and provide best-in-class employee benefits and programs that cater to work-life integration and overall well-being. We provide career advancement and upskilling opportunities, focusing on Advancing Diverse Talent to take up leadership roles. Organizational Overview Credit Team decisions credit actions across the lifecycle of a customer – from acquisition to account management to collections and recovery – we work towards managing credit and fraud losses and elevating customer experience through powerful and proprietary insights on customer risk and credit behaviours. The actionable insights are driven by access to numerous alternative data sources, new age technologies, focused strategies, emerging algorithms, and predictive precision. Spread across 10 pillars, the credit team in India caters to the entire gamut of decision sciences, from data management to model development to strategy design, and brings it all to life through technology, and manages within the guardrails of our regulatory requirements. As part of the team, you will have access to some unique product propositions, functional and leadership training, and interaction with the executive leadership team and a myriad of diverse perspectives. Role Summary/Purpose Manager, Risk Data Governance L9, supports credit initiatives related to Risk Data Governance (RDG) including data aggregation, architecture, data analysis, data usage. This individual will serve as a key contributor and co-lead monthly load processes, new product integrations, new data sources from a point of view of credit reporting & modeling business owner. Key Responsibilities Work closely with the 1st line business teams including Information Technology, Marketing, Credit Risk and Operations to improve and validate accuracy of existing RDG fields, through the independent research of data issues, data design logic and user acceptance tests (UAT). Assist in the onboarding of new data sources into risk aggregation data platform. Analyze differences in data, suggest data logic & mapping changes to standardize and cleanse onboarding data sources and execute user acceptance tests for new data. Run and continuously improve risk aggregation data platform, monthly load validation process and visualization reports. Support new product integration into risk aggregation data platform and converting business requirements into new reporting data fields through helping with design and testing. 
Required Skills/Knowledge: Bachelor's degree with a quantitative underpinning and 2+ years of work experience in database management, data governance, analytics or a techno-functional role, or, in lieu of a degree, 4+ years of relevant experience. 2+ years of experience with SAS or SQL. Working experience in Python, with willingness to learn and expand Python skills for data management. Experience with Tableau. Strong communication skills, written and verbal, in a clear, concise manner. Excellent interpersonal, organizational, prioritization and time management skills. Ability to drive decisions based on quantitative analysis and creative thinking.
Desired Skills/Knowledge: Curious, investigative mindset for data and its relationship to business, reporting and financial modeling. Previous experience with business intelligence/data warehousing/visualization platforms (either project management implementation or coding, or both). Understanding of Agile project management methods. Self-starter with organizational and analytical skills. 2+ years of experience with ETL tools like Ab Initio and Unix shell scripting.
Eligibility Criteria: Bachelor's degree with a quantitative underpinning and 2+ years of work experience in database management, data governance, analytics or a techno-functional role, or, in lieu of a degree, 4+ years of relevant experience.
Work Timings: This role qualifies for Enhanced Flexibility and Choice offered in Synchrony India and will require the incumbent to be available between 06:00 AM Eastern Time and 11:30 AM Eastern Time (timings are anchored to US Eastern hours and will adjust twice a year locally). This window is for meetings with India and US teams. The remaining hours will be flexible for the employee to choose. Exceptions may apply periodically due to business needs. Please discuss this with the hiring manager for more details.
For Internal Applicants: Understand the criteria or mandatory skills required for the role before applying. Inform your Manager or HRM before applying for any role on Workday. Ensure that your Professional Profile is updated (fields such as Education, Prior experience, Other skills); it is mandatory to upload your updated resume (Word or PDF format). Must not be on any corrective action plan (Formal/Final Formal, LPP). L4 to L7 employees who have completed 12 months in the organization and 12 months in their current role and level are eligible. L8+ employees who have completed 18 months in the organization and 12 months in their current role and level are eligible. L4+ employees can apply. Grade/Level: 09. Job Family Group: Credit.
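As an illustration of the monthly load validation mentioned in the responsibilities, here is a small, hypothetical pandas sketch; the file name and the "account_id" and "balance" columns are invented for the example and are not Synchrony's actual schema.

```python
# A minimal monthly-load validation sketch in pandas. File and column names
# are hypothetical placeholders; thresholds are illustrative.
import pandas as pd

def validate_monthly_load(path: str) -> dict:
    """Run simple sanity checks on a monthly data extract."""
    df = pd.read_csv(path)
    checks = {
        "row_count": len(df),
        "duplicate_keys": int(df["account_id"].duplicated().sum()),
        "null_balance_pct": round(df["balance"].isna().mean() * 100, 2),
    }
    checks["passed"] = (
        checks["row_count"] > 0
        and checks["duplicate_keys"] == 0
        and checks["null_balance_pct"] < 1.0
    )
    return checks

if __name__ == "__main__":
    print(validate_monthly_load("monthly_extract.csv"))
```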

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Purpose: We are looking for a Senior SQL Developer to join our growing team of BI & analytics experts. The hire will be responsible for expanding and optimizing our data and data queries, as well as optimizing data flow and collection for consumption by our BI & Analytics platform. The ideal candidate is an experienced data query builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The SQL Developer will support our software developers, database architects, data analysts and data scientists on data and product initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. The hire must be self-directed and comfortable supporting the data needs of multiple systems and products. The right candidate will be excited by the prospect of optimizing our company's data architecture to support our next generation of products and data initiatives.
Job Responsibilities (Essential Functions): Create and maintain optimal SQL queries, views, tables and stored procedures. Work together with various business units (BI, Product, Reporting) to develop the data warehouse platform vision, strategy and roadmap. Understand the development of physical and logical data models. Ensure high-performance access to diverse data sources. Encourage the adoption of the organization's frameworks by providing documentation, sample code and developer support. Communicate progress on the adoption and effectiveness of the developed frameworks to department heads and managers.
Required Education And Experience: Bachelor's or Master's degree, or an equivalent combination of education and experience in a relevant field. Understanding of T-SQL, data warehouses, star schema, data modeling, OLAP, SQL and ETL. Experience in creating tables, views and stored procedures. Understanding of several BI and reporting platforms, and awareness of industry trends and direction in BI/reporting and their applicability to the organization's product strategies. Skilled in multiple database platforms, including SQL Server and MySQL. Knowledgeable in source control and project management tools like Azure DevOps, Git and JIRA. Familiarity with using SonarQube for clean T-SQL coding practices. Familiarity with DevOps best practices and automation of documentation, testing, build, deployment, configuration and monitoring. Communication skills: it is vital that applicants have exceptional written and spoken communication skills, with active listening abilities, to contribute to strategic decisions and advise senior management on specialized technical issues that will have an impact on the business. Strong team-building skills: the ability to provide direction for complex projects, mentor junior team members, and communicate the organization's preferred technologies and frameworks across development teams. Experience: a candidate for this position must have at least 5 years of experience working as a SQL Developer in a data warehousing role within a fast-paced and complex business environment. The candidate must also have experience developing schema data models in a data warehouse environment, experience with full implementation of the system development lifecycle (SDLC), and proven, successful experience working with concepts of data integration, consolidation, enrichment and aggregation.
A suitable candidate will also have a strong, demonstrated understanding of dimensional modeling and similar data warehousing techniques, as well as experience working with relational or multi-dimensional databases and business intelligence architectures. Analytical Skills: a candidate for the position will have passion and skill for research and analytics, as well as a passion for data management tools and technologies. The candidate must have the ability to perform detailed data analysis, for example in determining the content, structure and quality of data through the examination of data samples and source systems. The hire will additionally have the ability to troubleshoot data warehousing issues and quickly resolve them.
Expected Competencies: Detail-oriented with strong organizational skills. Attention to programming style and neatness. Strong English communication skills, both written and verbal. Ability to train and mentor junior colleagues with patience and tangible results.
Work Timings: This is a full-time position. Days and hours of work are Monday through Friday, and should be flexible to support different time zones, ranging between 12 PM IST and 9 PM IST. The work schedule may include evening hours or weekends due to client needs, per manager instructions. This role will be working in hybrid mode and will require at least 2 days' work from the office at Hyderabad. Occasional evening and weekend work may be expected in case of job-related emergencies or client needs.
EEO Statement: Cendyn provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, sex, national origin, age, disability or genetics. In addition to federal law requirements, Cendyn complies with applicable state and local laws governing nondiscrimination in employment in every location in which the company has facilities. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation and training. Cendyn expressly prohibits any form of workplace harassment based on race, color, religion, gender, sexual orientation, gender identity or expression, national origin, age, genetic information, disability, or veteran status. Improper interference with the ability of Cendyn's employees to perform their job duties may result in discipline up to and including discharge.
Other Duties: Please note this job description is not designed to cover or contain a comprehensive listing of activities, duties or responsibilities that are required of the employee for this job. Duties, responsibilities and activities may change at any time, with or without notice.
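To make the view-creation and T-SQL expectations more concrete, here is an illustrative sketch: a simple aggregate view over a made-up star schema (FactBooking/DimProperty), deployed through pyodbc. The table names and connection string are placeholders, not Cendyn's actual warehouse.

```python
# Illustrative only: a T-SQL aggregate view over a hypothetical star schema,
# executed via pyodbc against SQL Server 2016 SP1 or later (CREATE OR ALTER).
import pyodbc

CREATE_VIEW_SQL = """
CREATE OR ALTER VIEW dbo.vw_RevenueByProperty AS
SELECT
    d.PropertyName,
    SUM(f.RoomRevenue) AS TotalRevenue,
    COUNT_BIG(*)       AS BookingCount
FROM dbo.FactBooking AS f
JOIN dbo.DimProperty AS d
    ON d.PropertyKey = f.PropertyKey
GROUP BY d.PropertyName;
"""

def deploy_view(conn_str: str) -> None:
    """Create or refresh the reporting view on the target SQL Server."""
    with pyodbc.connect(conn_str, autocommit=True) as conn:
        conn.execute(CREATE_VIEW_SQL)

if __name__ == "__main__":
    # Placeholder DSN; substitute real server/database credentials.
    deploy_view("DSN=warehouse;Trusted_Connection=yes;")
```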

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

About Us: Paytm is India's leading digital payments and financial services company, which is focused on driving consumers and merchants to its platform by offering them a variety of payment use cases. Paytm provides consumers with services like utility payments and money transfers, while empowering them to pay via Paytm Payment Instruments (PPI) like Paytm Wallet, Paytm UPI, Paytm Payments Bank Net banking, Paytm FASTag and Paytm Postpaid - Buy Now, Pay Later. To merchants, Paytm offers acquiring devices like Soundbox, EDC, QR and Payment Gateway, where payment aggregation is done through PPI and also other banks' financial instruments. To further enhance merchants' business, Paytm offers merchants commerce services through advertising and the Paytm Mini app store. Operating on this platform leverage, the company then offers credit services such as merchant loans, personal loans and BNPL, sourced by its financial partners.
About the team: Paytm Ads is the digital advertising vertical that offers innovative ad solutions to clients across industries. It offers advertisers the opportunity to engage with 300Mn+ users who interact with over 200 payment and retail services, online and offline, offered on the Paytm app. Paytm Ads maps user transactions to their lifestyle choices and creates customized segmentation cohorts for sharply targeted ad campaigns to the most relevant TG.
Expectations/Requirements: 1. Proficient in SQL/Hive and deep expertise in building scalable business reporting solutions. 2. Past experience in optimizing business strategy, product or process using data & analytics. 3. Working knowledge of at least one programming language like Scala, Java or Python. 4. Working knowledge of dashboard visualization; ability to execute cross-functional initiatives. 5. Maintaining product & funnel dashboards and metrics on Pulse, Looker, Superset. 6. Campaign analytics and debugging. 7. Data reporting for business asks, MBR, Lucky Wheel revenue, growth experiments.
Superpowers/Skills that will help you succeed in this role: 1. 5 to 9 years of work experience in a business intelligence and analytics role in the financial services, e-commerce, consulting or technology domain. 2. Demonstrated ability to directly partner with business owners to understand product requirements. 3. Effective spoken and written communication to senior audiences, including strong data presentation and visualization skills. 4. Prior success in working with extremely large datasets using big data technologies. 5. Detail-oriented, with an aptitude for solving unstructured problems.
Why join us: A collaborative, output-driven program that brings cohesiveness across businesses through technology. Solid 360-degree feedback from your peer teams on your support of their goals. With an enviable 500 mn+ registered users, 21 mn+ merchants and depth of data in our ecosystem, we are in a unique position to democratize credit for deserving consumers & merchants, and we are committed to it. India's largest digital lending story is brewing here. It's your opportunity to be a part of the story.
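The funnel dashboards mentioned above come down to step-wise distinct-user counts and conversion rates; the short pandas sketch below shows one way such a metric could be computed. The event names and columns are hypothetical, not Paytm's schema.

```python
# A small funnel-metrics sketch; step names and columns are made up.
import pandas as pd

FUNNEL_STEPS = ["ad_impression", "ad_click", "landing_view", "conversion"]

def funnel_summary(events: pd.DataFrame) -> pd.DataFrame:
    """Distinct users at each funnel step plus step-to-step conversion %."""
    users = (
        events[events["event"].isin(FUNNEL_STEPS)]
        .groupby("event")["user_id"]
        .nunique()
        .reindex(FUNNEL_STEPS, fill_value=0)
    )
    summary = users.to_frame("users")
    summary["conversion_pct"] = (
        summary["users"].div(summary["users"].shift()).mul(100).round(1)
    )
    return summary

if __name__ == "__main__":
    demo = pd.DataFrame(
        {
            "user_id": [1, 1, 1, 1, 2, 2, 3],
            "event": [
                "ad_impression", "ad_click", "landing_view", "conversion",
                "ad_impression", "ad_click", "ad_impression",
            ],
        }
    )
    print(funnel_summary(demo))
```

In practice the same aggregation would usually be expressed in SQL/Hive and surfaced through a dashboarding tool.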

Posted 2 weeks ago

Apply

0 years

0 Lacs

Kochi, Kerala, India

On-site

Job Description
Role Overview: We are looking for a Data Engineer who can efficiently fetch, structure, and manage blockchain data from The Graph and other sources. You will be responsible for setting up ETL pipelines, transforming raw blockchain data into structured formats, and making it accessible for backend APIs and internal analytics. Your work will empower our backend developers to build efficient APIs and enable meaningful data visualizations for users.
Key Responsibilities: Fetch and process blockchain data from The Graph and other sources. Design and implement ETL (Extract, Transform, Load) pipelines to structure raw data. Store and optimize data for efficient querying and API consumption. Identify patterns, trends, and insights from blockchain data to enhance analytics. Ensure data integrity, consistency, and performance in a scalable architecture. Work closely with the backend team to provide well-structured data for APIs. Optimize database performance for real-time and historical analytics. Implement caching, indexing, and aggregation strategies for large-scale data processing.
Required Skills & Experience: Knowledge of database management (SQL, NoSQL, time-series DBs, or data lakes). Strong experience with The Graph (Substreams) is preferred. Expertise in ETL processes and data pipeline management. Experience with data modeling and structuring for analytics & API consumption. Proficiency in Python, Node.js, or Rust for data processing. Ability to detect patterns and trends in blockchain transaction data. Experience handling large-scale datasets efficiently.
Nice to Have: Experience with blockchain indexing beyond The Graph (custom indexers, RPC data extraction). Prior experience in financial or DeFi data analytics. Familiarity with machine learning for pattern detection in financial transactions.
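As a rough illustration of the fetch-and-load work described, the sketch below queries a placeholder subgraph GraphQL endpoint and persists the rows to SQLite. The endpoint URL and the "swaps" entity are invented for the example and do not refer to a real deployment.

```python
# A bare-bones extract-and-load sketch: query a subgraph's GraphQL endpoint
# and persist rows to SQLite. URL and entity names are placeholders.
import sqlite3
import requests

SUBGRAPH_URL = "https://example.com/subgraphs/name/placeholder"
QUERY = """
{
  swaps(first: 100, orderBy: timestamp, orderDirection: desc) {
    id
    timestamp
    amountUSD
  }
}
"""

def extract() -> list:
    resp = requests.post(SUBGRAPH_URL, json={"query": QUERY}, timeout=10)
    resp.raise_for_status()
    return resp.json()["data"]["swaps"]

def load(rows, db_path="swaps.db") -> None:
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS swaps (id TEXT PRIMARY KEY, ts INTEGER, amount_usd REAL)"
        )
        conn.executemany(
            "INSERT OR REPLACE INTO swaps VALUES (?, ?, ?)",
            [(r["id"], int(r["timestamp"]), float(r["amountUSD"])) for r in rows],
        )

if __name__ == "__main__":
    load(extract())
```

A production pipeline would add batching, incremental cursors, and the caching/indexing strategies the posting mentions.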

Posted 2 weeks ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Data Quality Engineer JD: Collaborate with product, engineering, and customer teams to gather requirements and develop a comprehensive data quality strategy. Lead data governance processes, including data preparation, obfuscation, integration, slicing, and quality control. Test data pipelines, ETL processes, APIs, and system performance to ensure reliability and accuracy. Prepare test data sets, conduct data profiling, and perform benchmarking to identify inconsistencies or inefficiencies. Create and implement strategies to verify the quality of data products and ensure alignment with business standards. Set up data quality environments and applications in compliance with defined standards, contributing to CI/CD process improvements. Participate in the design and maintenance of data platforms and build automation frameworks for data quality testing, including resolving potential issues. Provide support in troubleshooting data-related issues, ensuring timely resolution. Ensure all data quality processes and tools align with organizational goals and industry best practices. Collaborate with stakeholders to enhance data platforms and optimize data quality workflows.
Requirements: Bachelor's degree in Computer Science or a related technical field involving coding, such as physics or mathematics. At least three years of hands-on experience in Data Management, Data Quality verification, Data Governance, or Data Integration. Strong understanding of data pipelines, data lakes, and ETL testing methodologies. Proficiency in CI/CD principles and their application in data processing. Comprehensive knowledge of SQL, including aggregation and window functions. Experience in scripting with Python or similar programming languages. Databricks and Snowflake experience is a must, with good exposure to notebooks, SQL editors, etc. Experience in developing test automation frameworks for data quality assurance. Familiarity with Big Data principles and their application in modern data systems. Experience in data analysis and requirements validation, including gathering and interpreting business needs. Experience in maintaining QA environments to ensure smooth testing and deployment processes. Hands-on experience in test planning, test case design, and test result reporting in data projects. Strong analytical skills, with the ability to approach problems methodically and communicate solutions effectively. English proficiency at B2 level or higher, with excellent verbal and written communication skills.
Nice to have: Familiarity with advanced data visualization tools to enhance reporting and insights. Experience in working with distributed data systems and frameworks like Hadoop.
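Since the role stresses SQL aggregation and window functions for quality checks, here is a self-contained sketch that flags duplicate business keys with ROW_NUMBER(). SQLite is used only so the example runs anywhere (it needs a build with window-function support, 3.25+), and the "orders" table and columns are made up.

```python
# A tiny data-quality sketch: flag duplicate business keys with a window
# function. Table and column names are illustrative only.
import sqlite3

DUPLICATE_CHECK_SQL = """
SELECT order_id, loaded_at
FROM (
    SELECT
        order_id,
        loaded_at,
        ROW_NUMBER() OVER (PARTITION BY order_id ORDER BY loaded_at DESC) AS rn
    FROM orders
)
WHERE rn > 1;
"""

def run_check() -> list:
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (order_id TEXT, loaded_at TEXT)")
    conn.executemany(
        "INSERT INTO orders VALUES (?, ?)",
        [("A1", "2025-07-01"), ("A1", "2025-07-02"), ("B7", "2025-07-01")],
    )
    return conn.execute(DUPLICATE_CHECK_SQL).fetchall()

if __name__ == "__main__":
    # Expect one row: the older duplicate of order A1.
    print(run_check())
```

The same query shape runs unchanged on Databricks SQL or Snowflake against a real table.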

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Surat, Gujarat, India

On-site

About the job About Avinyaa EdTech: At Avinyaa EdTech, we’re not just building a product — we’re building possibilities. We are an education technology company committed to making future-ready careers accessible for everyone. Through a learner-first approach, we provide skilling pathways, mentorship, and practical career support to help individuals unlock their full potential. We’re driven by a team of passionate technologists, educators, and designers working on high-impact solutions. If you’re excited about technology with a purpose and love solving real-world problems, you’ll feel right at home here. About the Role: We’re hiring a Backend Developer who’s confident working with frameworks like FastAPI and Flask , and has hands-on experience with MongoDB . In this role, you’ll design and build scalable backend services that power our core learning and career platforms. You’ll work closely with front-end developers, product designers, and QA engineers to deliver reliable, high-performance APIs and backend logic. This role is ideal for someone with 3–4 years of strong backend development experience , who enjoys clean architecture, performance tuning, and working in a fast-paced environment. Key Responsibilities: · Design, develop, and maintain robust backend systems using Python (FastAPI/Flask) · Build RESTful APIs and integrate with frontend, mobile, and third-party systems · Work with MongoDB for data modeling, query optimization, and schema design · Write clean, testable, and well-documented code following best practices · Optimize performance, scalability, and security of backend services · Collaborate closely with product managers, frontend developers, and QA engineers · Troubleshoot bugs, identify bottlenecks, and implement long-term fixes · Participate in code reviews, architecture discussions, and sprint planning Education Requirements: · Bachelor’s degree in Computer Science, Information Technology, Engineering, or a related field · Certifications in Python or backend development are a bonus What You Bring: · 3–4 years of backend development experience using Python · Proficiency in FastAPI and/or Flask frameworks · Strong understanding of REST API design , security, and integration · Experience working with MongoDB (Mongoose, indexes, aggregation pipelines, etc.) · Familiarity with version control (Git), CI/CD pipelines, and cloud deployments · Ability to write unit and integration tests using Pytest or similar frameworks · Clear understanding of asynchronous programming and performance tuning in Python · Team-oriented mindset and a passion for writing clean, scalable code Bonus if you have: · Experience working in a startup or edtech/SaaS environment · Knowledge of Docker, Kubernetes, or deployment on platforms like AWS/GCP · Familiarity with SQL databases or hybrid setups Why Join Avinyaa EdTech? · Mission-led, purpose-driven culture. · A chance to work at the intersection of technology and social impact.
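To ground the FastAPI plus MongoDB aggregation-pipeline requirement, here is a compact sketch of one possible endpoint; the database, collection, and field names ("enrollments", "course_id", "progress") are illustrative, not Avinyaa's actual schema.

```python
# A compact sketch: a FastAPI route backed by a MongoDB aggregation pipeline.
# Connection string, collection, and field names are placeholders.
from fastapi import FastAPI
from pymongo import MongoClient

app = FastAPI()
client = MongoClient("mongodb://localhost:27017")
enrollments = client["learning"]["enrollments"]

@app.get("/courses/{course_id}/progress-summary")
def progress_summary(course_id: str) -> dict:
    """Average learner progress and enrollment count for one course."""
    pipeline = [
        {"$match": {"course_id": course_id}},
        {
            "$group": {
                "_id": "$course_id",
                "learners": {"$sum": 1},
                "avg_progress": {"$avg": "$progress"},
            }
        },
    ]
    result = list(enrollments.aggregate(pipeline))
    return result[0] if result else {"_id": course_id, "learners": 0, "avg_progress": None}
```

Saved as app.py, this could be served locally with `uvicorn app:app --reload`.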

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Gurugram, Haryana, India

On-site

🚨 Urgent Hiring: Java Spring Boot Developer – 7+ Years Experience | Gurgaon (On-site)
📍 Location: Gurgaon, India. Type: Full-Time (preferred).
Job Summary: We are seeking a highly skilled and motivated Java Spring Boot Developer to join our engineering team. This role focuses on developing and deploying scalable, event-driven applications on OpenShift, with data ingestion from Apache Kafka and transformation logic written in Apache Camel. The ideal candidate should possess a strong understanding of enterprise integration patterns, stream processing, and protocols, and have experience with observability tools and concepts in AI-enhanced applications.
Key Responsibilities: Design, develop, and deploy Java Spring Boot (must) applications on OpenShift (willingness to learn Red Hat OpenShift, or existing Kubernetes experience). Build robust data pipelines with Apache Kafka (must) for high-throughput ingestion and real-time processing. Implement transformation and routing logic using Apache Camel (basic knowledge, with willingness to learn) and Enterprise Integration Patterns (EIPs). Develop components that interface with various protocols, including HTTP, JMS, and database systems (SQL/NoSQL). Utilize Apache Flink or similar tools for complex event and stream processing where necessary. Integrate observability solutions (e.g., Prometheus, Grafana, ELK, OpenTelemetry) to ensure monitoring, logging, and alerting. Collaborate with AI/ML teams to integrate or enable AI-driven capabilities within applications. Write unit and integration tests, participate in code reviews, and support CI/CD practices. Troubleshoot and optimize application performance and data flows in production environments.
Required Skills & Qualifications: 5+ years of hands-on experience in Java development with strong proficiency in Spring Boot. Solid experience with Apache Kafka (consumer/producer patterns, schema registry; Kafka Streams is a plus). Experience with stream processing technologies such as Apache Flink, Kafka Streams, or Spark Streaming. Proficiency in Apache Camel and understanding of EIPs (routing, transformation, aggregation, etc.). Strong grasp of various protocols (HTTP, JMS, TCP) and messaging paradigms. In-depth understanding of database concepts, both relational and NoSQL. Knowledge of observability tools and techniques: logging, metrics, tracing. Exposure to AI concepts (basic understanding of ML model integration, AI-driven decisions, etc.).
⚠️ Important Notes: Only candidates with a notice period of 20 days or less will be considered. A PF account is a must for joining full-time. If you have already applied for this job with us, please do not submit a duplicate application. The budget is limited, and the maximum CTC depends on years of experience and expertise.
📬 How to Apply: Email your resume to career@strive4x.net with the subject line: Java Spring Boot Developer - Gurgaon. Please include the following details: Full Name, Mobile Number, Current Location, Total Experience (in years), Current Company, Current CTC, Expected CTC, Notice Period, Are you open to relocating to Gurgaon (Yes/No)?, Do you have a PF account (Yes/No)?, Do you prefer full-time, contract, or both?
👉 Know someone who fits the role? Tag or share this with them. #JavaJobs #SpringBoot #GurgaonJobs #Kafka #ApacheCamel #OpenShift #HiringNow #SoftwareJobs #SeniorDeveloper #Microservices #Strive4X
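The role itself is Java/Spring Boot with Apache Camel; purely as an illustration of the consume-transform-route flow it describes, here is a short Python sketch using kafka-python. The topic names and payload shape are assumptions, not details from the posting.

```python
# Illustrative only: consume -> transform -> route, sketched with kafka-python.
# In the actual role this would be a Spring Boot service with Camel routes.
import json
from kafka import KafkaConsumer, KafkaProducer

consumer = KafkaConsumer(
    "orders.raw",
    bootstrap_servers="localhost:9092",
    group_id="order-transformer",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda d: json.dumps(d).encode("utf-8"),
)

def transform(event: dict) -> dict:
    """Normalise field names and derive a total, akin to a Camel transform step."""
    return {
        "order_id": event["id"],
        "total_paise": int(round(event["amount"] * 100)),
        "currency": event.get("currency", "INR"),
    }

for message in consumer:
    # Route every transformed record to a downstream topic.
    producer.send("orders.normalised", transform(message.value))
```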

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Foxit is remaking the way the world interacts with documents through advanced PDF technology and tools. We are a leading global software provider of fast, affordable, and secure PDF solutions that are used by millions of people worldwide. Winner of numerous awards, Foxit has customers in more than 200 countries and global operations. We have a complete product line and an exciting and aggressive development schedule. Our proven PDF technology is disrupting the status quo establishment and has accelerated our company growth. We are proud to list as customers Google, Amazon, and NASDAQ, and with your skills and help, we plan to add many more. Foxit has offices all over the world, including locations in the US, Asia, Europe, and Australia. For more information, please visit https://www.foxit.com . Role Overview: As a DevOps Engineer , you will serve as a key technical liaison between Foxit’s global production environments and our China-based development teams. Your mission is to ensure seamless cross-border collaboration by investigating complex issues, facilitating secure and compliant debugging workflows, and enabling efficient delivery through modern DevOps and cloud infrastructure practices. This is a hands-on, hybrid role requiring deep expertise in application development, cloud operations, and diagnostic tooling. You'll work across production environments to maintain business continuity, support rapid issue resolution, and empower teams working under data access and sovereignty constraints. Key Responsibilities: Cross-Border Development Support Investigate complex, high-priority production issues inaccessible to China-based developers. Build sanitized diagnostic packages and test environments to enable effective offshore debugging. Lead root cause analysis for customer-impacting issues across our Java and PHP-based application stack. Document recurring patterns and technical solutions to improve incident response efficiency. Partner closely with China-based developers to maintain architectural alignment and system understanding. Cloud Infrastructure & DevOps Manage containerized workloads (Docker/Kubernetes) in AWS and Azure; optimize performance and cost. Support deployment strategies (blue-green, canary, rolling) and troubleshoot CI/CD pipelines (Jenkins, GitHub Actions, GitLab CI). Implement and manage Infrastructure as Code using Terraform (multi-cloud), with CloudFormation or ARM Templates as a plus. Support observability through tools like New Relic, CloudWatch, Azure Monitor, and log aggregation systems. Automate environment provisioning, monitoring, and diagnostics using Python, Bash, and PowerShell. Collaboration & Communication Translate production symptoms into actionable debugging tasks for teams without access to global environments. Work closely with database, QA, and SRE teams to resolve infrastructure or architectural issues. Ensure alignment with global data compliance policies (SOC2, NSD-104, GDPR) when sharing data across borders. Communicate technical issues and resolutions clearly to both technical and non-technical stakeholders. Qualifications: Technical Skills: Languages: Advanced in Java and PHP (Spring Boot, YII); familiarity with JavaScript a plus. Architecture: Experience designing and optimizing backend microservices and APIs. Cloud Platforms: Hands-on with AWS (EC2, Lambda, RDS) and Azure (VMs, Functions, SQL DB). Containerization: Docker & Kubernetes (EKS/AKS); Helm experience a plus. IaC & Automation: Proficient in Terraform; scripting with Python/Bash. 
DevOps: Familiar with modern CI/CD pipelines and automated testing (Cypress, Playwright). Databases & Messaging: MySQL, MongoDB, Redis, RabbitMQ.
Professional Experience Required: A minimum of 6 years of full-stack or backend development experience in high-concurrency systems. Strong understanding of system design, with hands-on experience building and maintaining cloud infrastructure using AWS and/or Azure, and of global software deployment practices. Experience designing and deploying microservices architectures using Docker and Kubernetes (EKS/AKS). Strong background in Infrastructure as Code (IaC) using Terraform, with scripting in Python/Bash for automation. Experience working with databases (MySQL, MongoDB, Redis) in high-concurrency environments. Experience working in global, distributed engineering teams with data privacy or access restrictions.
Preferred: Exposure to compliance frameworks (SOC 2, GDPR, NSD-104, ISO 27001, HIPAA). Familiarity with cloud networking, CDN configuration, and cost optimization strategies. Experience with tools such as Postman, REST Assured, or security testing frameworks.
Why Foxit? Work at the intersection of development and operations on a global scale. Be a trusted technical enabler for distributed teams facing real-world constraints. Join a high-impact team modernizing cloud infrastructure for enterprise-grade document solutions. Competitive compensation, professional development programs, and a collaborative culture.
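The "sanitized diagnostic packages" responsibility is essentially careful redaction before data crosses borders; the sketch below shows a minimal, hypothetical version that masks emails and IPv4 addresses in a log file. It is not Foxit's actual tooling, and real sanitization would cover far more identifiers.

```python
# A simplified take on sanitizing diagnostics: redact emails and IPv4
# addresses from a log before sharing it. Patterns and paths are illustrative.
import re
from pathlib import Path

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def sanitize(text: str) -> str:
    """Replace emails and IPv4 addresses with redaction markers."""
    text = EMAIL.sub("<email-redacted>", text)
    return IPV4.sub("<ip-redacted>", text)

def sanitize_file(src: str, dst: str) -> None:
    Path(dst).write_text(sanitize(Path(src).read_text(encoding="utf-8")), encoding="utf-8")

if __name__ == "__main__":
    sample = "2025-07-15 ERROR user jane.doe@example.com from 10.42.0.7 failed login"
    print(sanitize(sample))
```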

Posted 2 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies