
129 Anomaly Detection Jobs

Set up a job alert
JobPe aggregates listings for easy application access, but you apply directly on the original job portal.

4.0 - 8.0 years

22 - 27 Lacs

Bengaluru

Work from Office

About Zscaler

Serving thousands of enterprise customers around the world including 45% of Fortune 500 companies, Zscaler (NASDAQ: ZS) was founded in 2007 with a mission to make the cloud a safe place to do business and a more enjoyable experience for enterprise users. As the operator of the world’s largest security cloud, Zscaler accelerates digital transformation so enterprises can be more agile, efficient, resilient, and secure. The pioneering, AI-powered Zscaler Zero Trust Exchange™ platform, which is found in our SASE and SSE offerings, protects thousands of enterprise customers from cyberattacks and data loss by securely connecting users, devices, and applications in any location. Named a Best Workplace in Technology by Fortune and others, Zscaler fosters an inclusive and supportive culture that is home to some of the brightest minds in the industry. If you thrive in an environment that is fast-paced and collaborative, and you are passionate about building and innovating for the greater good, come make your next move with Zscaler.

Our Engineering team built the world's largest cloud security platform from the ground up, and we keep building. With more than 100 patents and big plans for enhancing services and increasing our global footprint, the team has made us and our multitenant architecture today's cloud security leader, with more than 15 million users in 185 countries. Bring your vision and passion to our team of cloud architects, software engineers, security experts, and more who are enabling organizations worldwide to harness speed and agility with a cloud-first strategy.

We're looking for an experienced Staff Machine Learning Software Engineer to join our digital experience team.
Reporting to the Director, you'll be responsible for:
- Leading identification and resolution of performance issues by developing advanced AI/ML models that pinpoint root causes of poor experience and detect performance bottlenecks for users
- Developing, maintaining, and refining predictive models to forecast user behavior, system performance, and potential friction points in the digital experience
- Implementing advanced AI/ML algorithms to detect and forecast anomalies in user experience across multiple dimensions
- Overseeing the entire lifecycle of ML projects, including analysis, training, testing, building, and deploying ML models into production environments
- Designing and creating compelling visualizations to effectively communicate findings and insights to both technical and non-technical stakeholders

What We're Looking for (Minimum Qualifications)
- A Bachelor's or (preferably) Master's degree in Computer Science, Data Science, Statistics, or a related field, with 6+ years of professional experience in data science or a related role
- Proficiency with data science tools and platforms such as Python, R, TensorFlow, SQL, and related libraries and frameworks
- Strong experience with networking and endpoint observability systems
- Expertise in multi-dimensional anomaly detection algorithms, with a specific focus on time series datasets
- Strong experience in building and deploying ML models in production environments, including model orchestration using tools like Kubernetes and Airflow

What Will Make You Stand Out (Preferred Qualifications)
- Published research or contributions to the digital experience, end-user observability, networking, or data science community
- Demonstrated experience in the monitoring space, with a deep understanding of user experience metrics, monitoring tools, and methodologies
- Experience designing complex systems and scaling ML models on large-scale distributed systems

#LI-Hybrid #LI-AN4

At Zscaler, we are committed to building a team that reflects the
communities we serve and the customers we work with. We foster an inclusive environment that values all backgrounds and perspectives, emphasizing collaboration and belonging. Join us in our mission to make doing business seamless and secure.

Our Benefits program is one of the most important ways we support our employees. Zscaler proudly offers comprehensive and inclusive benefits to meet the diverse needs of our employees and their families throughout their life stages, including:
- Various health plans
- Time off plans for vacation and sick time
- Parental leave options
- Retirement options
- Education reimbursement
- In-office perks, and more!

Learn more about Zscaler’s Future of Work strategy, hybrid working model, and benefits here.

By applying for this role, you adhere to applicable laws, regulations, and Zscaler policies, including those related to security and privacy standards and guidelines. Zscaler is committed to providing equal employment opportunities to all individuals. We strive to create a workplace where employees are treated with respect and have the chance to succeed. All qualified applicants will be considered for employment without regard to race, color, religion, sex (including pregnancy or related medical conditions), age, national origin, sexual orientation, gender identity or expression, genetic information, disability status, protected veteran status, or any other characteristic protected by federal, state, or local laws. See more information by clicking on the Know Your Rights: Workplace Discrimination is Illegal link.

Pay Transparency: Zscaler complies with all applicable federal, state, and local pay transparency rules. Zscaler is committed to providing reasonable support (called accommodations or adjustments) in our recruiting processes for candidates who are differently abled, have long-term conditions, mental health conditions or sincerely held religious beliefs, or who are neurodivergent or require pregnancy-related support.

Posted 1 day ago

Apply

2.0 - 6.0 years

0 Lacs

Karnataka

On-site

Role Overview: You have an amazing opportunity in a global organization like Linde, where the possibilities are limitless. You will have the chance to go beyond your job description and contribute towards bettering the people you work with, the communities you serve, and the world we all live in. Your role will involve analyzing operational data, supporting digital transformation, managing key applications, and contributing to AI-driven optimization across Linde's global production assets.

Key Responsibilities:
- Collect, clean, and analyze large datasets from industrial operations, including process data and maintenance logs.
- Work with cross-functional teams to identify opportunities for process improvements and operational efficiencies.
- Support the validation of machine learning models for predictive maintenance, energy optimization, and anomaly detection.
- Lead and/or support the maintenance and improvement of key global applications, visualize data, and present actionable insights to business stakeholders.
- Assist in building dashboards and reporting tools using platforms like Power BI or Tableau.
- Collaborate closely with Data Scientists, Process Engineers, and IT teams to operationalize data-driven solutions, and stay updated on digital trends in process industries to suggest innovative improvements.

Qualifications Required:
- Degree in Chemical Engineering, Process Engineering, or a related field; a master's or PhD preferred.
- Minimum 2 years of experience in industrial operations, process optimization, and data analytics.
- Experience in the industrial gases sector or process manufacturing would be a plus.
- Foundational experience in machine learning; managing and supporting software applications; Python, MATLAB, and SQL for data analysis; and data visualization tools (Power BI, Tableau, Grafana, etc.).
- Strong analytical skills, experience working with large datasets, and a solid understanding of industrial processes and KPIs.
- Excellent communication skills and the ability to work in international, cross-functional teams.

Posted 2 days ago

Apply

7.0 - 11.0 years

0 Lacs

Hyderabad, Telangana

On-site

Role Overview: As the Director of Data Insights & Automation at Statistics & Data Corporation (SDC), your primary responsibility will be to direct the development and maintenance of the systems, infrastructure, and personnel that enable reliable, cost-effective automation and machine learning solutions within SDC and for external partners and clients. You will oversee the cleaning, processing, storing, and analyzing of clinical data for the development of Artificial Intelligence (AI) and Machine Learning (ML) algorithms that support a better understanding of new therapies' safety and efficacy. Additionally, you will lead software development activities to drive automation across SDC departments, reducing errors and increasing efficiency.

Key Responsibilities:
- Oversee day-to-day activities involving data science, data engineering, automation, and business intelligence
- Develop standard metrics demonstrating model performance, robustness, and validity
- Engage with various internal departments to strategically identify areas to apply AI and/or ML for increased process efficiency
- Ensure practical deployment of algorithms with minimal resource usage and maximum ease of use
- Prototype new ideas/technologies and develop proofs of concept and demos
- Develop standard operating procedures for the use of artificial intelligence within clinical trials
- Communicate a roadmap of activities and updates quarterly to executive management
- Ensure timelines and delivery are met, raising issues early for any risks
- Perform other related duties incidental to the work described herein

Qualifications Required:
- Experience with AI frameworks such as TensorFlow, MXNet, Theano, Keras, PyTorch, and Caffe
- Proficiency in Python and the software development process, with evidence of delivering production-level code
- Familiarity with AI algorithms for natural language processing, classification, clustering, dimensionality reduction, and anomaly detection
- Experience in applying AI to biomedical data analysis preferred
- Ability to develop and deliver presentations and communicate effectively in writing and verbally
- Capability to identify issues, present problems, and implement solutions
- Good leadership, organizational, and time management skills, with the ability to multi-task
- Strong interpersonal communication and presentation skills

Additional Company Details: Statistics & Data Corporation (SDC) is a specialized Contract Research Organization (CRO) that has provided top-tier clinical trial services to pharmaceutical, biologic, and medical device/diagnostic companies since 2005. SDC offers technology-enabled service offerings that provide clients with clinical services expertise and the technology necessary for successful clinical trials. The company is committed to developing employees, providing growth opportunities, career advancement, flexible work schedules, an engaging work culture, and employee benefits. SDC's recognition program is tied to its core values, fostering a sense of belonging through fun and engaging activities. With a global presence and diverse perspectives, SDC continues to grow and innovate to successfully support client and employee needs.

Posted 3 days ago

Apply

6.0 - 8.0 years

6 - 8 Lacs

Bengaluru, Karnataka, India

On-site

- Strong understanding of industrial processes, sensor data, and IoT platforms, essential for building effective predictive maintenance models.
- Experience translating theoretical concepts into engineered features, with a demonstrated ability to create features capturing important events or transitions within the data.
- Expertise in crafting custom features that highlight unique patterns specific to the dataset or problem, enhancing model predictive power.
- Ability to combine and synthesize information from multiple data sources to develop more informative features.
- Advanced knowledge of Apache Spark (PySpark, SparkSQL, SparkR) and distributed computing, demonstrated through efficient processing and analysis of large-scale datasets.
- Proficiency in Python, R, and SQL, with a proven track record of writing optimized and efficient Spark code for data processing and model training.
- Hands-on experience with cloud-based machine learning platforms such as AWS SageMaker and Databricks, showcasing scalable model development and deployment.
- Demonstrated capability to develop and implement custom statistical algorithms tailored to specific anomaly detection tasks.
- Proficiency in statistical methods for identifying patterns and trends in large datasets, essential for predictive maintenance.
- Demonstrated expertise in engineering features to highlight deviations or faults for early detection.
- Proven leadership in managing predictive maintenance projects from conception to deployment, with a successful track record of cross-functional team collaboration.
- Experience extracting temporal features, such as trends, seasonality, and lagged values, to improve model accuracy.
- Skills in filtering, smoothing, and transforming data for noise reduction and effective feature extraction.
- Experience optimizing code for performance in high-throughput, low-latency environments.
- Experience deploying models into production, with expertise in monitoring their performance and integrating them with CI/CD pipelines using AWS, Docker, or Kubernetes.
- Familiarity with end-to-end analytical architectures, including data lakes, data warehouses, and real-time processing systems.
- Experience creating insightful dashboards and reports using tools such as Power BI, Tableau, or custom visualization frameworks to effectively communicate model results to stakeholders.
- 6-8 years of experience in data science, with a significant focus on predictive maintenance and anomaly detection.

Qualifications
- Bachelor's or Master's degree/diploma in Engineering, Statistics, Mathematics, or Computer Science
- 6+ years of experience as a Data Scientist
- Strong problem-solving skills
- Proven ability to work independently and accurately
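Several of the feature-engineering skills listed above (lagged values, rolling-window smoothing, trend extraction) can be sketched in plain Python. This is an illustrative toy only; the window sizes and the vibration series are invented for the example and are not part of the job description.

```python
# Toy temporal feature extraction for predictive maintenance
# (illustrative only: window sizes and data are invented).

def lag_features(series, lags=(1, 2, 3)):
    """Return {lag_k: value at t-k} for the last point of the series."""
    return {f"lag_{k}": series[-1 - k] for k in lags if k < len(series)}

def rolling_mean(series, window=3):
    """Smooth the series with a trailing moving average (noise reduction)."""
    return [
        sum(series[max(0, i - window + 1): i + 1]) / (i - max(0, i - window + 1) + 1)
        for i in range(len(series))
    ]

def trend(series):
    """Least-squares slope of the series against time (a simple trend feature)."""
    n = len(series)
    t_mean = (n - 1) / 2
    x_mean = sum(series) / n
    num = sum((t - t_mean) * (x - x_mean) for t, x in enumerate(series))
    den = sum((t - t_mean) ** 2 for t in range(n))
    return num / den

vibration = [0.50, 0.52, 0.51, 0.55, 0.60, 0.68, 0.80]  # rising fault signature
features = {**lag_features(vibration), "trend": trend(vibration)}
print(features)
```

In a real pipeline these transforms would run over Spark windows rather than Python lists, but the feature definitions carry over directly.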

Posted 4 days ago

Apply

5.0 - 10.0 years

0 Lacs

Telangana

On-site

As a skilled Technical Lead, you will be responsible for leading the end-to-end design and delivery of an AI-powered solution that combines NLP, anomaly detection, GenAI/RAG, and rule engines on a cloud platform. Your role will involve owning the architecture, technical roadmap, and production reliability while guiding a cross-functional team comprising ML, Data Engineering, Backend, DevOps, and QA professionals.

Key Responsibilities:
- Define the reference architecture, including ingestion, lakehouse, features/vectors, models, and APIs/UX; make build/buy decisions
- Select, train, and operationalize NLP, anomaly/fraud models, and GenAI/RAG components; establish human-in-the-loop review
- Implement experiment tracking, model registry, CI/CD for models, automated evaluation, drift monitoring, and rollback
- Design retrieval pipelines, covering chunking, embeddings, vector namespaces, guardrails, and citation-based responses
- Oversee the feature store, labelling strategy, and high-quality gold datasets; enforce DQ rules and lineage
- Right-size SKUs, caching/batching, cost-per-token dashboards, and SLOs for latency/throughput
- Break down epics/stories, estimate and sequence work, unblock the team, and run technical design reviews
- Translate business policy into rule + ML ensembles, presenting options, risks, and trade-offs
- Establish the testing pyramid, performance targets, and observability dashboards
- Produce design docs, runbooks, and SOPs; mentor engineers; uplift coding and review standards

Skills Requirements:
- 10+ years of software development experience, with at least 5 years leading AI/ML projects
- Proficiency in supervised/unsupervised modeling, anomaly detection, NLP, OCR pipelines, and evaluation design
- Experience with LangChain/LangGraph, embeddings, MLflow, Delta Lake, Python (FastAPI), and Azure services
- Strong technical decision-making, cross-team coordination, and stakeholder communication skills

Qualifications:
- Bachelor's degree in Computer Science or a related science field, or equivalent

Preferred Skills:
- Certifications in AI/ML disciplines
- Hands-on experience with explainable AI and AI governance
- Familiarity with regulatory compliance standards for financial data

(Note: Additional company details were not provided in the job description.)
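The retrieval pipeline described in this posting (chunking, embeddings, similarity lookup) can be illustrated with a minimal, vendor-neutral sketch. A real system would use a learned embedding model and a vector database; here a bag-of-words vector and cosine similarity stand in, and the document text is invented for the example.

```python
# Toy retrieval step of a RAG pipeline (illustrative only): fixed-size
# chunking plus cosine similarity over bag-of-words vectors. A real
# system would use a learned embedding model and a vector database.
import math
from collections import Counter

def chunk(text, size=48):
    """Split text into fixed-size character chunks (naive chunking)."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(text):
    """Stand-in 'embedding': a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=1):
    """Return the top-k chunks most similar to the query."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

doc = ("Anomaly detection flags unusual transactions. "
       "Rule engines encode fixed business policy. "
       "RAG grounds model answers in retrieved context.")
chunks = chunk(doc)
print(retrieve("how are answers grounded in context", chunks, k=1))
```

Chunk size, overlap, and the similarity metric are exactly the design levers the role's "chunking, embeddings, vector namespaces" bullet refers to.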

Posted 4 days ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

As a Senior Applied Scientist at Oracle Cloud Infrastructure, you will be part of the Generative AI and AI Solutions Engineering team, working on building advanced AI solutions that address real-world challenges. Your role will involve:
- Designing, building, and deploying cutting-edge machine learning and generative AI systems, focusing on Large Language Models (LLMs), AI agents, Retrieval-Augmented Generation (RAG), and large-scale search.
- Collaborating with scientists, engineers, and product teams to transform complex problems into scalable AI solutions ready for the cloud.
- Developing models and services for decision support, anomaly detection, forecasting, recommendations, NLP/NLU, speech recognition, time series, and computer vision.
- Conducting experiments, exploring new algorithms, and pushing the boundaries of AI to enhance performance, customer experience, and business outcomes.
- Ensuring the implementation of ethical and responsible AI practices in all solutions.

You are an ideal candidate for this role if you possess deep expertise in applied ML/AI, hands-on experience in building production-grade solutions, and the ability to innovate at the intersection of AI and enterprise cloud. Join us at Oracle Cloud Infrastructure to blend the speed of a startup with the scale of an enterprise leader.

Posted 4 days ago

Apply

2.0 - 6.0 years

0 Lacs

Kolkata, West Bengal

On-site

Role Overview: You will be responsible for selecting features, building and optimizing classifiers using machine learning techniques, data mining using state-of-the-art methods, enhancing data collection procedures, processing, cleansing, and verifying data integrity, performing ad-hoc analysis, creating automated anomaly detection systems, leading project meetings, managing multiple development designs, providing customer training, and participating in internal projects, with hands-on experience with data models and data warehousing technologies. You must be organized and analytical, have excellent communication skills, and be proficient in using query languages.

Key Responsibilities:
- Select features, and build and optimize classifiers using machine learning techniques
- Perform data mining using state-of-the-art methods
- Enhance data collection procedures and ensure data integrity
- Process, cleanse, and verify data for analysis
- Conduct ad-hoc analysis and present results clearly
- Create automated anomaly detection systems
- Lead project meetings and manage multiple development designs
- Provide customer training and participate in internal projects
- Have hands-on experience with data models and data warehousing technologies
- Possess good communication skills and proficiency in using query languages

Qualifications Required:
- Excellent understanding of machine learning techniques and algorithms
- Experience with common data science toolkits such as R, Weka, NumPy, and MATLAB
- Experience with SAS, Oracle Advanced Analytics, or SPSS is an advantage
- Proficiency in query languages such as SQL, Hive, and Pig
- Experience with NoSQL databases such as MongoDB, Cassandra, or HBase is an advantage
- Good applied statistics skills and scripting/programming skills
- Bachelor's/Master's in Statistics/Economics or an MBA desirable
- BE/BTech/MTech with 2-4 years of experience in data science

Additional Details: The company values skills such as communication, coaching, building relationships, client service passion, curiosity, teamwork, courage, integrity, technical expertise, openness to change, and adaptability.

Posted 4 days ago

Apply

4.0 - 8.0 years

8 - 16 Lacs

Bengaluru

Work from Office

Role & responsibilities:
1. Design and implement anomaly detection models using statistical and unsupervised learning techniques.
2. Optimize AI models for edge devices.
3. Work with time-series and sensor data, applying appropriate preprocessing, noise reduction, and feature extraction techniques.
4. Utilize deep learning architectures such as CNNs, RNNs, LSTMs, GRUs, and GANs.
5. Perform model evaluation, optimization, cross-validation, and hyperparameter tuning.
6. Visualize and analyze datasets using Python tools like Pandas, Matplotlib, and Seaborn.
7. Collaborate with cross-functional teams to translate business needs into AI-powered solutions.

Good to have:
1. Sensor data processing (noise reduction, feature extraction, preprocessing techniques).
2. Gen AI and experience applying the RAG technique would be an added value.

Interested candidates may share their updated CV at jeeva@bvrpc.com
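The statistical anomaly detection mentioned in point 1 can be sketched with a classic control-chart approach: fit mean and standard deviation on a normal-operation window, then flag readings far outside it. This is a minimal illustration with invented sensor values and thresholds, not a production design.

```python
# Minimal statistical anomaly detector for sensor readings (a toy sketch,
# not production code): fit mean/std on a normal-operation window, then
# flag readings outside mean +/- k*std.
import statistics

def fit(baseline):
    """Estimate normal-operation statistics from a training window."""
    return statistics.mean(baseline), statistics.stdev(baseline)

def is_anomaly(x, mean, std, k=3.0):
    """Flag readings more than k standard deviations from the baseline mean."""
    return abs(x - mean) > k * std

baseline = [20.1, 19.8, 20.3, 20.0, 19.9, 20.2, 20.1, 19.7]  # e.g. temperature
mean, std = fit(baseline)
readings = [20.0, 20.4, 27.5]
flags = [is_anomaly(x, mean, std) for x in readings]
print(flags)
```

On an edge device this amounts to two stored floats and one comparison per reading, which is why threshold rules like this remain a common baseline before moving to unsupervised models.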

Posted 5 days ago

Apply

5.0 - 9.0 years

0 Lacs

Bangalore, Karnataka

On-site

As a Principal Applied Data Scientist at Oracle Cloud Infrastructure, you will be responsible for designing, building, and deploying cutting-edge machine learning and generative AI systems. Your focus will primarily be on Large Language Models (LLMs), AI agents, Retrieval-Augmented Generation (RAG), and large-scale search.

Key Responsibilities:
- Collaborate with scientists, engineers, and product teams to transform complex problems into scalable AI solutions suitable for enterprises.
- Develop models and services for decision support, anomaly detection, forecasting, recommendations, NLP/NLU, speech recognition, time series, and computer vision.
- Conduct experiments, explore new algorithms, and push the boundaries of AI to enhance performance, customer experience, and business outcomes.
- Ensure the implementation of ethical and responsible AI practices across all solutions.

Qualifications Required:
- Deep expertise in applied ML/AI.
- Hands-on experience in building production-grade solutions.
- Creativity to innovate at the intersection of AI and enterprise cloud.

Oracle Cloud Infrastructure is known for blending the speed of a startup with the scale of an enterprise leader. The Generative AI Service team focuses on building advanced AI solutions that address real-world, global challenges using powerful cloud infrastructure.

Posted 5 days ago

Apply

8.0 - 12.0 years

0 Lacs

Haryana

On-site

As a Principal/Lead AI Engineer at Simpplr, you will play a crucial role in transforming the platform from AI-enabled to AI-native. Your responsibilities will include designing, coding, and leading the development of Simpplr's AI-native platform with a focus on explainability, traceability, observability, and auditing. You will deliver enterprise-scale AI capabilities using advanced technologies such as MCP, RAG, multimodal processing, LLMs, SLMs, and custom ML models. Additionally, you will define technical strategies, align with product and engineering leaders, and champion Responsible AI principles to ensure compliance with security and governance standards.

Key Responsibilities:
- Design, code, and lead the build of Simpplr's AI-native platform with core capabilities of explainability, traceability, observability, and auditing.
- Deliver enterprise-scale AI capabilities using technologies such as MCP, RAG, multimodal processing, LLMs, SLMs, and custom ML models.
- Define the technical strategy and roadmap, aligning with product and engineering leaders to deliver customer value at speed and scale.
- Establish Responsible AI principles, including bias detection, privacy-by-design, and compliance with security and governance standards.
- Build robust AIOps and MLOps pipelines with versioning, CI/CD, monitoring, rollback, and drift detection.
- Mentor engineers and data scientists to build high-trust, high-performance AI teams.
- Partner with cross-functional teams to integrate AI into core product experiences and ensure measurable impact on user engagement and productivity.

Qualifications Required:
- 8+ years in applied AI/ML engineering, with proven leadership in building complex, high-scale AI platforms or products.
- Experience designing and delivering advanced AI solutions using LLMs/SLMs, MCP architecture, RAG pipelines, vector DBs, agentic AI workflows, and automations.
- Strong background in AIOps/MLOps, data engineering, and large-scale inference optimization.
- Ability to build and nurture high-performing AI teams and foster a culture of innovation, accountability, and trust.
- Curiosity and eagerness to learn about next generations of agentic AI workflows, personalization, and LLM-based solutions.

Technical Skill Sets:
- AI/ML: LLMs, MCP, RAG, multimodal generation, SLMs, prompt engineering.
- Programming: Python, PyTorch/TensorFlow, scikit-learn.
- Data Engineering: SQL/NoSQL, Snowflake, BigQuery, Databricks.
- MLOps & AIOps: CI/CD, model versioning, monitoring, observability, Docker, Kubernetes, Kafka, Spark, MLflow.
- Cloud Infrastructure: AWS, GCP, Azure; Terraform, Ansible.
- Responsible AI & Governance: Bias detection, interpretability, audit trails, privacy-by-design.

You will be an ideal candidate if you are a builder-leader who can design enterprise-grade systems, ship production code, and mentor a high-performance team. Your focus on Responsible AI and operational realities will be key in creating the backbone of Simpplr's agentic universe. Excited by the challenge of building the foundational components of an AI-native platform, you will drive innovation and excellence in AI engineering at Simpplr.
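The drift detection named in the MLOps responsibilities can be illustrated with one common, simple approach (an assumption for illustration, not Simpplr's actual implementation): compare a live feature window against the training distribution and flag when the mean shift exceeds a threshold.

```python
# Minimal data-drift check (one common approach; threshold and data are
# invented for illustration): flag drift when the live window's mean
# shifts by more than DRIFT_SIGMAS training standard deviations.
import statistics

DRIFT_SIGMAS = 2.0

def drift_detected(train, live, k=DRIFT_SIGMAS):
    """Compare the live window's mean against the training distribution."""
    mu, sigma = statistics.mean(train), statistics.stdev(train)
    return abs(statistics.mean(live) - mu) > k * sigma

train = [10.0, 10.2, 9.8, 10.1, 9.9, 10.0]
print(drift_detected(train, [10.1, 9.9, 10.0]))   # stable live window
print(drift_detected(train, [12.5, 12.8, 13.1]))  # shifted live window
```

Production systems typically use distribution-level tests (e.g., population stability index or KS tests) per feature, but the trigger-then-retrain loop is the same shape.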

Posted 5 days ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

You are a strategic thinker passionate about driving solutions in release management and data integrity. You have found the right team.

As a Release Manager in our Finance Data Insights Release Management team, you will spend each day ensuring proper controls and change management processes are strictly followed, delivering accurate, complete, and consistent data for both internal financial reporting and external regulatory requirements. As a Release Manager Associate, you will work closely with Line of Business stakeholders, data Subject Matter Experts (SMEs), consumers, and technology teams across Finance, Credit Risk & Treasury, and various Program Management teams to provide effective risk mitigation and create a great user experience for every stakeholder utilizing our supported products.

- Drive all Change Events/Releases across the data warehouses/lakes, comprising both planned and ad hoc events.
- Manage stakeholders across the entire change management lifecycle, including influencing, negotiation, and expectation management.
- Resolve issues and escalate critical risks.
- Create decks that drive strategy conversations in support of the modernization journey.
- Document and track metrics for all supported product artifacts to continue driving a better user experience.
- Execute anomaly detection and regression testing activities to ensure requirements are in line with expectations for all impacted stakeholders.
- Program-manage key initiatives and continue to influence modernization efforts.

**Qualifications Required**:
- Bachelor's degree and 5 years of project/product/business management, business analysis, and/or process re-engineering experience.
- Data analytics skill set, with the ability to slice and dice data using various toolsets (e.g., Alteryx) and query languages (e.g., SQL).
- Proven experience managing stakeholder relationships and creative data storytelling.
- Highly skilled in creating presentations, reporting, and producing metrics.
- Detail-oriented, highly responsible, and able to work with tight deadlines.
- Strong written and verbal communication skills, with the ability to tailor messaging to various audiences.
- Strong analytical and problem-solving skills, with the ability to learn quickly and assimilate business/technical knowledge.
- Advanced Excel skills or any other analytical toolset.

**Preferred qualifications**:
- Agile delivery mindset and experience with JIRA, SQL, or JQL.
- Previous experience in a Financial Services or Consulting role is a plus.
- Alteryx.
- Data Mesh or cloud strategy knowledge is a plus.
- Excellent presentation and communication skills, with expertise in PowerPoint or other presentation tools.

Posted 5 days ago

Apply

2.0 - 6.0 years

0 Lacs

Haryana

On-site

As a professional services firm affiliated with KPMG International Limited, KPMG in India has been serving clients since its establishment in August 1993. Leveraging the global network of firms, our professionals possess a deep understanding of local laws, regulations, markets, and competition. With offices situated across various cities in India such as Ahmedabad, Bengaluru, Chandigarh, Chennai, Gurugram, Hyderabad, Jaipur, Kochi, Kolkata, Mumbai, Noida, Pune, Vadodara, and Vijayawada, we offer services to national and international clients across diverse sectors. Within KPMG in India, we have a specialized Forensic Technology lab dedicated to the recovery and utilization of crucial digital evidence to support investigations and litigation. By utilizing a range of tools, including proprietary, open source, and vendor tools, we can extract, transform, and visualize information from various sources in any format, including laptops, mobile phones, and other electronic devices. These tools enable us to ascertain the existence of erased or modified evidence, analyze electronic content and Internet usage patterns, recover deleted data, and evaluate metadata within recovered files, even in cases where attempts were made to destroy it. We are currently seeking individuals with 2-5 years of experience, preferably from Big 4 firms, who possess hands-on experience in data crunching through SQL. Technical knowledge of methods of anomaly detection, experience in fraud investigations and fraud analytics, as well as proficiency in report writing using MS Office are key qualifications for this role. Candidates should also demonstrate proficiency in Microsoft Office and Windows-based applications, strong communication, presentation, and organizational skills, and the ability to effectively communicate complex technical problems in simple terms. 
Additionally, the ideal candidate should be capable of presenting information professionally and concisely with supporting data, working independently in a dynamic environment with tight deadlines, and engaging with cross-functional teams to drive projects forward. KPMG in India is an equal opportunity employer.

Posted 6 days ago

Apply

1.0 - 5.0 years

0 Lacs

Maharashtra

On-site

The primary responsibility in this role is to maintain accuracy and quality in line with the standards set by the Business Unit for all SAFR reviews. This involves managing workload/volumes and delivery expectations according to business requirements. It is essential to develop a comprehensive understanding of the business process for which the reviews are conducted. Regularly updating the centralized inbox and tracking database is a key task, along with maintaining detailed records of communication with all involved parties, including any changes made. An extreme focus on quality is crucial, with a clear understanding of the financial and legal implications involved. Drawing leadership attention to any anomalies within the process is part of the role, as is actively participating in all interactions such as team huddles and stakeholder discussions. Adherence to regulatory requirements within the organization is also a necessary aspect of the job.

The top 5 competencies required for this role are Focusing on Clients, Working in Teams, Driving Excellence, Influencing Stakeholders, and Adapting to Change. Qualifications for this position include being a Graduate or Postgraduate. The shift timings are from 1:30 PM to 10:30 PM, and the location is Vikhroli.

Posted 6 days ago

Apply

0.0 years

0 Lacs

jaipur, rajasthan, india

On-site

Job Description Responsible for end-to-end implementation and configuration of SIEM (LogRhythm) and SOAR (Cortex) solutions across customer environments. Onboard diverse log sources (cloud, on-prem, endpoint, network) into the LogRhythm SIEM platform and normalize data, including both supported and non-supported devices. Design and implement standard and custom detection rules, dashboards, and reports, including UEBA, NBA, MITRE, log-source-based, and cross-correlation use cases. Collaborate with SOC, threat intel, TPM, and internal teams to enhance security posture and streamline incident response. Troubleshoot log ingestion and parsing errors. Implement threat intelligence integration to enrich alerts and improve contextual awareness. Ensure compliance with security best practices and frameworks (e.g., MITRE ATT&CK, NIST). Provide documentation, runbooks, and LLDs to the Operations team as part of handover. Stay current with emerging threats, tools, and technologies in the SIEM/SOAR ecosystem. Collaborate with the Assurance team to ensure smooth handover of projects and adhere to defined responsibilities. Design, implement, and maintain LogRhythm SIEM, Cortex SOAR, and LogRhythm UEBA solutions across cloud and on-premise environments. Collaborate with stakeholders to gather and analyze security monitoring and automation requirements. Onboard, parse, and normalize data from diverse log sources including cloud (AWS, GCP, Azure), EDRs, firewalls, proxies, and identity systems. Develop and fine-tune correlation rules, detection use cases, and alerting logic based on attacker TTPs (aligned to MITRE ATT&CK). Configure and customize UEBA models to detect abnormal user and entity behavior (e.g., data exfiltration, lateral movement). Integrate third-party threat intelligence feeds for enrichment and contextual detection. Conduct testing, tuning, and validation of detection and response logic to reduce false positives and improve fidelity.
Provide Level 2 support for SIEM/SOAR/UEBA issues during the project delivery lifecycle and work closely with SOC, TPM, and customer teams. Prepare technical documentation, runbooks, and LLDs. Continuously monitor industry trends, product updates, and threat intelligence to improve detection coverage. Desired Skill Sets: Hands-on experience with SIEM platforms. Experience with SOAR platforms. Proficiency with UEBA solutions. Strong understanding of log parsing, normalization, and data onboarding using Syslog, APIs, agents, or collectors. Expertise in developing correlation rules, detection logic, and custom parsers. Experience building and maintaining OOTB SOAR playbooks for automated incident response. Familiarity with behavioral analytics, anomaly detection, and machine learning models in UEBA systems. Knowledge of network protocols, network logging, OS logging, endpoint telemetry, and cloud security logging (e.g., VPC flow logs, CloudTrail, Azure Activity Logs). OEM certifications; CEH, CompTIA Security+, or similar; CSP security certifications (e.g., AZ-500).
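As a rough illustration of the cross-correlation use cases mentioned above (this is plain Python, not LogRhythm or Cortex rule syntax), a classic brute-force detection correlates repeated failed logins with a subsequent success in a short window. Event fields and thresholds here are hypothetical:

```python
from collections import defaultdict

def correlate_bruteforce(events, fail_threshold=3, window_secs=300):
    """events: iterable of (timestamp, user, outcome) tuples, outcome in
    {'fail', 'success'}. Returns the set of users matching the rule:
    >= fail_threshold failures within window_secs before a success."""
    fails = defaultdict(list)
    hits = set()
    for ts, user, outcome in sorted(events):
        if outcome == "fail":
            fails[user].append(ts)
        elif outcome == "success":
            recent = [t for t in fails[user] if ts - t <= window_secs]
            if len(recent) >= fail_threshold:
                hits.add(user)
    return hits

events = [
    (0, "alice", "fail"), (60, "alice", "fail"), (120, "alice", "fail"),
    (150, "alice", "success"),                     # 3 fails then success -> match
    (10, "bob", "fail"), (400, "bob", "success"),  # lone stale fail -> no match
]
print(correlate_bruteforce(events))  # {'alice'}
```

A production SIEM expresses the same logic declaratively and adds suppression, enrichment, and case creation on top.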

Posted 1 week ago

Apply

10.0 - 14.0 years

0 Lacs

maharashtra

On-site

The Firmwide Resiliency Office (FRO), part of the Office of the Chief Finance Officer, is responsible for designing the Firm's Resilience strategy. This includes Resiliency Planning, Testing, Exercising, Reporting, and Product and Concern Management. The team comprises technical product, data, and analytic roles that support business resiliency. FRO collaborates closely with senior leadership, Lines of Business, Functional Resiliency teams, and key functions such as Control Management, Risk Management & Compliance, and Audit to ensure the resiliency program aligns with the firm's risk-taking activities. Additionally, the team provides corporate governance, awareness, and training. As a Data Management Vice President within FRO, you will play a crucial role in supporting data strategies for the Firm's Resilience Program. You will work closely with all areas of FRO and key stakeholders across Lines-of-Business (LOBs) and Corporate Functions (CFs). This role requires an execution-oriented approach, exceptional data analytical skills, and full engagement with the overall program. A key responsibility will be implementing initiatives to enhance and automate resiliency data management frameworks. You will design and implement data strategies, facilitate data sharing, and establish data governance and controls using advanced data wrangling and business intelligence tools. Your expertise in SQL, Python, data transformation tools, and experience with AI/ML technologies will be essential in driving these initiatives forward. Design, develop, and maintain scalable data pipelines and ETL processes using Databricks and Python, and write complex SQL queries for data extraction, transformation, and loading. Develop and optimize data models to support analytical needs and improve query performance, and collaborate with data scientists and analysts to support advanced analytics and AI/ML initiatives. Automate data processes and reporting using scripting utilities like Python, R, etc.
Perform in-depth data analysis to identify trends and anomalies, translating findings into reports and visualizations. Collaborate with stakeholders to understand data requirements and deliver data services promptly. Partner with data providers to design data-sharing solutions within a data mesh concept. Oversee data ingestion, storage, and analysis; create rules for data sharing; and maintain data quality and integrity through automated checks and testing. Monitor and analyze data systems to enhance performance, evaluate new technologies, and partner with technology teams and data providers to address data-related issues and maintain projects. Contribute to the design and implementation of data governance frameworks and manage firm-wide resiliency data management frameworks, procedures, and training. Stay updated with emerging data technologies and best practices, and develop and maintain documentation for data engineering processes. Lead and manage a team of data professionals, providing guidance, mentorship, and performance evaluations to ensure successful project delivery and professional growth. Required qualifications, skills and capabilities: - Bachelor's degree in Computer Science, Data Science, Statistics, or a related field, or equivalent experience. - Expert in SQL for data manipulation, querying, and optimization, with advanced database experience. - Proficient in scripting utilities like Python, R, etc., including data analysis libraries such as Pandas and NumPy. - Proficient in data transformation tools like Alteryx and Tableau, and experienced in working with APIs. - Direct experience with Databricks, Spark, and Delta Lake for data processing and analytics. - Experience in data reconciliation, data lineage, and familiarity with data management and reference data concepts. - Excellent analytical, problem-solving, and communication skills, with a collaborative and team-oriented mindset.
- Solid knowledge of software architecture principles, cloud-native design (e.g., AWS, Azure, GCP), containerization (Docker, Kubernetes), and CI/CD best practices. - Self-starter with strong verbal, written, and listening skills, and excellent presentation abilities. - Proven influencing skills and the ability to be effective in a matrix environment. - Understanding of operational resilience or business continuity frameworks in regulated industries. Preferred qualifications, skills and capabilities: - 10+ years of experience in data management roles such as Data Analyst or Data Engineer. - Strong understanding of data warehousing concepts and principles. - Skilled in handling large, complex datasets for advanced data analysis, data mining, and anomaly detection. - Experience with AI/ML technologies and frameworks.
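The pipeline responsibilities above follow the usual extract-transform-load shape. A toy, self-contained sketch of that shape (sqlite3, an in-memory CSV, and invented field names stand in for the Databricks/Spark stack the listing actually names):

```python
import csv
import io
import sqlite3

# Extract: read raw records; the in-memory CSV stands in for a source export.
raw = io.StringIO("date,desk,balance\n2024-01-01,FX,100.5\n2024-01-01,Rates,\n")
rows = list(csv.DictReader(raw))

# Transform: type conversion plus a simple data-quality rule (drop rows with
# a missing balance), a stand-in for the automated checks described above.
clean = [
    {"date": r["date"], "desk": r["desk"], "balance": float(r["balance"])}
    for r in rows if r["balance"]
]

# Load: write only the validated rows into the target store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE balances (date TEXT, desk TEXT, balance REAL)")
conn.executemany("INSERT INTO balances VALUES (:date, :desk, :balance)", clean)

loaded = conn.execute("SELECT COUNT(*) FROM balances").fetchone()[0]
print(loaded)  # 1 row survives the quality check
```

In a real pipeline each stage would be a separate, monitored task (e.g., orchestrated jobs with lineage tracking), but the extract/transform/load separation is the same.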

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

maharashtra

On-site

The Balance Sheet Management (BSM) unit is a division within Treasury and is part of Citigroup's broader Finance organization. The specific responsibilities of Balance Sheet Management cover five key areas: Asset and Liability Management (ALM): FTP Governance & Execution: Establishes and oversees the policies for pricing business assets and liabilities based on interest rates and liquidity values. Executes various transfer pricing processes firm-wide and ensures minimal retained costs and balance sheet in Treasury through an effective allocation process. Interest Rate Risk Management: Evaluates, analyzes, and reports Interest Rate Risk in the accrual businesses for Citigroup. Balance Sheet Analytics & Management: Conducts analytics related to the firm's balance sheet, NIR/NIM, and associated financial metrics to drive financial resource allocations. Leads internal and external communication of the strategy and performance of the balance sheet. Capital Management & Portfolio Analytics: Proposes Capital Policy design to Capital Committee, manages capital distribution strategy, and designs methodologies for allocating capital limits. Leads the design and delivery of portfolio management reporting and analytical capabilities to support Treasury's securities portfolios. Balance Sheet Costing (Infrastructure): Leads the design, development, and implementation of the firm's Funds Transfer Pricing system. Balance Sheet Costing (Methodology): Leads cross-functional working groups to design and evolve Balance Sheet costing frameworks and methodologies. Attributing resource costs/benefits across users/providers of balance sheet resources. Asset Allocation: Designs and manages Treasury's allocation process on security portfolio strategy for Citigroup's $400bn+ global liquidity portfolios. Performs review and challenge of the Stress Testing results for securities portfolios and oversees model governance for valuation and forecasting of AFS/HTM asset classes. 
The Balance Sheet Management Analyst will focus on designing, developing, and optimizing Tableau dashboards, data pipelines, and reporting solutions to support IRRBB (Interest Rate Risk in the Banking Book) and Balance Sheet Management functions. The role requires strong technical skills in data visualization, data flows, and data engineering to ensure scalable and automated Treasury analytics. This position is ideal for candidates experienced in building interactive Tableau dashboards, developing ETL pipelines, and applying data science techniques to analyze and transform large financial datasets. The analyst will collaborate with various teams to improve risk analytics, automate reporting, and optimize data-driven decision-making processes. Responsibilities: - Develop and optimize Tableau dashboards for real-time IRRBB, balance sheet, and liquidity analytics, providing intuitive visual insights for Treasury stakeholders. - Design and maintain automated data pipelines, integrating SQL, Python, and ETL processes to streamline financial data ingestion, transformation, and reporting. - Enhance data flows and database structures to ensure high data accuracy, consistency, and governance across Treasury risk reporting. - Implement data science methodologies for time series analysis, trend forecasting, and anomaly detection to support IRRBB and balance sheet risk analytics. - Automate EUC solutions to replace manual reporting processes with scalable, repeatable, and efficient workflows. - Collaborate with cross-functional teams to understand data needs, enhance reporting capabilities, and ensure alignment with regulatory and business requirements. Qualifications: - 5+ years of experience in banking/finance, focusing on Balance Sheet Management, IRRBB, Treasury, or Risk Analytics. - Strong expertise in Tableau dashboard development, including data visualization best practices and performance optimization. 
- Experience in building and maintaining data pipelines using SQL, Python, and ETL tools for financial data processing. - Familiarity with data science techniques applied to financial datasets. - Proficiency in data governance, validation, and reconciliation for accuracy in Treasury risk reporting. - Strong analytical and problem-solving skills to translate business needs into technical solutions. - Bachelor's degree in Finance, Computer Science, Data Science, Mathematics, or a related field. Education: Bachelor's degree; a Master's degree is a plus. The role emphasizes technical expertise in Tableau, data engineering, and data science while maintaining a strong connection to IRRBB and Balance Sheet Management solutions. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster. Job Family Group: Finance Job Family: Balance Sheet Management Time Type: Full time Most Relevant Skills: Business Acumen, Data Analysis, Internal Controls, Management Reporting, Problem Solving, Process Execution, Risk Identification and Assessment, Transfer Pricing. Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter.
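As one hedged example of the time-series anomaly detection this role mentions, a rolling z-score flags points that deviate sharply from their recent history; the window size, threshold, and balance series below are illustrative, not actual Treasury data:

```python
import statistics
from collections import deque

def rolling_anomalies(series, window=20, z=3.0):
    """Return indices whose value deviates more than z standard deviations
    from the trailing window of prior observations."""
    buf = deque(maxlen=window)
    flagged = []
    for i, x in enumerate(series):
        if len(buf) == window:
            mu, sigma = statistics.mean(buf), statistics.stdev(buf)
            if sigma > 0 and abs(x - mu) / sigma > z:
                flagged.append(i)
        buf.append(x)  # update history only after testing the point
    return flagged

# A gently oscillating balance series with one sudden jump at index 30.
series = [100.0 + (i % 5) for i in range(30)] + [250.0] \
       + [100.0 + (i % 5) for i in range(31, 41)]
print(rolling_anomalies(series))  # [30]
```

In a reporting stack the flagged indices would feed a dashboard alert rather than a print statement, and production systems typically add seasonality handling on top of this baseline.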

Posted 1 week ago

Apply

0.0 - 3.0 years

0 Lacs

pune, maharashtra

On-site

The leading cloud-based platform provider for the mortgage finance industry, ICE, is seeking an Associate to perform exception reviews on documents and audits at various stages of the loan production process. Your role will involve ensuring compliance with customer requirements and industry regulations while maintaining the highest levels of quality and efficiency. Your responsibilities will include validating document recognition and data extraction, conducting data integrity checks, and following standard operating procedures. You will collaborate with management to address service delivery challenges, uphold data security protocols, and meet defined SLAs. Additionally, you will create bounding boxes around data, label documents, provide labeled datasets for model training, and identify anomalies in the software application. Meeting daily quality and productivity targets and working in 24/7 shifts will be essential aspects of your role. To qualify for this position, you should have a Bachelor's degree or academic equivalent and ideally possess 0 to 2 years of mortgage lending experience, including processing, underwriting, closing, quality control, or compliance review. Proficiency in mortgage document terminology, Microsoft Office tools (Excel and Word), keyboard shortcuts, and Microsoft Windows is preferred. Strong attention to detail, excellent time management, organizational skills, efficiency, ability to work under pressure, and effective communication in English (both spoken and written) are key attributes for success in this role.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

karnataka

On-site

Job Description: Omio is building the future of travel pricing by transitioning from manual, rule-based systems to AI-driven dynamic pricing to enhance revenue, efficiency, and competitiveness. As part of this transition, we are seeking an individual to develop cutting-edge solutions and drive the adoption of AI-driven pricing within the company. Main Tasks And Responsibilities: - Build Models: Develop machine learning models such as time series and RNNs to forecast demand and optimization systems for dynamic price recommendations. - Pilot AI Solutions: Design and implement a Proof of Concept (PoC) for dynamic pricing in a test market. - Evangelize AI Pricing: Advocate for the adoption of AI-driven pricing by demonstrating its value to key stakeholders. - Analyse Large Data: Utilize Omio's data and external factors like competitor pricing to optimize pricing decisions. - Drive Automation: Establish systems for real-time anomaly detection and automated price adjustments. - Collaborate: Work with Pricing Commercial, Product, and Engineering teams to integrate AI into business workflows and ensure smooth adoption. Qualifications: - Minimum of 3 years experience in building complex models and algorithms. - Proven track record of designing and deploying models in production, particularly for pricing or forecasting. - Strong communication and presentation skills to effectively explain technical concepts to non-technical stakeholders at a C-level. - Self-driven individual contributor with the ability to work autonomously. - Experience in developing production-ready AI systems from inception is highly desirable. Knowledge & Skills: - Passion for driving organizational change and inspiring teams to adopt data-driven approaches. - Excellent communication skills with stakeholders, including C-Level executives. - Focus on solving practical business problems rather than theoretical model-building. 
- Proficiency in Python, with expertise in data science libraries like pandas, NumPy, scikit-learn, TensorFlow, and PyTorch. - Familiarity with time-series forecasting, anomaly detection, and Bayesian modeling. - Experience in building low-latency or near real-time processing systems. Additional Information: Upon acceptance of a job offer, background screening will be conducted by Giant Screening in partnership with Omio. Consent will be obtained before any information is shared with the screening service. Omio offers a dynamic work environment where you can be part of shaping the future of travel pricing.
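For context on the forecasting side of this role, any dynamic-pricing model is usually benchmarked against a simple baseline before an RNN or other learned model is trusted. A seasonal-naive sketch with hypothetical demand numbers (this is a baseline to beat, not Omio's actual models):

```python
def seasonal_naive_forecast(history, season=7, horizon=7):
    """Forecast each future step as the observation one season earlier
    (the standard 'same day last week' baseline for daily demand)."""
    assert len(history) >= season, "need at least one full season of history"
    return [history[-season + (h % season)] for h in range(horizon)]

# Four weeks of daily bookings with a stable weekly shape.
week = [120, 130, 150, 170, 160, 210, 240]
history = week * 4
print(seasonal_naive_forecast(history))  # repeats the last observed week
```

If a learned demand model cannot beat this one-liner on held-out data, its forecasts should not drive price changes; that comparison is the usual first gate in a pricing PoC.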

Posted 1 week ago

Apply

0.0 - 4.0 years

0 Lacs

noida, uttar pradesh

On-site

Presage Insights is a cutting-edge startup specializing in predictive maintenance. The platform leverages AI-powered diagnostics to detect and diagnose machine failures, integrating with an inbuilt CMMS (Computerized Maintenance Management System) to optimize industrial operations. The company is expanding its capabilities in time series analysis and LLM-powered insights. As a Machine Learning Intern at Presage Insights, you will work on real-world industrial datasets, applying AI techniques to extract insights and enhance predictive analytics capabilities. Your role will involve collaborating with the engineering team to develop, fine-tune, and deploy LLM-based models for anomaly detection, trend prediction, and automated report generation. Key Responsibilities: - Research and implement state-of-the-art LLMs for natural language processing tasks related to predictive maintenance. - Apply machine learning techniques for time series analysis, including anomaly detection and clustering. - Fine-tune pre-trained LLMs (GPT, LLaMA, Mistral, etc.) for domain-specific applications. - Develop pipelines for processing and analyzing vibration and sensor data. - Integrate ML models into the existing platform and optimize inference efficiency. - Collaborate with software engineers and data scientists to improve ML deployment workflows. - Document findings, methodologies, and contribute to technical reports. Required Qualifications: - Pursuing or completed a degree in Computer Science, Data Science, AI, or a related field. - Hands-on experience with LLMs and NLP frameworks such as Hugging Face Transformers, LangChain, OpenAI API, or similar. - Strong proficiency in Python, PyTorch, TensorFlow, or JAX. - Solid understanding of time series analysis, anomaly detection, and signal processing. - Familiarity with vector databases (FAISS, Pinecone, ChromaDB) and prompt engineering. - Knowledge of ML model deployment (Docker, FastAPI, or TensorFlow Serving) is a plus. 
- Ability to work independently and adapt to a fast-paced startup environment. What You'll Gain: - Hands-on experience working with LLMs & real-world time series data. - Opportunity to work at the intersection of AI and predictive maintenance. - Mentorship from industry experts in vibration analysis and AI-driven diagnostics. - Potential for a full-time role based on performance. Note: This is an unpaid internship.
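As a small illustration of the vibration-data processing this internship touches, classic per-window features such as RMS, peak, and crest factor can separate a clean signal from one containing an impact spike (as a bearing defect might produce). The signals below are synthetic:

```python
import math

def vibration_features(signal):
    """Classic per-window features used in vibration-based fault detection."""
    rms = math.sqrt(sum(x * x for x in signal) / len(signal))
    peak = max(abs(x) for x in signal)
    return {"rms": rms, "peak": peak, "crest_factor": peak / rms}

# A healthy-looking sine window vs. the same window with an injected impulse.
n = 256
healthy = [math.sin(2 * math.pi * 5 * i / n) for i in range(n)]
faulty = healthy[:]
faulty[40] += 4.0  # impact spike

print(vibration_features(healthy)["crest_factor"])  # ~1.41 for a pure sine
print(vibration_features(faulty)["crest_factor"])   # noticeably higher
```

Features like these are typically computed per window and fed to the anomaly-detection or clustering models the listing describes, rather than the raw waveform.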

Posted 1 week ago

Apply

8.0 - 13.0 years

30 - 32 Lacs

pune

Work from Office

Python (primary skill); anomaly detection or fraud detection; graph neural networks. Interested candidates can share resumes at Kashif@d2nsolutions.com.

Posted 1 week ago

Apply

2.0 - 5.0 years

10 - 15 Lacs

bengaluru

Work from Office

Roles and responsibilities Develop novel AIML algorithms and methodologies applicable to control systems engineering and anomaly detection problems. Analyse large time series datasets to identify patterns, trends, and insights to deliver algorithms dealing with diagnostics and prognostics of large energy equipment. Implement and optimize AIML algorithms and models. Design and conduct experiments to test the performance and robustness of AIML-based solutions. Document research findings, methodologies, and implementation details for knowledge sharing and future reference. Collaborate with cross-functional teams, including engineers, researchers, and product managers, to integrate AIML solutions into existing system platforms. Keep abreast of the latest advancements in AIML, control systems, and signal processing engineering through literature review and participation in relevant conferences and seminars. Required Qualifications PhD with 2-5 years' experience, or a Master's degree with 5-7 years' experience, from a reputed institution in Computer Science or another engineering field. Strong understanding of fundamental concepts in statistics, machine learning, and artificial intelligence. Proficiency in programming languages commonly used in AIML applications such as Python, PySpark, PyTorch, Pandas, or similar. Experience with data analysis and manipulation using tools like Pandas, NumPy, or MATLAB. Familiarity with software development best practices, including version control (e.g., Git) and agile methodologies. Basic knowledge of cloud-based architectures. Excellent problem-solving skills and ability to think critically and creatively. Strong communication skills and ability to work effectively in a collaborative team environment. Demonstrated ability to build prototypes and quickly develop proof of concepts to show solution feasibility.
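One hedged sketch of the diagnostics work described above: an EWMA control chart, a standard statistical-process-control tool for equipment time series, catches slow drifts that a raw threshold on individual readings would miss. Parameters and readings here are illustrative, not from any real plant:

```python
import math

def ewma_alarms(readings, target, lam=0.2, k=3.0, sigma=1.0):
    """EWMA control chart: exponentially smooth the readings and alarm
    whenever the smoothed statistic leaves the +/- k * sigma_ewma band
    around the target value."""
    limit = k * sigma * math.sqrt(lam / (2.0 - lam))  # steady-state band
    z = target
    alarms = []
    for i, x in enumerate(readings):
        z = lam * x + (1 - lam) * z
        if abs(z - target) > limit:
            alarms.append(i)
    return alarms

# A sensor reading: 20 steady samples, then a slow upward drift of 0.2/step.
readings = [50.0] * 20 + [50.0 + 0.2 * j for j in range(1, 31)]
alarms = ewma_alarms(readings, target=50.0)
print(alarms[0])  # first alarm fires while the drift is still small
```

The smoothing parameter `lam` trades detection speed against false alarms; small values accumulate evidence across many samples, which is exactly what slow-degradation prognostics needs.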

Posted 1 week ago

Apply

5.0 - 8.0 years

0 Lacs

chennai, tamil nadu, india

On-site

Hello Visionary! We empower our people to stay resilient and relevant in a constantly changing world. We're looking for people who are always searching for creative ways to grow and learn. People who want to make a real impact, now and in the future. Does that sound like you? Then it seems like you'd make a great addition to our vibrant team. We are seeking an experienced Data Scientist with a strong foundation in Python, Machine Learning, and Cloud-based Analytics. You will play a key role in building data-driven energy optimization solutions across buildings and campuses. From time-series analysis to predictive maintenance, you'll work on impactful projects that accelerate energy efficiency and sustainability on a global scale. This position is based in Chennai. You'll Make a Difference By: - Developing cloud-based data science solutions for energy monitoring, forecasting, and optimization. - Designing and implementing ML algorithms for anomaly detection, forecasting, and predictive simulations. - Collaborating with global software engineering teams to integrate AI/ML features into scalable cloud offerings. - Translating complex data into actionable insights through dashboards and business-friendly reports. - Investigating advanced statistical, AI, and machine learning models to solve real-world optimization problems. - Working on large-scale time-series data from IoT sensors and building automation systems. - Ensuring clean, structured, and high-quality data using modern data engineering best practices. - Ability to effectively communicate in English, both written and spoken. You'll Win Us Over If You Have: - B.E. / M.Sc. / MCA / B. Tech in Computer Science / Applied Mathematics or related fields with a good academic record.
- 5 to 8 years of professional software development experience, with a minimum of 3 years in the analytics field (like data science, business intelligence), using professional Python 3.x & libraries. - Proficiency in Python 3.x and libraries like pandas, NumPy, scikit-learn, PyTorch/TensorFlow, statsmodels. - Deep hands-on experience in machine learning, data mining, and algorithm development. - Solid understanding of time-series modeling, forecasting, and anomaly detection. - Working knowledge of cloud platforms (preferably AWS) and data lake architectures. - Strong data wrangling, cleaning, and schema design skills. - Ability to effectively communicate in English, both written and spoken. - A collaborative, team-oriented attitude with a proactive mindset. Bonus Points For: - Experience with Scala, graph analytics, or handling IoT/sensor data. - Hands-on exposure to Angular, JavaScript, or other frontend technologies. - Experience working in Agile software environments (scrum, sprint planning, retrospectives). - Familiarity with Docker, Kubernetes, and GitLab CI/CD. - Knowledge of clean code, TDD, and software integration processes. What You'll Gain: - Collaborate with global product teams with 20+ years of technical excellence. - Work in a disciplined SAFe Agile environment that values both delivery and work-life balance. - Make meaningful contributions to product success in a transparent and empowering culture. - Build scalable platforms that support sophisticated modular applications. Join us and be yourself! We value your unique identity and perspective, recognizing that our strength comes from the diverse backgrounds, experiences, and thoughts of our team members. We are fully committed to providing equitable opportunities and building a workplace that reflects the diversity of society. We also support you in your personal and professional journey by providing resources to help you thrive.
Come bring your authentic self and create a better tomorrow with us. Make your mark in our exciting world at Siemens. This role is based in Chennai and is an individual contributor role. You might be required to visit other locations within India and outside. In return, you'll get the chance to work with teams impacting the shape of things to come. We're Siemens. A collection of over 379,000 minds building the future, one day at a time, in over 200 countries. Find out more about Siemens careers at: www.siemens.com/careers

Posted 1 week ago

Apply

5.0 - 10.0 years

37 - 50 Lacs

hyderabad

Work from Office

Practical in applying geometric concepts, with strong Python and SQL skills. A creative problem-solver and innovator, able to build agent-driven solutions. A senior developer with TDD and AWS knowledge is preferred over a Ph.D.

Posted 1 week ago

Apply

0.0 - 4.0 years

0 Lacs

noida, uttar pradesh

On-site

As a Machine Learning Engineer, your primary focus will be on analyzing time series data, specifically vibration and acoustic signals, to create models for fault detection, diagnosis, and Remaining Useful Life (RUL) estimation. You will collaborate closely with a diverse team of data scientists, engineers, and domain experts to develop and deploy machine learning algorithms that improve our predictive maintenance capabilities. Your key responsibilities will include: - Analyzing and preprocessing time series data captured by vibration and acoustic sensors to extract relevant features for fault detection and prognostics. - Designing and implementing machine learning models for anomaly detection, clustering, and classification to detect equipment faults and forecast RUL. - Utilizing signal processing techniques to improve data quality and feature extraction procedures. - Working with domain experts to interpret model outcomes and incorporate insights into practical maintenance strategies. - Validating and testing models using historical data and real-time sensor inputs to ensure their robustness and accuracy. - Documenting methodologies, experiments, and findings for knowledge sharing and continuous enhancement. Qualifications for this role include: - A Bachelor's degree in Computer Science, Electrical Engineering, Mechanical Engineering, or a related field. - A solid grasp of fundamental machine learning concepts and algorithms. - Basic experience with anomaly detection techniques, clustering methods, and signal processing. - Proficiency in programming languages like Python or MATLAB. - Familiarity with data analysis and visualization tools. - Strong problem-solving abilities and the capacity to collaborate effectively within a team. - Excellent communication skills for conveying technical concepts efficiently. Preferred skills that would be beneficial for this role are: - Exposure to time series analysis and prior experience working with sensor data. 
- Knowledge of predictive maintenance principles and condition monitoring practices. - Experience with deep learning frameworks such as TensorFlow or PyTorch. - Understanding of industrial equipment and maintenance processes. In addition to the challenging work environment, as a Machine Learning Engineer, you can expect: - A competitive salary with performance-based incentives. - Opportunities for professional development and growth. - A collaborative and inclusive workplace culture. - Exposure to cutting-edge technologies within the predictive maintenance domain. - Health and wellness benefits. Skills required for this role include Node.js, MATLAB, data visualization, TensorFlow, Python, JavaScript, anomaly detection, clustering, deep learning, signal processing, and more.
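A minimal sketch of the Remaining Useful Life (RUL) estimation described above: fit a linear degradation trend to a health indicator (such as the vibration RMS) and extrapolate to a failure threshold. The data is synthetic and real prognostics pipelines use far richer models, but the idea is the same:

```python
def estimate_rul(times, health, threshold):
    """Least-squares line through (time, health); return the time remaining
    until the fitted trend crosses `threshold` (None if not degrading)."""
    n = len(times)
    tbar = sum(times) / n
    hbar = sum(health) / n
    slope = sum((t - tbar) * (h - hbar) for t, h in zip(times, health)) / \
            sum((t - tbar) ** 2 for t in times)
    intercept = hbar - slope * tbar
    if slope <= 0:
        return None  # indicator flat or improving: no failure trend to project
    t_fail = (threshold - intercept) / slope
    return max(0.0, t_fail - times[-1])

times = list(range(10))                    # observation times (hours)
health = [1.0 + 0.5 * t for t in times]    # RMS rising linearly toward failure
print(estimate_rul(times, health, threshold=10.0))  # 9.0 hours remaining
```

In practice the threshold comes from equipment specifications or historical run-to-failure data, and the extrapolation carries a confidence interval rather than a point estimate.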

Posted 1 week ago

Apply

2.0 - 3.0 years

12 - 22 Lacs

bengaluru

Work from Office

Job description An RP Sanjiv Goenka Group company. Firstsource is a leading provider of customized Business Process Management (BPM) services. We are trusted custodians and long-term partners to 100+ leading brands with a presence in the US, the UK, India, and the Philippines. Our rightshore delivery model offers solutions covering the complete customer lifecycle across Healthcare, Telecommunications & Media, and Banking, Financial Services and Insurance verticals. Our clientele includes Fortune 500 & FTSE 100 companies. Collaborate with the Fraud Technical Lead to design and optimize fraud detection rules using insights derived from large-scale data analysis. Investigate emerging fraud trends by interrogating data across multiple systems, identifying patterns and proposing actionable countermeasures. Respond to business-critical questions by developing advanced SQL queries and customized reports. You'll go beyond the numbers to deliver clear, meaningful insights to stakeholders across the business. Support system quality assurance through user acceptance testing of fraud platform upgrades and enhancements, ensuring all changes meet functional requirements and are free of defects. Work with internal support teams and external vendors to resolve system incidents, applying commercial judgement to prioritize and escalate issues appropriately. Key Skills & Experience Advanced SQL proficiency: confident writing complex queries involving joins, subqueries, CTEs, and aggregate functions. Experience working with large datasets in an enterprise data warehouse environment (e.g., Oracle, Google Cloud Platform). Strong analytical mindset with the ability to translate business problems into data-driven solutions. Comfortable designing and testing fraud detection logic based on behavioral and transactional data. Highly numerate, ideally with experience in a data analysis or fraud analytics role.
Previous fraud experience is advantageous but not essential; technical and analytical capability is key. Demonstrates resilience and adaptability when faced with competing priorities. Disclaimer: Firstsource follows a fair, transparent, and merit-based hiring process. We never ask for money at any stage. Beware of fraudulent offers and always verify through our official channels or @firstsource.com email addresses.
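As an illustration of the advanced SQL this role calls for (CTEs, joins, aggregates), a common fraud pattern is a velocity check: flag accounts with many transactions in a short window. The schema, data, and thresholds below are invented; sqlite3 stands in for the warehouse named in the listing:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE accounts (id INTEGER PRIMARY KEY, holder TEXT);
CREATE TABLE txns (account_id INTEGER, ts INTEGER, amount REAL);
INSERT INTO accounts VALUES (1, 'alice'), (2, 'bob');
INSERT INTO txns VALUES
  (1, 100, 20.0), (1, 110, 25.0), (1, 115, 30.0), (1, 118, 22.0),
  (2, 100, 500.0);
""")

# The CTE aggregates per-account transaction velocity inside a time window;
# the join brings back the account holder for reporting.
query = """
WITH recent AS (
    SELECT account_id, COUNT(*) AS n_txns, SUM(amount) AS total
    FROM txns
    WHERE ts BETWEEN 100 AND 120
    GROUP BY account_id
)
SELECT a.holder, r.n_txns, r.total
FROM recent r
JOIN accounts a ON a.id = r.account_id
WHERE r.n_txns >= 3          -- many transactions in a short window
ORDER BY r.n_txns DESC;
"""
print(conn.execute(query).fetchall())  # [('alice', 4, 97.0)]
```

The same shape generalizes to the behavioral rules the role describes: swap the window, the grouping key, or the aggregate to express a different fraud signal.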

Posted 2 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
