Get alerts for new jobs matching your selected skills, preferred locations, and experience range. Manage Job Alerts
12.0 - 16.0 years
0 Lacs
noida, uttar pradesh
On-site
Microsoft is a company where passionate innovators come to collaborate, envision what can be, and take their careers to levels they cannot achieve anywhere else. This is a world of more possibilities, more innovation, and more openness in a cloud-enabled world. The Business & Industry Copilots group is a rapidly growing organization responsible for the Microsoft Dynamics 365 suite of products, Power Apps, Power Automate, Dataverse, AI Builder, Microsoft Industry Solution, and more. Microsoft is considered one of the leaders in Software as a Service in the world of business applications, and this organization is at the heart of how business applications are designed and delivered.

This is an exciting time to join our Customer Experience (CXP) group and work on something highly strategic to Microsoft. The goal of CXP Engineering is to build the next generation of our applications running on Dynamics 365, AI, Copilot, and several other Microsoft cloud services to drive AI transformation across the Marketing, Sales, Services, and Support organizations within Microsoft. We innovate quickly and collaborate closely with our partners and customers in an agile, high-energy environment. Leveraging the scalability and value of Azure and Power Platform, we ensure our solutions are robust and efficient. Our organization's implementation acts as a reference architecture for large companies and helps drive product capabilities. If the opportunity to collaborate with a diverse engineering team on enabling end-to-end business scenarios using cutting-edge technologies, and to solve challenging problems for large-scale 24x7 business SaaS applications, excites you, please come and talk to us!

We are hiring a passionate Principal SW Engineering Manager to lead a team of highly motivated and talented software developers building highly scalable data platforms and delivering services and experiences that empower Microsoft's customer, seller, and partner ecosystem to be successful.
This is a unique opportunity to use your leadership skills and experience in building core technologies that will directly affect the future of Microsoft on the cloud. In this position, you will be part of a fun-loving, diverse team that seeks challenges, loves learning, and values teamwork. You will collaborate with team members and partners to build high-quality, innovative data platforms with full-stack data solutions using the latest technologies in a dynamic and agile environment, and you will have opportunities to anticipate the team's future technical needs and provide technical leadership that keeps raising the bar over our competition. We use industry-standard technology: C#, JavaScript/TypeScript, HTML5, ETL/ELT, data warehousing, and/or Business Intelligence development.

Responsibilities

As a leader of the engineering team, you will be responsible for the following:
- Build and lead a world-class data engineering team.
- Be passionate about technology and obsessed with customer needs.
- Champion data-driven decisions for feature identification, prioritization, and delivery.
- Manage multiple projects, including timelines, customer interaction, feature tradeoffs, etc.
- Deliver on an ambitious product and services roadmap, including building new services on top of the vast amount of data collected by our batch and near-real-time data engines.
- Design and architect internet-scale, reliable services.
- Leverage knowledge of machine learning (ML) models to select appropriate solutions for business objectives.
- Communicate effectively and build relationships with our partner teams and stakeholders.
- Help shape our long-term architecture and technology choices across the full client and services stack.
- Understand the talent needs of the team and help recruit new talent.
- Mentor and grow other engineers to improve efficiency and productivity.
- Experiment with and recommend new technologies that simplify or improve the tech stack.
- Work to help build an inclusive working environment.

Qualifications

Basic Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field.
- 12+ years of experience building high-scale enterprise Business Intelligence and data engineering solutions.
- 3+ years of management experience leading a high-performance engineering team.
- Proficient in designing and developing distributed systems on cloud platforms.
- Able to plan work and work to a plan, adapting as necessary in a rapidly evolving environment.
- Experience using a variety of data stores, including ETL/ELT, data warehouses, RDBMS, in-memory caches, and document databases.
- Experience using ML, anomaly detection, predictive analysis, and exploratory data analysis.
- A strong understanding of the value of data, data exploration, and the benefits of a data-driven organizational culture.
- Strong communication skills and proficiency with executive communications.
- Demonstrated ability to effectively lead and operate in a cross-functional, global organization.

Preferred Qualifications:
- Prior experience as an engineering site leader is a strong plus.
- Proven success in recruiting and scaling engineering organizations effectively.
- Demonstrated ability to provide technical leadership to teams, with experience managing large-scale data engineering projects.
- Hands-on experience working with large data sets using tools such as SQL, Databricks, PySparkSQL, Synapse, Azure Data Factory, or similar technologies.
- Expertise in one or more of the following areas: AI and Machine Learning.
- Experience with Business Intelligence or data visualization tools, particularly Power BI, is highly beneficial.
Posted 1 day ago
6.0 - 10.0 years
0 Lacs
chennai, tamil nadu
On-site
Logitech is the Sweet Spot for people who want their actions to have a positive global impact while having the flexibility to do it in their own way. As a Fraud Detection QA Specialist, you will play a pivotal role in safeguarding our organization's warranty policy by preventing AI-driven attacks and curbing fraudulent claims. In this dynamic role, you will work closely with cross-functional teams to implement and enhance fraud detection systems, ensuring the integrity of our warranty processes.

Experience: 6 to 10 years

Job Responsibilities:
- Develop, implement, and continuously improve fraud prevention strategies in collaboration with data scientists, analysts, and IT professionals.
- Stay abreast of emerging AI-driven attack methods and proactively adjust fraud prevention measures.
- Conduct rigorous quality assurance testing on fraud detection algorithms and systems to identify vulnerabilities and potential points of exploitation.
- Work closely with development teams to implement robust testing protocols and ensure the effectiveness of countermeasures.
- Utilize data analytics and anomaly detection techniques to identify unusual patterns or behaviors indicative of fraudulent claims.
- Collaborate with data scientists to refine models for improved accuracy in detecting anomalies.
- Ensure adherence to warranty policies and guidelines, identifying and addressing any deviations or suspicious activities.
- Provide recommendations for policy enhancements to mitigate future risks.
- Conduct thorough investigations into suspected fraudulent claims, document findings, and escalate issues as necessary.
- Generate comprehensive reports on detected fraud and trends, and recommend improvements to senior management.
- Collaborate with internal stakeholders to strengthen fraud prevention processes and ensure a cohesive approach.
- Develop and deliver training programs to educate relevant teams on fraud detection best practices and evolving threats.
- Foster a culture of awareness and vigilance regarding fraudulent activities across the organization.
- Stay informed about industry best practices and advancements in fraud detection technologies, incorporating improvements into existing systems.
- Proactively identify areas for continuous improvement in fraud detection processes.

Logitech offers comprehensive and competitive benefits packages and flexible working environments designed to support your wellbeing and that of your loved ones. If you believe you are the right candidate for the opportunity, we encourage you to apply. Logitech values diversity and inclusivity and celebrates individual differences. We support a culture of good health that encourages physical, financial, emotional, intellectual, and social wellbeing. We provide a variety of benefits that vary based on location. If you need assistance with the application process or require alternative methods for applying, please contact us toll-free at +1-510-713-4866 for prompt assistance.
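The anomaly-detection side of the role, flagging unusual claim patterns, can be illustrated with a minimal sketch: a Tukey interquartile-range (IQR) check over daily claim counts. The counts and the 1.5x fence below are purely hypothetical, not Logitech's actual method.

```python
def iqr_outliers(values, k=1.5):
    """Flag values outside [Q1 - k*IQR, Q3 + k*IQR] (Tukey's fences)."""
    xs = sorted(values)
    n = len(xs)

    def quantile(q):
        # linear-interpolation quantile over the sorted sample
        pos = q * (n - 1)
        lo, hi = int(pos), min(int(pos) + 1, n - 1)
        return xs[lo] + (pos - lo) * (xs[hi] - xs[lo])

    q1, q3 = quantile(0.25), quantile(0.75)
    iqr = q3 - q1
    lo_fence, hi_fence = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < lo_fence or v > hi_fence]

# Hypothetical daily warranty-claim counts; the 97 spike is far outside range.
claims = [12, 15, 11, 14, 13, 97, 12, 16]
print(iqr_outliers(claims))  # -> [97]
```

A real system would compute fences per product line or region and feed flagged days into the investigation workflow described above.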
Posted 2 days ago
8.0 - 12.0 years
0 Lacs
karnataka
On-site
The role requires strategic thinking and technical expertise, with a strong background in financial crime detection and prevention, particularly using advanced analytical methodologies. You will be responsible for designing, developing, and deploying analytics/models to detect suspicious activities and financial crime. The ideal candidate will possess technical expertise, a strategic mindset for enhancing Transaction Monitoring effectiveness, and good familiarity with compliance regulations in the financial sector.

You will be expected to design, develop, and deploy models for anomaly detection, behavior profiling, network analysis, and predictive scoring for Transaction Monitoring solutions. Additionally, you will act as a single point of contact for assigned AML Transaction Monitoring modeling matters. Your responsibilities will include data exploration, feature engineering, and ensuring that models are accurate, efficient, and scalable. Furthermore, you will support analytical processes to enhance Transaction Monitoring red-flag monitoring and optimize cases for investigation through AI/ML models and analytical processes. You will also work on improving processes such as threshold tuning, reconciliation, segmentation, and optimization associated with the Transaction Monitoring function across various products. As the role holder, you will be accountable for ensuring that all processes/models follow the Bank's governance process, including Model Risk Policy Governance and Risk-Based Rule review.

Key Responsibilities:
- Conceptualize, design, support, and align relevant processes and controls to industry best practices, and address any compliance gaps.
- Mentor and conduct training programs to bring new joiners and the team up to speed on new business requirements.
- Provide endorsement for changes or remediation activities impacting AML Optimization models and engage with relevant stakeholders to deploy the changes to production.
- Work on processes such as threshold tuning, reconciliation, segmentation, and optimization associated with the Transaction Monitoring function across various products.
- Work towards the collective objectives and scorecard of the business function, published periodically in the form of job and performance objectives.

Skills and Experience:
- Provide coaching to peers and new hires to ensure they are highly engaged and performing to their potential.
- Promote and embed a culture of openness, trust, and risk awareness, where ethical, legal, regulatory, and policy-compliant conduct is the norm.
- Apply Group and FCC policies and processes to manage risks effectively.
- Engage with Business/Segment stakeholders to understand emerging risks and ensure they are suitably addressed through monitoring coverage.
- Attend relevant business/segment/product-related working group meetings and ensure tracking and remediation of surveillance- and investigations-related regulatory findings.

Qualifications:
- 8+ years of hands-on experience in Transaction Monitoring design and development, with at least 5 years focused on financial crime threat risk mitigation.
- Strong background in deploying models within a TM or compliance environment, with a solid understanding of AML/CFT regulations.
- Strong coding skills in Python, R, and SQL, and familiarity with data engineering practices for model integration.
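Threshold tuning of the kind mentioned above often starts from a target alert rate: choose the score cut-off that keeps alert volume at a level investigators can work. A minimal sketch follows; the scores and the 20% target are illustrative, not a bank's actual calibration.

```python
def tune_threshold(scores, target_alert_rate):
    """Pick the smallest threshold such that at most `target_alert_rate`
    of scored transactions fall at or above it (i.e. generate alerts)."""
    xs = sorted(scores, reverse=True)
    max_alerts = int(len(xs) * target_alert_rate)
    if max_alerts == 0:
        return xs[0] + 1  # threshold above every score: zero alerts
    return xs[max_alerts - 1]

# Hypothetical model risk scores for 10 transactions.
scores = [0.1, 0.2, 0.15, 0.9, 0.3, 0.05, 0.8, 0.25, 0.12, 0.4]
thr = tune_threshold(scores, target_alert_rate=0.2)
alerts = [s for s in scores if s >= thr]
print(thr, alerts)  # -> 0.8 [0.9, 0.8]
```

In practice the cut-off would be validated against labelled investigation outcomes (below-the-line testing) rather than volume alone.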
Posted 3 days ago
4.0 - 7.0 years
9 - 13 Lacs
Bengaluru
Work from Office
Role Overview: We are seeking a highly skilled and experienced Senior Data Scientist to join our innovative Data Science team. Reporting to the Data Science Director, you will contribute to the development of advanced Machine Learning (ML) solutions for cybersecurity challenges, including threat detection, malware analysis, and anomaly detection. Your expertise will help drive end-to-end ML product development, from data preparation to deployment, while ensuring seamless integration into our core products.

What You Will Do: As a Senior Data Scientist, you will work in a team of smart data scientists, reporting to the Data Science Director, that does full-lifecycle, full-stack Machine Learning product development, from feature engineering to model building and evaluation. Our team's use cases include, but are not limited to, threat detection, threat hunting, malware detection, anomaly detection, and MLOps. You will work with other Senior Data Scientists in the team to execute data science projects. You will identify issues with models running in production and resolve them; this may require retraining models from scratch, adding new features to a model, or setting up automated model training and deployment pipelines. These models will be integrated into popular products of the company for maximum impact.
About You:
- A Master's degree or equivalent in Machine Learning, Computer Science, Electrical Engineering, Mathematics, or Statistics.
- In-depth understanding of all major Machine Learning and Deep Learning algorithms, both supervised and unsupervised.
- Passion for leveraging ML/AI to solve real-world business problems.
- 4-7 years of industry experience in the field of Data Science/Machine Learning.
- 4-7 years of industry experience in one or more machine/deep learning frameworks.
- 4-7 years of industry experience with Python/PySpark and SQL.
- Experience solving multiple business problems using Machine Learning.
- Experience with various public cloud services (such as AWS, Google, Azure) and ML automation platforms (such as MLflow).
- Able to drive an end-to-end machine learning project with limited guidance.
- Solid computer science foundation.
- Good written and verbal communication.
- A Ph.D. in Cyber Security/Machine Learning or a related field will be an added advantage.
- Prior experience solving cyber security problems using machine learning and familiarity with the security domain will be a plus.

Company Benefits and Perks: We believe that the best solutions are developed by teams who embrace each other's unique experiences, skills, and abilities. We work hard to create a dynamic workforce where we encourage everyone to bring their authentic selves to work every day. We offer a variety of social programs, flexible work hours, and family-friendly benefits to all of our employees.
- Retirement Plans
- Medical, Dental and Vision Coverage
- Paid Time Off
- Paid Parental Leave
- Support for Community Involvement
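As a concrete, purely illustrative example of the unsupervised anomaly detection this role covers, the sketch below fits scikit-learn's IsolationForest on synthetic two-feature telemetry and flags injected outliers. It is not the team's production pipeline; the data, contamination rate, and features are stand-ins.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(200, 2))  # baseline behaviour
attacks = np.array([[8.0, 8.0], [-7.5, 9.0]])           # injected outliers
X = np.vstack([normal, attacks])

# contamination sets the expected anomaly fraction; here ~1% of 202 points.
model = IsolationForest(contamination=0.01, random_state=0).fit(X)
labels = model.predict(X)  # +1 = inlier, -1 = anomaly
print(int((labels == -1).sum()))  # a handful flagged, incl. both injections
```

In a production setting the features would be engineered from security telemetry, and retraining/deployment would be automated as the posting describes.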
Posted 3 days ago
3.0 - 7.0 years
0 Lacs
punjab
On-site
Offshore Software Solutions is looking for a Data Scientist with 3-5 years of experience in analyzing health data from devices like smartwatches and fitness trackers. The role involves developing custom machine learning models using TensorFlow to derive actionable insights for improving health monitoring and user well-being.

Your responsibilities will include analyzing health data from wearables, developing and deploying machine learning models, collaborating with cross-functional teams, preprocessing data, ensuring compliance with privacy standards, optimizing models for accuracy, and presenting findings to stakeholders. Additionally, you will stay updated on the latest machine learning research related to health data from wearable devices.

Requirements for this role include a Bachelor's or Master's degree in Data Science or a related field, 3+ years of data analysis experience, expertise in TensorFlow, knowledge of deep learning and neural networks, proficiency in Python and data science libraries, familiarity with health data metrics, experience with data visualization tools and databases, and strong problem-solving skills. Preferred qualifications include experience working with wearable device APIs, an understanding of time-series analysis and anomaly detection, experience in edge computing, and good communication skills for effective collaboration.

If you are passionate about leveraging data science to improve health monitoring and user well-being, and possess the required skills and qualifications, we encourage you to apply for this exciting opportunity at Offshore Software Solutions in Mohali, Punjab.
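The posting calls for TensorFlow models, but the core idea behind time-series anomaly detection on wearable data can be shown library-free: a rolling z-score that flags readings far from the recent baseline. The heart-rate values, window size, and threshold below are hypothetical.

```python
import statistics

def zscore_anomalies(series, window=5, threshold=3.0):
    """Flag indices where a reading deviates more than `threshold`
    standard deviations from the mean of the preceding `window` readings."""
    flagged = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu = statistics.mean(hist)
        sd = statistics.pstdev(hist) or 1e-9  # avoid div-by-zero on flat data
        if abs(series[i] - mu) / sd > threshold:
            flagged.append(i)
    return flagged

# Hypothetical resting heart-rate stream (bpm); the 180 spike is anomalous.
hr = [62, 64, 63, 65, 61, 63, 180, 64, 62]
print(zscore_anomalies(hr))  # -> [6]
```

A learned model (e.g. an autoencoder or LSTM in TensorFlow) would replace the fixed rule, but the evaluation logic, comparing a reading to its recent context, is the same.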
Posted 4 days ago
4.0 - 8.0 years
0 Lacs
noida, uttar pradesh
On-site
NTT DATA is looking for a Systems Integration Specialist to join their team in Bangalore, Karnataka (IN-KA), India. As a Systems Integration Specialist, your main responsibility will be to build machine learning models for predicting asset health, detecting process anomalies, and optimizing operations. You will also work on sensor data pre-processing, model training, and inference deployment. Collaboration with simulation engineers on closed-loop systems will also be part of your role.

The ideal candidate should have 4-6 years of experience and possess the following skills:
- Experience in time-series modeling, regression/classification, and anomaly detection.
- Familiarity with Python, scikit-learn, and TensorFlow/PyTorch.
- Experience with MLOps on Azure ML, Databricks, or similar.
- Understanding of manufacturing KPIs such as OEE, MTBF, and cycle time.

NTT DATA is a trusted global innovator of business and technology services with a commitment to helping clients innovate, optimize, and transform for long-term success. With a diverse team of experts in more than 50 countries and a strong partner ecosystem, NTT DATA offers services including business and technology consulting, data and artificial intelligence solutions, industry-specific offerings, and the development, implementation, and management of applications, infrastructure, and connectivity. As part of the NTT Group, which invests significantly in R&D, NTT DATA is at the forefront of digital and AI infrastructure globally. If you are an exceptional, innovative, and passionate individual looking to be part of an inclusive and forward-thinking organization, consider applying to NTT DATA for the Systems Integration Specialist role.
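Of the manufacturing KPIs listed above, OEE (Overall Equipment Effectiveness) has a standard closed form: availability x performance x quality. A small sketch with hypothetical shift numbers:

```python
def oee(planned_time, run_time, ideal_cycle_time, total_count, good_count):
    """OEE = Availability x Performance x Quality (standard definition).
    Times share one unit (e.g. minutes); counts are parts produced."""
    availability = run_time / planned_time                    # uptime ratio
    performance = (ideal_cycle_time * total_count) / run_time # speed ratio
    quality = good_count / total_count                        # yield ratio
    return availability * performance * quality

# Hypothetical shift: 480 min planned, 420 min actually running,
# ideal cycle 0.5 min/part, 800 parts made, 760 of them good.
score = oee(480, 420, 0.5, 800, 760)
print(round(score, 3))  # -> 0.792
```

Models predicting asset health feed this directly: less unplanned downtime raises the availability term, which is usually the largest OEE loss.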
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
chennai, tamil nadu
On-site
The role of Data Scientist - Clinical Data Extraction & AI Integration in our healthcare technology team requires an experienced individual with 3-6 years of experience. As a Data Scientist in this role, you will focus primarily on medical document processing and data extraction systems. You will have the opportunity to work with advanced AI technologies to create solutions that enhance the extraction of crucial information from clinical documents, thereby improving healthcare data workflows and patient care outcomes.

Your key responsibilities will include designing and implementing statistical models for medical data quality assessment, and developing predictive algorithms for encounter classification and validation. You will also be responsible for building machine learning pipelines for document pattern recognition, creating data-driven insights from clinical document structures, and implementing feature engineering for medical terminology extraction. Furthermore, you will apply natural language processing (NLP) techniques to clinical text, develop statistical validation frameworks for extracted medical data, and build anomaly detection systems for medical document processing. Additionally, you will create predictive models for discharge date and encounter duration estimation, and implement clustering algorithms for provider and encounter classification.

In terms of AI and LLM integration, you will be expected to integrate and optimize Large Language Models via AWS Bedrock and API services, design and refine AI prompts for clinical content extraction with high accuracy, and implement fallback logic and error handling for AI-powered extraction systems. You will also develop pattern matching algorithms for medical terminology and create validation layers for AI-extracted medical information. Expertise in the healthcare domain is crucial for this role.
You will work closely with medical document structures, implement healthcare-specific validation rules, handle medical terminology extraction, and conduct clinical context analysis. Ensuring HIPAA compliance and adhering to data security best practices will also be part of your responsibilities.

Proficiency in Python 3.8+, R, SQL, and JSON is required, along with familiarity with data science tools like pandas, numpy, scipy, scikit-learn, spaCy, and NLTK. Experience with ML frameworks including TensorFlow, PyTorch, transformers, and Hugging Face, and with visualization tools like matplotlib, seaborn, plotly, Tableau, and Power BI, is desirable. Knowledge of AI platforms such as AWS Bedrock, Anthropic Claude, and OpenAI APIs, and experience with cloud services like AWS (SageMaker, S3, Lambda, Bedrock), will be advantageous. Familiarity with research tools like Jupyter notebooks, Git, Docker, and MLflow is also beneficial for this role.
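A rule-based layer of the kind described above (pattern matching for fields like discharge dates, with validation around LLM output) can be sketched with plain regular expressions. The patterns, field names, and note text below are illustrative stand-ins, not the production extraction logic.

```python
import re

# Hypothetical patterns for two simple clinical fields.
PATTERNS = {
    "discharge_date": re.compile(r"discharged?\s+on\s+(\d{4}-\d{2}-\d{2})", re.I),
    "heart_rate": re.compile(r"\bHR[:\s]+(\d{2,3})\b"),
}

def extract_fields(note: str) -> dict:
    """Return whichever fields can be matched in the free-text note."""
    out = {}
    for field, pat in PATTERNS.items():
        m = pat.search(note)
        if m:
            out[field] = m.group(1)
    return out

note = "Patient stable, HR: 72. Discharged on 2024-03-15 with follow-up."
print(extract_fields(note))
# -> {'discharge_date': '2024-03-15', 'heart_rate': '72'}
```

In the architecture the posting describes, a layer like this would validate or backstop LLM-extracted values (fallback logic) rather than replace them.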
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
karnataka
On-site
We are looking for a skilled Data Engineer to join our team, working on end-to-end data engineering and data science use cases. The ideal candidate will have strong expertise in Python or Scala, Spark (Databricks), and SQL, building scalable and efficient data pipelines on Azure.

Responsibilities include:
- Designing, building, and maintaining scalable ETL/ELT data pipelines using Azure Data Factory, Databricks, and Spark.
- Developing and optimizing data workflows using SQL and Python or Scala for large-scale data processing and transformation.
- Implementing performance tuning and optimization strategies for data pipelines and Spark jobs to ensure efficient data handling.
- Collaborating with data engineers to support feature engineering, model deployment, and end-to-end data engineering workflows.
- Ensuring data quality and integrity by implementing validation, error-handling, and monitoring mechanisms.
- Working with structured and unstructured data using technologies such as Delta Lake and Parquet within a Big Data ecosystem.
- Contributing to MLOps practices, including integrating ML pipelines, managing model versioning, and supporting CI/CD processes.

Primary skills required:
- Proficiency in the Azure data platform (Data Factory, Databricks).
- Strong skills in SQL and either Python or Scala for data manipulation.
- Experience with ETL/ELT pipelines and data transformations.
- Familiarity with Big Data technologies (Spark, Delta Lake, Parquet).
- Expertise in data pipeline optimization and performance tuning.
- Experience in feature engineering and model deployment.
- Strong troubleshooting and problem-solving skills.
- Experience with data quality checks and validation.
Nice-to-have skills include exposure to NLP, time-series forecasting, and anomaly detection; familiarity with data governance frameworks and compliance practices; ML and MLOps fundamentals; experience supporting ML pipelines with efficient data workflows; and knowledge of MLOps practices (CI/CD, model monitoring, versioning).

At Tesco, we are committed to providing the best for our colleagues. Total Rewards offered at Tesco are determined by four principles: simple, fair, competitive, and sustainable. Colleagues are entitled to 30 days of leave (18 days of earned leave, 12 days of casual/sick leave) and 10 national and festival holidays. Tesco promotes programs supporting health and wellness, including insurance for colleagues and their families, mental health support, financial coaching, and physical wellbeing facilities on campus.

Tesco in Bengaluru is a multi-disciplinary team serving customers, communities, and the planet. The goal is to create a sustainable competitive advantage for Tesco by standardizing processes, delivering cost savings, enabling agility through technological solutions, and empowering colleagues. The Tesco Technology team consists of over 5,000 experts spread across the UK, Poland, Hungary, the Czech Republic, and India, dedicated to various roles including Engineering, Product, Programme, Service Desk and Operations, Systems Engineering, Security & Capability, Data Science, and others.
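The validation and error-handling mechanisms mentioned in the responsibilities can be sketched as row-level rules that split a batch into valid rows and rejects with reasons. In practice this would run inside a Spark/Databricks job; the schema and rules here are hypothetical, library-free stand-ins.

```python
# Hypothetical per-field validation rules for an orders feed.
RULES = {
    "order_id": lambda v: isinstance(v, int) and v > 0,
    "amount": lambda v: isinstance(v, (int, float)) and v >= 0,
    "country": lambda v: v in {"UK", "IN", "PL", "CZ", "HU"},
}

def validate(rows):
    """Split rows into valid rows and (row, failed-fields) rejects."""
    valid, rejected = [], []
    for row in rows:
        errors = [f for f, ok in RULES.items() if not ok(row.get(f))]
        if errors:
            rejected.append((row, errors))
        else:
            valid.append(row)
    return valid, rejected

rows = [
    {"order_id": 1, "amount": 9.5, "country": "UK"},
    {"order_id": -3, "amount": 4.0, "country": "IN"},  # bad id
    {"order_id": 2, "amount": 1.0, "country": "XX"},   # bad country
]
valid, rejected = validate(rows)
print(len(valid), [e for _, e in rejected])  # -> 1 [['order_id'], ['country']]
```

Routing rejects to a quarantine table with their failure reasons, rather than dropping them, is what makes the monitoring side of the pipeline possible.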
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
chandigarh
On-site
You will be joining the Microsoft Security organization, where security is a top priority due to the increasing digital threats, regulatory scrutiny, and complex estate environments. Microsoft Security aims to make the world a safer place by providing end-to-end security solutions to empower users, customers, and developers. As a Senior Data Scientist, you will be instrumental in enhancing our security posture by developing innovative models to detect and predict security threats. This role requires a deep understanding of data science, machine learning, and cybersecurity, along with the ability to analyze large datasets and collaborate with security experts to address emerging threats and vulnerabilities. Your responsibilities will include understanding complex cybersecurity and business problems, translating them into well-defined data science problems, and building scalable solutions. You will develop and deploy production-grade AI/ML systems for real-time threat detection, analyze large datasets to identify security risks, and collaborate with security experts to incorporate domain knowledge into models. Additionally, you will lead the design and implementation of data-driven security solutions, mentor junior data scientists, and communicate findings to stakeholders. To qualify for this role, you should have experience in developing and deploying machine learning models for security applications, preferably in a Big Data or cybersecurity environment. You should be familiar with the Azure tech stack, have knowledge of anomaly detection and fraud detection, and possess expertise in programming languages such as Python, R, or Scala. A Doctorate or Master's Degree in a related field, along with 5+ years of data science experience, is preferred. Strong analytical, problem-solving, and communication skills are essential, as well as proficiency in machine learning frameworks and cybersecurity principles. 
Preferred qualifications include additional experience in developing machine learning models for security applications, familiarity with data science workloads on the Azure tech stack, and contributions to the field of data science or cybersecurity. Your ability to drive large-scale system designs, think creatively, and translate complex data into actionable insights will be crucial in this role.
Posted 1 week ago
8.0 - 12.0 years
18 - 33 Lacs
Pune
Hybrid
Role & responsibilities

Job Description: As a Senior Data Scientist specializing in Fraud and Anomaly Detection, you will play a pivotal role in developing and implementing advanced models and algorithms to detect fraudulent activities and anomalies within complex datasets. You will leverage your expertise in Graph Anomaly Detection and Graph Neural Networks to enhance our anomaly detection capabilities.

Key Responsibilities:

Experience:
- Hands-on experience in the field of anomaly/fraud/outlier detection and the connected technologies is required.

Advanced Model Development and Implementation:
- Design and develop sophisticated machine learning models specifically tailored for fraud and anomaly detection using state-of-the-art techniques.
- Implement Graph Neural Networks (GNNs) using PyTorch Geometric to capture complex relationships and dependencies in graph-structured data.
- Develop algorithms for Graph Anomaly Detection, focusing on node, edge, and subgraph anomaly identification.
- Be familiar with various Graph Neural Network algorithms such as GCN, GraphSAGE, GAT, GINE, graph message passing, and heterogeneous graph learning.
- Have a very good understanding of various NLP techniques and models, such as BERT, text classification, text extraction, LLMs, and LLM fine-tuning.

Data Analysis and Feature Engineering:
- Perform in-depth exploratory data analysis (EDA) to uncover hidden patterns and insights related to fraudulent activities.
- Engineer features from large-scale datasets, including constructing graph-based features such as node embeddings, edge attributes, and graph metrics.
- Utilize techniques such as spectral clustering and community detection to enhance feature representation in graph data.
- Utilize various NLP techniques for data cleaning, visualization, data pre-processing, and data preparation.

Research and Innovation:
- Conduct cutting-edge research on emerging methodologies in fraud detection, anomaly detection, and graph-based machine learning.
- Experiment with novel architectures and algorithms, such as attention mechanisms in GNNs, to improve detection capabilities.
- Prototype and evaluate new approaches using rigorous experimental design and statistical validation.

Collaboration and Communication:
- Work closely with other data scientists to ensure seamless integration of models into production environments, optimizing for scalability and performance.
- Collaborate with software developers to implement efficient data pipelines and real-time processing systems.

Technical Skills:

Programming and Libraries:
- Proficiency in Python, with extensive experience in libraries such as PyTorch, PyTorch Geometric, TensorFlow, Keras, Transformers, and scikit-learn.
- Working knowledge of graph-based machine learning and NLP is required.
- Experience with graph processing frameworks and libraries such as NetworkX and DGL (Deep Graph Library).

Machine Learning and Deep Learning:
- Strong understanding of machine learning algorithms, including supervised, unsupervised, and semi-supervised learning techniques.
- Expertise in deep learning architectures, particularly those applicable to graph data, such as Graph Neural Networks (e.g., GAT, GraphSAGE).
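In production this role points to PyTorch Geometric, but a single GCN propagation step, H = ReLU(D^(-1/2) (A + I) D^(-1/2) X W) in the Kipf & Welling formulation, can be sketched in plain numpy on a toy transaction graph. The features and weights below are random stand-ins, not learned values.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One GCN propagation step: normalize the self-looped adjacency,
    aggregate neighbour features, project with W, apply ReLU."""
    A_hat = A + np.eye(A.shape[0])        # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(d ** -0.5)       # symmetric degree normalization
    H = D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W
    return np.maximum(H, 0)               # ReLU

# Toy 4-node graph (e.g. accounts linked by transfers), as an adjacency matrix.
rng = np.random.default_rng(42)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 3))  # 3 input features per node (random stand-ins)
W = rng.normal(size=(3, 2))  # projection to 2 hidden features
H = gcn_layer(A, X, W)
print(H.shape)  # -> (4, 2)
```

Stacking such layers (as `GCNConv` does in PyTorch Geometric) lets each node's embedding absorb multi-hop neighbourhood structure, which is what makes graph models useful for ring- or mule-style fraud patterns.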
Posted 1 week ago
1.0 - 5.0 years
0 Lacs
karnataka
On-site
As an Enterprise IT Security Analyst for Cloud and Endpoints, you will play a crucial role in ensuring the security of our cloud environments, specifically across AWS or Azure. Your primary responsibilities will revolve around collaborating with DevOps and IT teams to implement and oversee security measures, identify and mitigate risks, and ensure compliance with industry standards.

Your key responsibilities will include:
- Utilizing Microsoft Defender for Cloud and EDR tools like SentinelOne, CrowdStrike, or Microsoft Defender for Endpoint to enhance security measures.
- Applying AI coding techniques for anomaly detection, threat prediction, and automated response systems.
- Managing Microsoft Defender for Cloud to safeguard Azure environments.
- Leveraging Endpoint Detection and Response (EDR) tools for threat detection and response.
- Designing, implementing, and managing security solutions across AWS, Azure, and GCP.
- Employing AWS security capabilities such as AWS Inspector, WAF, GuardDuty, and IAM for cloud infrastructure protection.
- Implementing Azure security features including Azure Security Center, Azure Sentinel, and Azure AD.
- Managing security configurations and policies across GCP using tools like Google Cloud Armor, Security Command Center, and IAM.
- Conducting regular security assessments and audits to ensure vulnerability identification and compliance.
- Developing and maintaining security policies, procedures, and documentation.
- Collaborating with cross-functional teams to integrate security best practices into the development lifecycle.
- Monitoring and responding to security incidents and alerts.
- Implementing and managing Cloud Security Posture Management (CSPM) solutions with tools like Prisma Cloud, Dome9, and AWS Security Hub to continuously enhance cloud security posture.
- Utilizing Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, and ARM templates for cloud infrastructure automation and management.
Qualifications: Must Have Qualifications: - Bachelor's degree in Computer Science, Information Technology, or a related field. - 1-3 years of experience in cloud security engineering. - Proficiency in AWS security capabilities. - Strong skills in Terraform for Infrastructure as Code (IaC). - Experience with Cloud Security Posture Management (CSPM) tools. - Familiarity with Web Application Firewall (WAF). - Relevant certifications such as CISSP, AWS Certified Security Specialty, or similar. Good to Have Qualifications: - Additional experience with AWS security capabilities. - Strong understanding of cloud security frameworks and best practices. - Proficiency in Infrastructure as Code (IaC) tools like CloudFormation and ARM templates. - Experience with AI coding and applying machine learning techniques to security. - Excellent problem-solving skills and attention to detail. - Strong communication and collaboration skills. This role will be based at The Leela Office on Airport Road, Kodihalli, Bangalore. The position follows a hybrid work model with office presence on Tuesdays, Wednesdays, and Thursdays, and remote work on Mondays and Fridays. The work timings are from 1 PM to 10 PM IST, with cab pickup and drop facility available. Candidates based in Bangalore are preferred.
Posted 1 week ago
5.0 - 8.0 years
15 - 30 Lacs
Bengaluru
Hybrid
Machine Learning Engineer - Python, with proficiency in ML libraries such as scikit-learn, TensorFlow, and PyTorch Experience: 5+ years Location: Fujitsu India R&D Center Position Type: Full-time | R&D About Fujitsu R&D Fujitsu is a multibillion-dollar global corporation with approximately 124,000 professionals in multiple countries. Fujitsu has served as a trusted partner to the worldwide telecommunications industry for more than 85 years. More information about Fujitsu’s network products can be found at: https://www.fujitsu.com/us/products/network/ Job Summary: We are looking for a Senior Software Engineer – AI/ML with deep expertise in developing and deploying cloud-native machine learning solutions. This role demands strong proficiency in Python-based ML libraries, design of scalable microservices, and deep knowledge of NoSQL databases. The ideal candidate will have hands-on experience with ML algorithms, model performance tuning, model accuracy optimization, and proven delivery of AI applications in areas such as anomaly detection and time series forecasting. You will work closely with data scientists, cloud engineers, and product teams to build intelligent systems that operate at scale and deliver real-time insights into complex, high-volume data. Key Responsibilities: Design, develop, and deploy AI/ML microservices using Python in a cloud-native environment. Build scalable pipelines for training, tuning, and serving ML models in production. Integrate and manage NoSQL databases (e.g., MongoDB, ElasticSearch) for efficient storage and retrieval of unstructured or time-series data. Optimize model accuracy, latency, and throughput, including hyperparameter tuning, feature engineering, and profiling model performance. Lead the development of ML-based solutions for anomaly detection, time series forecasting, and predictive analytics. Collaborate with cross-functional teams to translate product requirements into ML-based features.
Apply best practices in model versioning, A/B testing, and continuous training/validation. Ensure high standards of code quality, modularity, and observability in deployed services. Evaluate new tools, technologies, and frameworks for ML lifecycle management and monitoring. Required Skills: 5+ years of experience in backend software development using Python. Strong programming expertise in Python, with proficiency in ML libraries such as scikit-learn, TensorFlow, PyTorch, XGBoost, or LightGBM. Demonstrated experience in implementing anomaly detection algorithms and time series forecasting models (e.g., ARIMA, Prophet, LSTM). Good understanding of AI/ML algorithms like Random Forest, K-Means, Autoencoders, Graph Neural Networks (GNNs), and Louvain for anomaly detection, clustering, and time-series analysis. Experience in building and deploying cloud-native microservices (e.g., on AWS, Azure, GCP). Solid understanding of NoSQL databases like MongoDB and ElasticSearch for storing ML data and time series. Experience with a messaging bus such as Kafka or RabbitMQ. Hands-on experience with model performance tuning, evaluation metrics, and real-world ML system optimization. Familiarity with ML lifecycle tools (e.g., MLflow, Kubeflow, SageMaker, Vertex AI). Understanding of containerization and orchestration (Docker, Kubernetes) for scalable deployment. Proficient in working with Git, CI/CD workflows, and Agile development methodologies. Experience with CI/CD tools such as Jenkins, GitLab CI, or GitHub Actions. Familiarity with Agile methodologies and ticketing systems (JIRA). Nice to Have: Experience applying ML to wireless network optimization. Familiarity with federated learning or edge AI techniques to enable distributed ML across radio/access nodes. Understanding of online learning or reinforcement learning for dynamic network adaptation and control. Education: Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
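As an illustration of the anomaly-detection work this role describes, a minimal sketch (the data and feature choices are hypothetical, assuming NumPy and scikit-learn are available) applies an IsolationForest to a univariate time series, using the raw value plus its gap from a local moving average as features:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative series: steady sensor-style readings with one injected spike
rng = np.random.default_rng(0)
series = rng.normal(loc=10.0, scale=1.0, size=200)
series[120] = 25.0

# Two features per point: the raw value and its gap from a local moving average
window = 10
local_mean = np.convolve(series, np.ones(window) / window, mode="same")
X = np.column_stack([series, series - local_mean])

# IsolationForest scores points by how easily they are isolated; the lowest
# 1% of scores (the contamination fraction) are labeled -1, i.e. anomalous
model = IsolationForest(contamination=0.01, random_state=0).fit(X)
anomalies = np.where(model.predict(X) == -1)[0]
print(anomalies)
```

With this seed, the injected spike at index 120 is among the flagged points; in a production pipeline the same model would score streaming feature vectors rather than a fixed batch.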
Posted 1 week ago
1.0 - 5.0 years
3 - 7 Lacs
Mumbai, Ahmedabad
Work from Office
Job Summary: We are looking for an experienced SAS Visual Investigator (SAS VI) Developer to support our fraud detection, risk management, and compliance initiatives. The ideal candidate will leverage SAS VI to analyze large datasets, identify suspicious patterns, generate alerts, and build visual interfaces for real-time investigation. This role is critical in enhancing our enterprise fraud management capabilities and delivering actionable insights to business and compliance teams. Key Responsibilities: Design, develop, and implement solutions using SAS Visual Investigator for fraud detection, risk management, and anomaly detection. Analyze large and complex datasets to investigate suspicious activities and uncover potential fraudulent behavior. Build and maintain dynamic dashboards, visualizations, and alerts within SAS VI to support real-time decision-making. Integrate SAS VI with other systems, ensuring seamless data flow and operational effectiveness. Collaborate with compliance, IT, and analytics teams to support investigative workflows and reporting needs. Ensure high data quality, integrity, and security through rigorous validation and testing. Continuously monitor, evaluate, and enhance detection models and SAS VI configurations to improve performance. Stay current with evolving industry trends, fraud typologies, and regulatory standards in fraud risk and compliance. Qualifications: Proven hands-on experience with SAS Visual Investigator (SAS VI), preferably in a fraud or risk context. Strong understanding of fraud detection methodologies and compliance practices. Proficiency in data analysis, pattern recognition, and anomaly detection using SAS tools. Experience in developing and maintaining interactive dashboards and alert systems. Ability to work with large datasets and conduct detailed forensic data analysis. Knowledge of the SAS ecosystem including SAS Visual Analytics, SAS Data Integration, or SAS Viya, is a plus. 
Strong communication and documentation skills with the ability to translate findings for non-technical stakeholders. Experience integrating SAS VI with external systems and APIs is advantageous.
Posted 1 week ago
5.0 - 8.0 years
22 - 27 Lacs
Bengaluru
Work from Office
About Zscaler Serving thousands of enterprise customers around the world including 45% of Fortune 500 companies, Zscaler (NASDAQ: ZS) was founded in 2007 with a mission to make the cloud a safe place to do business and a more enjoyable experience for enterprise users. As the operator of the world’s largest security cloud, Zscaler accelerates digital transformation so enterprises can be more agile, efficient, resilient, and secure. The pioneering, AI-powered Zscaler Zero Trust Exchange™ platform, which is found in our SASE and SSE offerings, protects thousands of enterprise customers from cyberattacks and data loss by securely connecting users, devices, and applications in any location. Named a Best Workplace in Technology by Fortune and others, Zscaler fosters an inclusive and supportive culture that is home to some of the brightest minds in the industry. If you thrive in an environment that is fast-paced and collaborative, and you are passionate about building and innovating for the greater good, come make your next move with Zscaler. Our Engineering team built the world's largest cloud security platform from the ground up, and we keep building. With more than 100 patents and big plans for enhancing services and increasing our global footprint, the team has made us and our multitenant architecture today's cloud security leader, with more than 15 million users in 185 countries. Bring your vision and passion to our team of cloud architects, software engineers, security experts, and more who are enabling organizations worldwide to harness speed and agility with a cloud-first strategy. We are looking for a Staff Software Development Engineer to join our Shared Platform Services team.
Reporting to the Director of Software Engineering, you'll be responsible for: Designing, building, and scaling our cloud data analytics platform, responsible for the ingestion, processing, and querying of terabytes of endpoint telemetry Engineering and maintaining a scalable alerting and incident detection engine using Python and workflow orchestrators Creating and managing insightful Grafana dashboards and other visualizations that provide clear, actionable views of our system’s health and performance for engineering, support, and leadership Optimizing our data platform for cost, performance, and reliability. You will own the architecture and mentor other engineers, building a system that is robust and enhanceable by the entire team What We're Looking for (Minimum Qualifications) 5+ years of professional experience in a data engineering, backend, or SRE role with a focus on large-scale data Expert-level proficiency in a query language like SQL or Kusto Query Language (KQL), with proven experience writing complex queries and optimizing their performance Strong programming skills in Python, particularly for data processing, automation, and building data pipelines Hands-on experience with at least one major cloud provider (Azure is highly preferred) Demonstrated experience building and managing systems for the following: big data analytics, time-series monitoring, alerting/anomaly detection, or data visualization What Will Make You Stand Out (Preferred Qualifications) Direct experience with our current stack: Azure Data Explorer (ADX), Grafana, and Apache Airflow Familiarity with infrastructure as code (IaC) tools like Terraform or Bicep #LI-HYBRID #LI-GL2 At Zscaler, we are committed to building a team that reflects the communities we serve and the customers we work with. We foster an inclusive environment that values all backgrounds and perspectives, emphasizing collaboration and belonging. Join us in our mission to make doing business seamless and secure. 
Our Benefits program is one of the most important ways we support our employees. Zscaler proudly offers comprehensive and inclusive benefits to meet the diverse needs of our employees and their families throughout their life stages, including: Various health plans Time off plans for vacation and sick time Parental leave options Retirement options Education reimbursement In-office perks, and more! By applying for this role, you adhere to applicable laws, regulations, and Zscaler policies, including those related to security and privacy standards and guidelines. Zscaler is committed to providing equal employment opportunities to all individuals. We strive to create a workplace where employees are treated with respect and have the chance to succeed. All qualified applicants will be considered for employment without regard to race, color, religion, sex (including pregnancy or related medical conditions), age, national origin, sexual orientation, gender identity or expression, genetic information, disability status, protected veteran status, or any other characteristic protected by federal, state, or local laws. See more information by clicking on the Know Your Rights: Workplace Discrimination is Illegal link. Pay Transparency Zscaler complies with all applicable federal, state, and local pay transparency rules. Zscaler is committed to providing reasonable support (called accommodations or adjustments) in our recruiting processes for candidates who are differently abled, have long term conditions, mental health conditions or sincerely held religious beliefs, or who are neurodivergent or require pregnancy-related support.
Posted 1 week ago
12.0 - 21.0 years
8 - 12 Lacs
Chennai
Work from Office
Project Overview The candidate will be working on the Model Development as a Service (MDaaS) initiative, which focuses on scaling machine learning techniques for exception classification, early warning signals, data quality control, model surveillance, and missing value imputation. The project involves applying advanced ML techniques to large datasets and integrating them into financial analytics systems. Key Responsibilities Set up Data Pipelines: Configure storage in cloud-based compute environments and repositories for large-scale data ingestion and processing. Develop and Optimize Machine Learning Models: Implement Machine Learning for Exception Classification (MLEC) to classify financial exceptions. Conduct Missing Value Imputation using statistical and ML-based techniques. Develop Early Warning Signals for detecting anomalies in multi-variate/univariate time-series financial data. Build Model Surveillance frameworks to monitor financial models. Apply Unsupervised Clustering techniques for market segmentation in securities lending. Develop Advanced Data Quality Control frameworks using TensorFlow-based validation techniques. Experimentation & Validation: Evaluate ML algorithms using cross-validation and performance metrics. Implement data science best practices and document findings. Data Quality and Governance: Develop QC mechanisms to ensure high-quality data processing and model outputs. Required Skillset Strong expertise in Machine Learning & AI (Supervised & Unsupervised Learning). Proficiency in Python, TensorFlow, SQL, and Jupyter Notebooks. Deep understanding of time-series modeling, anomaly detection, and risk analytics. Experience with big data processing and financial data pipelines. Ability to deploy scalable ML models in a cloud environment.
Deliverables & Timeline Machine Learning for Exception Classification (MLEC): Working code & documentation Missing Value Imputation: Implementation & validation reports Early Warning Signals: Data onboarding & anomaly detection models Model Surveillance: Fully documented monitoring framework Securities Lending: Clustering algorithms for financial markets Advanced Data QC: Development of a general-purpose QC library Preferred Qualifications Prior experience in investment banking, asset management, or trading desks. Strong foundation in quantitative finance and financial modeling. Hands-on experience with TensorFlow, PyTorch, and AWS/GCP AI services.
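To illustrate the missing-value imputation deliverable, a minimal sketch (hypothetical data, assuming pandas and scikit-learn) contrasts the two families of techniques the posting mentions, time-series interpolation versus cross-sectional statistical imputation:

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

# Toy frame standing in for financial time-series data (illustrative only)
df = pd.DataFrame({
    "price": [100.0, np.nan, 102.0, 103.0, np.nan, 105.0],
    "volume": [10.0, 12.0, np.nan, 11.0, 13.0, np.nan],
})

# Time-series style: fill gaps by linear interpolation along the index
price_filled = df["price"].interpolate(method="linear")

# Cross-sectional style: replace each gap with the column median
imputer = SimpleImputer(strategy="median")
filled = pd.DataFrame(imputer.fit_transform(df), columns=df.columns)

print(price_filled.tolist())      # [100.0, 101.0, 102.0, 103.0, 104.0, 105.0]
print(filled["volume"].tolist())  # [10.0, 12.0, 11.5, 11.0, 13.0, 11.5]
```

Which style is appropriate depends on whether the gap sits inside an ordered series (interpolate) or a cross-section of independent records (statistical or model-based fill).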
Posted 1 week ago
1.0 - 5.0 years
0 Lacs
karnataka
On-site
As an Enterprise IT Security Analyst - Cloud and Endpoints, you will play a crucial role in ensuring the security of the cloud environments in either AWS or Azure. Your responsibilities will involve collaborating closely with the DevOps and IT teams to implement and manage security measures, identify risks, and ensure compliance with industry standards. You will be expected to have experience with Microsoft Defender for Cloud and Endpoint Detection and Response (EDR) tools such as SentinelOne, CrowdStrike, or Microsoft Defender for Endpoint. Furthermore, you will apply AI coding techniques to enhance security measures, implement Microsoft Defender for Cloud for Azure protection, and utilize EDR tools for threat detection and response. Designing, implementing, and managing security solutions across various cloud platforms like AWS, Azure, and GCP will be a key part of your role. Utilizing security capabilities specific to each platform, such as AWS Inspector, WAF, GuardDuty, Azure Security Center, Sentinel, and IAM, will be essential in safeguarding the cloud infrastructure. Regular security assessments, audits, and the development of security policies and documentation will also fall within your responsibilities. Collaborating with cross-functional teams to integrate security best practices into the development lifecycle, monitoring and responding to security incidents, and managing Cloud Security Posture Management (CSPM) solutions using tools like Prisma Cloud and AWS Security Hub will be crucial aspects of your role. You should hold a Bachelor's degree in Computer Science, Information Technology, or a related field, along with 1-3 years of experience in cloud security engineering. Proficiency in AWS security capabilities, Azure AD, Microsoft Defender, M365, Exchange security, and Terraform for Infrastructure as Code (IaC) is required. Relevant certifications such as CISSP or AWS Certified Security Specialty will be beneficial.
Additional qualifications that would be advantageous include experience with cloud security frameworks, Infrastructure as Code (IaC) tools like CloudFormation and ARM templates, AI coding, and machine learning techniques applied to security. Strong problem-solving skills, attention to detail, and effective communication and collaboration abilities are also desired. This position is based at The Leela Office in Bangalore, with a hybrid work model of 3 days in the office and 2 days remote work. The work timings are from 1 pm to 10 pm IST, with cab pickup and drop facilities available. Candidates based in Bangalore are preferred for this role.
Posted 1 week ago
15.0 - 19.0 years
0 Lacs
karnataka
On-site
As a Senior Director of QE Automation at Mobileum, you will play a pivotal role in leading and managing our Quality Engineering (QE) and Automation teams. Your responsibilities will include overseeing quality, risk management, and automation strategies across core network technologies. You will lead a team of Senior Software Development Engineers in Test (SDETs) and collaborate with cross-functional teams to ensure the performance, security, and scalability of our systems. Your expertise in backend tools, frameworks, DevOps, and core network technologies will be crucial in maintaining the reliability and efficiency of our systems. In this leadership role, you will drive advanced test automation strategies by leveraging the latest AI-powered testing tools to secure, scale, and enhance the performance and reliability of our systems. You will be responsible for architecting and evolving automation strategies that incorporate AI/ML-driven testing solutions, such as autonomous test generation, self-healing scripts, and anomaly detection. Partnering with cross-functional teams, you will embed quality into all stages of the Software Development Life Cycle (SDLC), ensuring comprehensive test coverage for functional, non-functional, and security requirements. Your role will involve designing and extending modular, scalable automation frameworks for APIs, microservices, mobile apps, and distributed systems with a focus on test reusability and intelligence. You will drive test data management and generation strategies to simulate realistic production scenarios and enhance the reliability of automated tests. Additionally, you will apply advanced performance, scalability, and chaos testing techniques to validate system behavior under extreme and unexpected conditions. To excel in this position, you should demonstrate deep expertise in quality engineering, core telecom protocols, and emerging trends in AI-led QA innovation. 
You will promote a culture of automation-first, risk-aware, and high-quality software delivery within the organization. Your responsibilities will also include continuously evaluating and onboarding next-gen automation tools and platforms, championing the integration of test automation with CI/CD pipelines and DevSecOps workflows, and leading the evolution of a metrics-driven quality culture using key performance indicators (KPIs). The ideal candidate for this role will have at least 15 years of experience in software quality engineering, with a minimum of 5 years in leadership roles within telecom, cloud, or analytics-driven product companies. You should possess proven abilities in designing and scaling test automation solutions using modern programming languages and tools, hands-on experience with AI-enabled QA platforms, deep expertise in telecom network protocols, familiarity with big data and streaming platforms, and a strong command over cloud-native testing practices. A Bachelor's or Master's degree in Computer Science or a related field is required for this role. The position is based in Bangalore, India. Join us at Mobileum to lead the way in advancing quality engineering, risk management, and automation strategies for cutting-edge core network technologies.
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
karnataka
On-site
Job Description: Dreaming big is in our DNA. It's who we are as a company, our culture, our heritage, and more than ever, our future. A future where we're always looking forward, serving up new ways to meet life's moments, and where we keep dreaming bigger. We look for individuals with passion, talent, and curiosity and provide them with teammates, resources, and opportunities to unleash their full potential. The power we create together, when we combine your strengths with ours, is unstoppable. Are you ready to join a team that dreams as big as you do? AB InBev GCC, incorporated in 2014 as a strategic partner for Anheuser-Busch InBev, leverages the power of data and analytics to drive growth for critical business functions such as operations, finance, people, and technology. The teams are transforming Operations through Tech and Analytics. Do you dream big? We need you. Job Title: Senior Data Scientist Location: Bangalore Reporting to: Senior Manager Purpose of Role: - Understand and solve complex business problems with sound analytical prowess to provide impactful insights in decision-making. - Ensure any implementation roadblocks are communicated to the Analytics Manager to prevent project timeline delays. - Document every aspect of the project in a standard way for future reference. - Articulate technical complexities to senior leadership in a simple and easy-to-understand manner. Key Tasks and Accountabilities: - Understand business problems and collaborate with stakeholders to translate them into data-driven analytical/statistical problems during the solution-building process. - Create appropriate datasets and develop statistical data models. - Translate complex statistical analyses over large datasets into insights and actionable steps. - Analyze results and present findings to stakeholders. - Communicate insights using business-friendly presentations. - Mentor other Data Scientists/Associate Data Scientists.
- Build a project pipeline in Databricks that is production-ready. - Develop dashboards (preferably in Power BI) for easy consumption of solutions. Qualifications, Experience, Skills: - Level of Educational Attainment Required: Bachelor's/Master's Degree in Statistics, Applied Statistics, Economics, Econometrics, Operations Research, or any other quantitative discipline. - Previous Work Experience: Minimum 4-6 years in a data science role, building, implementing, and operationalizing end-to-end solutions. - Expertise strongly desired in building statistical and machine learning models for classification, regression, forecasting, anomaly detection, dimensionality reduction, clustering, etc. - Exposure to optimization and simulation techniques (good to have). - Expertise in building NLP-based language models, sentiment analysis, text summarization, and Named Entity Recognition. - Proven skills in translating statistics into insights, with sound knowledge in statistical inference and hypothesis testing. - Mandatory: Microsoft Office, Expert in Python, Advanced Excel. - An undying love for beer!
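The statistical inference and hypothesis testing skills listed above can be illustrated with a small, hypothetical two-sample test (assuming SciPy; the sales figures are made up for the example):

```python
import numpy as np
from scipy import stats

# Hypothetical daily sales for a control group and a promoted group
control = np.array([50.2, 49.8, 51.0, 50.5, 49.5, 50.1, 50.7, 49.9])
treatment = np.array([53.1, 52.8, 54.0, 53.5, 52.4, 53.2, 53.8, 52.9])

# Welch's t-test (no equal-variance assumption): did the promotion
# shift mean sales beyond what chance would explain?
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
alpha = 0.05
print(f"t={t_stat:.1f}, significant={p_value < alpha}")
```

Here the ~3-unit lift against sub-unit noise yields a tiny p-value, so the null of equal means is rejected at the 5% level; the real work in practice is framing the business question as such a test in the first place.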
Posted 1 week ago
3.0 - 7.0 years
7 - 11 Lacs
Karnataka
Work from Office
Responsibilities: Problem Definition & Hypothesis Generation: Work closely with business stakeholders, product managers, and engineers to understand key business challenges and formulate data-driven hypotheses. Data Collection & Preparation: Identify, collect, and prepare large, complex datasets from various sources. This includes data cleaning, transformation, feature engineering, and ensuring data quality and integrity. Exploratory Data Analysis (EDA): Perform in-depth statistical analysis and visualization to uncover trends, patterns, outliers, and relationships within data, generating actionable insights. Model Development & Evaluation: Design, develop, and implement machine learning models (e.g., supervised, unsupervised, reinforcement learning) for various applications such as prediction, classification, recommendation, anomaly detection, and natural language processing. Select appropriate algorithms, fine-tune model parameters, and perform rigorous model validation and evaluation (e.g., cross-validation, A/B testing). Monitor model performance post-deployment and iterate for continuous improvement. Statistical Analysis & Experimentation: Apply advanced statistical techniques to analyze experimental results, identify causal relationships, and provide recommendations based on data-driven evidence. Insight Generation & Communication: Translate complex analytical findings and model results into clear, concise, and actionable insights for non-technical audiences through presentations, reports, and dashboards. Deployment & MLOps: Collaborate with MLOps engineers and software developers to productionize models, ensure scalability, reliability, and maintainability. Research & Innovation: Stay updated with the latest advancements in data science, machine learning, and AI, and proactively identify opportunities to apply new techniques to solve business problems. 
Documentation: Document code, models, methodologies, and findings thoroughly for reproducibility and knowledge sharing.
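The model validation step described above (cross-validation in particular) can be sketched with scikit-learn on synthetic data; the dataset and model choice are illustrative stand-ins:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a real classification dataset
X, y = make_classification(n_samples=500, n_features=10,
                           n_informative=5, random_state=0)

# 5-fold cross-validation yields five held-out scores, giving both a
# central estimate and a spread rather than a single optimistic number
model = LogisticRegression(max_iter=1000)
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(len(scores), round(scores.mean(), 2))
```

Reporting the fold-to-fold spread alongside the mean is what makes cross-validation a "rigorous" evaluation in the sense the posting uses: it exposes variance that a single train/test split hides.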
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
hyderabad, telangana
On-site
Invent, implement, and deploy state-of-the-art machine learning and/or specific domain industry algorithms and systems. Build prototypes and explore conceptually new solutions. Work collaboratively with science, engineering, and product teams to identify customer needs in order to create and implement solutions, promote innovation, and drive model implementations. Apply data science capabilities and research findings to create and implement solutions at scale. Responsible for developing new intelligence around core products and services through applied research on behalf of customers. Develop models, prototypes, and experiments that pave the way for innovative products and services. Build cloud services that work out-of-the-box for enterprises, e.g., decision support, anomaly detection, forecasting, recommendations, natural language processing (NLP), natural language understanding (NLU), time series, automatic speech recognition (ASR), machine learning (ML), and computer vision (CV). Design and run experiments, research new algorithms, and find new ways of optimizing risk, profitability, and customer experience. Stay conversant with the ethical issues the science raises. Drive and plan the implementation of company policy for achieving business goals. Define the bar for science practices and help teams achieve those goals. Identify and mitigate risks across the full set of systems, particularly at the intersection of business and engineering. Innovate AI and ML-powered solutions (rich APIs, ML models, and end-to-end services) with strategic ISVs and customers. Develop deep product intuition to influence future product roadmaps and drive decision-making. Clearly articulate technical work to audiences of all levels and across multiple functional areas in both internal and external settings. Engage in forward-looking research both internally and with academic institutions globally. Hire and mentor across the organization.
Play an active role in team planning, review, and retrospective events. Ensure experiments are ready for hand-off to Software Developers to ship into production. May perform other duties as assigned. Career Level - IC5 As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
haryana
On-site
As an AI/ML Specialist, you will be responsible for building intelligent systems utilizing OT sensor data and Azure ML tools. Your primary focus will be collaborating with data scientists, engineers, and operations teams to develop scalable AI solutions addressing critical manufacturing issues such as predictive maintenance, process optimization, and anomaly detection. This role involves bridging the edge and cloud environments by deploying AI solutions to run effectively on either cloud platforms or industrial edge devices. Your key functions will include designing and developing ML models using time-series sensor data from OT systems, working closely with engineering and data science teams to translate manufacturing challenges into AI use cases, implementing MLOps pipelines on Azure ML, and integrating with Databricks/Delta Lake. Additionally, you will be responsible for deploying and monitoring models at the edge using Azure IoT Edge, conducting model validation, retraining, and performance monitoring, as well as collaborating with plant operations to contextualize insights and integrate them into workflows. To qualify for this role, you should have a minimum of 5 years of experience in machine learning and AI. Hands-on experience with Azure ML, MLflow, Databricks, and PyTorch/TensorFlow is essential. You should also possess a proven ability to work with OT sensor data such as temperature, vibration, flow, etc. A strong background in time-series modeling, edge inferencing, and MLOps is required, along with familiarity with manufacturing KPIs and predictive modeling use cases.
Posted 2 weeks ago
4.0 - 8.0 years
0 - 0 Lacs
Pune
Hybrid
So, what’s the role all about? Within Actimize, the AI and Analytics Team is developing the next generation advanced analytical cloud platform that will harness the power of data to provide maximum accuracy for our clients’ Financial Crime programs. As part of the PaaS/SaaS development group, you will be responsible for developing this platform for Actimize cloud-based solutions and to work with cutting edge cloud technologies. How will you make an impact? NICE Actimize is the largest and broadest provider of financial crime, risk and compliance solutions for regional and global financial institutions & has been consistently ranked as number one in the space At NICE Actimize, we recognize that every employee’s contributions are integral to our company’s growth and success. To find and acquire the best and brightest talent around the globe, we offer a challenging work environment, competitive compensation and benefits, and rewarding career opportunities. Come share, grow and learn with us – you’ll be challenged, you’ll have fun and you’ll be part of a fast-growing, highly respected organization. This new SaaS platform will enable our customers (some of the biggest financial institutions around the world) to create solutions on the platform to fight financial crime. Have you got what it takes? Design, implement, and maintain real-time and batch data pipelines for fraud detection systems. Automate data ingestion from transactional systems, third-party fraud intelligence feeds, and behavioral analytics platforms. Ensure high data quality, lineage, and traceability to support audit and compliance requirements. Collaborate with fraud analysts and data scientists to deploy and monitor machine learning models in production. Monitor pipeline performance and implement alerting for anomalies or failures. Ensure data security and compliance with financial regulations Qualifications: Bachelor’s or master’s degree in computer science, Data Engineering, or a related field.
4-6 years of experience in DataOps role, preferably in fraud or risk domains. Strong programming skills in Python and SQL. Knowledge of financial fraud patterns, transaction monitoring, and behavioral analytics. Familiarity with fraud detection systems, rules engines, or anomaly detection frameworks. Experience with AWS cloud platforms Understanding of data governance, encryption, and secure data handling practices. Experience with fraud analytics tools or platforms like Actimize What’s in it for you? Join an ever-growing, market disrupting, global company where the teams – comprised of the best of the best – work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NiCE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NiCEr! Enjoy NiCE-FLEX! At NiCE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere. Requisition ID: 7822 Reporting into: Director Role Type: Tech Manager
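The posting does not prescribe a specific approach, but the "monitor pipeline performance and implement alerting for anomalies" responsibility can be illustrated with a minimal, hypothetical sketch in plain Python: flag any transaction amount that deviates sharply from a rolling baseline (all names and thresholds here are illustrative, not part of any Actimize product):

```python
from collections import deque
from statistics import mean, stdev

def make_anomaly_flagger(window=50, threshold=3.0):
    """Flag amounts more than `threshold` standard deviations
    from the rolling mean of the last `window` observations."""
    history = deque(maxlen=window)

    def flag(amount):
        is_anomaly = False
        if len(history) >= 10:  # wait until a minimal baseline exists
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(amount - mu) > threshold * sigma:
                is_anomaly = True
        history.append(amount)  # update the baseline with every event
        return is_anomaly

    return flag

flag = make_anomaly_flagger()
# Ten ordinary transactions establish the baseline, then one spike.
normal = [flag(a) for a in [100, 102, 98, 101, 99, 100, 103, 97, 100, 101]]
spike = flag(10_000)
```

Production systems would of course replace this z-score rule with the streaming infrastructure and ML models the posting describes; the sketch only shows the shape of the alerting logic.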
Posted 2 weeks ago
8.0 - 12.0 years
0 Lacs
Pune, Maharashtra
On-site
As an AI/ML Manager at our Pune location, you will be responsible for leading the development of machine learning proof of concepts (PoCs) and demos using structured/tabular data for use cases such as forecasting, risk scoring, churn prediction, and optimization. Your role will involve collaborating with sales engineering teams to understand client requirements and presenting ML solutions during pre-sales calls and technical workshops. You will be expected to build ML workflows using tools such as SageMaker, Azure ML, or Databricks ML, managing training, tuning, evaluation, and model packaging. Applying supervised, unsupervised, and semi-supervised techniques like XGBoost, CatBoost, k-Means, PCA, and time-series models will be a key part of your responsibilities. Working closely with data engineering teams, you will define data ingestion, preprocessing, and feature engineering pipelines using Python, Spark, and cloud-native tools. Packaging and documenting ML assets for scalability and transition into delivery teams post-demo will be essential. You will also be expected to stay updated with the latest best practices in ML explainability, model performance monitoring, and MLOps, and to participate in internal knowledge sharing, tooling evaluation, and continuous improvement of lab processes. To qualify for this position, you should have 8+ years of experience in developing and deploying classical machine learning models in production or PoC environments. Strong hands-on experience with Python, pandas, scikit-learn, and ML libraries like XGBoost, CatBoost, and LightGBM is required. Familiarity with cloud-based ML environments such as AWS SageMaker, Azure ML, or Databricks is preferred. A solid understanding of feature engineering, model tuning, cross-validation, and error analysis is necessary. Experience with unsupervised learning, clustering, anomaly detection, and dimensionality reduction techniques will be beneficial.
You should be comfortable presenting models and insights to both technical and non-technical stakeholders during pre-sales engagements. Working knowledge of MLOps concepts, including model versioning, deployment automation, and drift detection, will be an advantage. If you are interested in this opportunity, please apply or share your resume at kanika.garg@austere.co.in.
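The posting mentions drift detection among the MLOps concepts a candidate should know. One common lightweight check compares a feature's distribution at training time against live data, e.g. via the Population Stability Index (PSI); a self-contained sketch (the function, bin count, and 0.2 rule of thumb are illustrative, not a requirement of the role):

```python
import math

def psi(reference, live, bins=10):
    """Population Stability Index between two samples of one feature.
    Bins come from the reference sample's range; a small epsilon avoids
    log(0) for empty bins. Rule of thumb: PSI > 0.2 suggests drift."""
    lo, hi = min(reference), max(reference)
    width = (hi - lo) / bins or 1.0
    eps = 1e-6

    def proportions(sample):
        counts = [0] * bins
        for x in sample:
            i = min(int((x - lo) / width), bins - 1)  # clamp high outliers
            counts[max(i, 0)] += 1                    # clamp low outliers
        return [c / len(sample) + eps for c in counts]

    p_ref, p_live = proportions(reference), proportions(live)
    return sum((a - b) * math.log(a / b) for a, b in zip(p_live, p_ref))

train = [i / 100 for i in range(1000)]        # roughly uniform on [0, 10)
same = [i / 100 for i in range(1000)]         # identical distribution
shifted = [5 + i / 200 for i in range(1000)]  # mass pushed into upper half
```

In practice a PSI of ~0 on unchanged data and a large value on the shifted sample would trigger the deployment-automation alerts the posting refers to.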
Posted 2 weeks ago
0.0 - 4.0 years
0 Lacs
Hyderabad, Telangana
On-site
You will be involved in building a machine learning-based anomaly detection system using structured and sequential sensor data. Your main task will be to identify unusual patterns or faults through data modeling and visualization. This internship will provide you with a real-world opportunity to work on machine learning pipelines and understand both supervised and unsupervised approaches to anomaly detection. During this phase, you will focus on offline/static modeling using historical sensor data in tabular and time-series formats. Your primary objectives will include analyzing sensor datasets representing various operational scenarios, applying and evaluating supervised classification models, transitioning into unsupervised anomaly detection approaches, visualizing insights, and documenting findings for technical and non-technical audiences. Your key responsibilities will involve performing data preprocessing tasks such as cleaning, encoding, normalization, and feature engineering. Additionally, you will be required to train and evaluate classification models using Artificial Neural Networks (ANN) and Long Short-Term Memory (LSTM) models for sequence-based classification. You will also explore and implement unsupervised anomaly detection techniques like Isolation Forest, One-Class SVM, and Z-score or IQR-based statistical methods. Analyzing and visualizing model outputs using confusion matrices, anomaly heatmaps, and time-series plots will also be part of your responsibilities. You may optionally be tasked with building a lightweight dashboard (e.g., using Streamlit) to present findings. TechnoExcel is the leading training and consulting company in Hyderabad offering data analytics solutions.
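Of the unsupervised techniques the internship lists, the IQR-based statistical method is the simplest to sketch. A minimal illustration in plain Python, using the classic Tukey fence on a column of sensor readings (the function name and sample data are hypothetical):

```python
from statistics import quantiles

def iqr_outliers(values, k=1.5):
    """Return indices of values outside [Q1 - k*IQR, Q3 + k*IQR],
    the Tukey fence commonly used for statistical anomaly detection."""
    q1, _, q3 = quantiles(values, n=4)  # quartiles of the sample
    iqr = q3 - q1
    lower, upper = q1 - k * iqr, q3 + k * iqr
    return [i for i, v in enumerate(values) if v < lower or v > upper]

# Stable readings around 20 with two injected faults (a spike and a drop).
readings = [20.1, 19.8, 20.3, 20.0, 19.9, 55.0, 20.2, 20.1, -3.0, 20.0]
anomalies = iqr_outliers(readings)
```

Isolation Forest and One-Class SVM (typically via scikit-learn) generalize this idea to multivariate data, which is where the ANN/LSTM pipelines described above come in.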
Posted 2 weeks ago
3.0 - 8.0 years
6 - 10 Lacs
Gurugram
Work from Office
Capgemini Invent

Capgemini Invent is the digital innovation, consulting and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science and creative design, to help CxOs envision and build what's next for their businesses.

Your Role

Edge AI Data Scientists will be responsible for designing, developing, and validating machine learning models, particularly in the domain of computer vision, for deployment on edge devices. This role involves working with data from cameras, sensors, and embedded platforms to enable real-time intelligence for applications such as object detection, activity recognition, and visual anomaly detection. The position requires close collaboration with embedded systems and AI engineers to ensure models are lightweight, efficient, and hardware-compatible.

Candidate Requirements

Education: Bachelor's or Master's degree in Data Science, Computer Science, or a related field.

Experience: 3+ years of experience in data science or machine learning with a strong focus on computer vision. Experience in developing models for edge deployment and real-time inference. Familiarity with video/image datasets and deep learning model training.

Skills: Proficiency in Python and libraries such as OpenCV, PyTorch, TensorFlow, and FastAI. Experience with model optimization techniques (quantization, pruning, etc.) for edge devices. Hands-on experience with deployment tools like TensorFlow Lite, ONNX, or OpenVINO. Strong understanding of computer vision techniques (e.g., object detection, segmentation, tracking). Familiarity with edge hardware platforms (e.g., NVIDIA Jetson, ARM Cortex, Google Coral). Experience in processing data from camera feeds or embedded image sensors. Strong problem-solving skills and ability to work collaboratively with cross-functional teams.

Your Profile

Responsibilities: Develop and train computer vision models tailored for constrained edge environments. Analyze camera and sensor data to extract insights and build vision-based ML pipelines. Optimize model architecture and performance for real-time inference on edge hardware. Validate and benchmark model performance on various embedded platforms. Collaborate with embedded engineers to integrate models into real-world hardware setups. Stay up-to-date with state-of-the-art computer vision and Edge AI advancements. Document models, experiments, and deployment configurations.

What you will love about working here

We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI.

About Capgemini

Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, cloud and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.
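The posting lists quantization among the model optimization techniques for edge devices. The core idea behind post-training affine quantization (mapping float32 weights onto an int8 grid via a scale and zero-point) can be sketched in plain Python; real deployments would use the TensorFlow Lite, ONNX, or OpenVINO toolchains the posting names, so treat this purely as an illustration:

```python
def quantize(weights, num_bits=8):
    """Affine (asymmetric) quantization: map floats in [min, max]
    onto the signed integer grid, e.g. [-128, 127] for 8 bits."""
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / (qmax - qmin) or 1.0   # float units per int step
    zero_point = round(qmin - lo / scale)       # int that represents 0.0
    q = [max(qmin, min(qmax, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float weights from the integer grid."""
    return [(qi - zero_point) * scale for qi in q]

w = [-1.0, -0.5, 0.0, 0.25, 1.0]
q, s, z = quantize(w)
w_hat = dequantize(q, s, z)
max_err = max(abs(a - b) for a, b in zip(w, w_hat))
```

The reconstruction error stays below one quantization step, which is why int8 inference on Jetson- or Coral-class hardware usually costs little accuracy while shrinking the model roughly 4x versus float32.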
Posted 2 weeks ago