8.0 years
0 Lacs
Chennai, Tamil Nadu
Remote
Title: Senior Data Scientist
Years of Experience: 8+ years
Location: The selected candidate is required to work onsite at our Chennai/Kovilpatti location for the initial three-month project training and execution period. After the three months, the candidate will be offered remote opportunities.

The Senior Data Scientist will lead the development and implementation of advanced analytics and AI/ML models to solve complex business problems. This role requires deep statistical expertise, hands-on model building experience, and the ability to translate raw data into strategic insights. The candidate will collaborate with business stakeholders, data engineers, and AI engineers to deploy production-grade models that drive innovation and value.

Key responsibilities
· Lead the end-to-end model lifecycle: data exploration, feature engineering, model training, validation, deployment, and monitoring
· Develop predictive models, recommendation systems, anomaly detection, NLP models, and generative AI applications
· Conduct statistical analysis and hypothesis testing for business experimentation
· Optimize model performance using hyperparameter tuning, ensemble methods, and explainable AI (XAI)
· Collaborate with data engineering teams to improve data pipelines and quality
· Document methodologies, build reusable ML components, and publish technical artifacts
· Mentor junior data scientists and contribute to CoE-wide model governance

Technical Skills
· ML frameworks: scikit-learn, TensorFlow, PyTorch, XGBoost
· Statistical tools: Python (NumPy, Pandas, SciPy), R, SAS
· NLP & LLMs: Hugging Face Transformers, GPT APIs, BERT, LangChain
· Model deployment: MLflow, Docker, Azure ML, AWS SageMaker
· Data visualization: Power BI, Tableau, Plotly, Seaborn
· SQL and NoSQL (Cosmos DB, MongoDB)
· Git, CI/CD tools, and model monitoring platforms

Qualifications
· Master's in Data Science, Statistics, Mathematics, or Computer Science
· Microsoft Certified: Azure Data Scientist Associate or equivalent
· Proven success in delivering production-ready ML models with measurable business impact
· Publications or patents in AI/ML will be considered a strong advantage

Job Types: Full-time, Permanent
Work Location: Hybrid remote in Chennai, Tamil Nadu
Expected Start Date: 12/07/2025
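The model training, validation, and hyperparameter-tuning loop this posting describes can be sketched in a few lines. The sketch below is illustrative only: it uses invented synthetic data, a closed-form ridge regression in NumPy rather than one of the listed frameworks, and an arbitrary grid of regularization strengths.

```python
# Hedged sketch: hyperparameter search over a ridge regression.
# Dataset, weights, and alpha grid are all invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
true_w = np.array([1.5, -2.0, 0.0, 0.5, 3.0])
y = X @ true_w + rng.normal(scale=0.1, size=200)

# simple train/validation split
X_tr, X_val = X[:150], X[150:]
y_tr, y_val = y[:150], y[150:]

def ridge_fit(X, y, alpha):
    # closed-form ridge: w = (X^T X + alpha I)^-1 X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

# grid search over the regularization strength, scored on held-out data
results = {alpha: mse(ridge_fit(X_tr, y_tr, alpha), X_val, y_val)
           for alpha in (0.01, 0.1, 1.0, 10.0)}
best_alpha = min(results, key=results.get)
```

In practice the same loop would run through scikit-learn's `GridSearchCV` or an equivalent tuner, with cross-validation instead of a single split.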
Posted 3 weeks ago
30.0 years
9 - 10 Lacs
Hyderābād
On-site
Schrödinger is a science and technology leader with over 30 years of experience developing software solutions for physics-based and machine learning-based chemical simulations and predictive analyses. We’re seeking an application-focused Materials Informatics & Optoelectronics Scientist to join us in our mission to improve human health and quality of life through the development, distribution, and application of advanced computational methods. As a member of our Materials Science team, you’ll have the opportunity to work on diverse projects in optoelectronics, catalysis, energy storage, semiconductors, aerospace, and specialty chemicals.

Who will love this job:
· A statistical and machine learning expert with robust problem-solving skills
· A materials science enthusiast who’s familiar with RDKit, Matminer, DScribe, or other informatics packages
· A proficient Python programmer and debugger who’s familiar with machine learning packages like scikit-learn, Pandas, NumPy, SciPy, and PyTorch
· An experienced researcher with hands-on experience extracting datasets using large language model (LLM) or optical character recognition (OCR) technologies
· A specialist in quantum chemistry or materials science who enjoys collaborating with an interdisciplinary team in a fast-paced environment

What you’ll do:
· Research, curate, and analyze datasets from literature and other sources using advanced techniques such as LLMs and OCR
· Work with domain experts to ensure the accuracy and quality of data (such as molecular structures, SMILES strings, experimental measurements, etc.)
· Develop and validate predictive machine learning models for OLED devices and other optoelectronic applications
· Communicate results and present ideas to the team
· Develop tools and workflows that can be integrated into commercial software products
· Validate existing Schrödinger machine learning products using public or internally generated datasets

What you should have:
· A PhD in Chemistry or Materials Science
· Hands-on experience applying machine learning, neural networks, deep learning, data analysis, or chemical informatics to materials and complex chemicals
· Experience with LLM and OCR technologies and the extraction of datasets for ML model development

As an equal opportunity employer, Schrödinger hires outstanding individuals into every position in the company. People who work with us have a high degree of engagement, a commitment to working effectively in teams, and a passion for the company's mission. We place the highest value on creating a safe environment where our employees can grow and contribute, and refuse to discriminate on the basis of race, color, religious belief, sex, age, disability, national origin, alienage or citizenship status, marital status, partnership status, caregiver status, sexual and reproductive health decisions, gender identity or expression, or sexual orientation. To us, "diversity" isn't just a buzzword, but an important element of our core principles and key business practices. We believe that diverse companies innovate better and think more creatively than homogeneous ones because they take into account a wide range of viewpoints. For us, greater diversity doesn't mean better headlines or public images - it means increased adaptability and profitability.
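The data-quality step above (verifying molecular structures and SMILES strings) would normally go through a real cheminformatics parser such as RDKit's `Chem.MolFromSmiles`. As a self-contained illustration of the idea of a curation gate, the toy check below only verifies that ring-bond parentheses and brackets in a SMILES string are balanced; the example records are invented.

```python
# Hedged sketch: a toy SMILES sanity gate for a curation pipeline.
# Real validation requires a chemistry-aware parser (e.g. RDKit);
# this only catches unbalanced brackets/parentheses.
def smiles_brackets_ok(smiles: str) -> bool:
    pairs = {')': '(', ']': '['}
    stack = []
    for ch in smiles:
        if ch in '([':
            stack.append(ch)
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False  # mismatched or unexpected closer
    return not stack          # leftover openers also fail

# aspirin, benzene, and a truncated (malformed) record
records = ["CC(=O)Oc1ccccc1C(=O)O", "C1=CC=CC=C1", "CC(=O"]
clean = [s for s in records if smiles_brackets_ok(s)]
```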
Posted 3 weeks ago
4.0 - 10.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Data Scientists - AI/ML - Gen AI - Across India | Experience: 4-10 years

Data scientists with around 4-10 years of total experience and at least 4-10 years of relevant data science, analytics, and AI/ML experience. Python; data science; AI/ML; Gen AI.

Primary Skills:
- Excellent understanding and hands-on experience of data science and machine learning techniques and algorithms for supervised and unsupervised problems, NLP, computer vision, and Gen AI
- Good applied statistics skills, such as distributions, statistical inference and testing, etc.
- Excellent understanding and hands-on experience building deep learning models for text and image analytics (such as ANNs, CNNs, LSTMs, transfer learning, encoders and decoders, etc.)
- Proficiency in coding in common data science languages and tools such as R and Python
- Experience with common data science toolkits, such as NumPy, Pandas, Matplotlib, StatsModels, scikit-learn, SciPy, NLTK, spaCy, OpenCV, etc.
- Experience with common data science frameworks such as TensorFlow, Keras, PyTorch, XGBoost, etc.
- Exposure to or knowledge of cloud (Azure/AWS)
- Experience with deployment of models in production
Posted 3 weeks ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Zeta is a Next-Gen Banking Tech company that empowers banks and fintechs to launch banking products for the future. It was founded by Bhavin Turakhia and Ramki Gaddipati in 2015. Our flagship processing platform - Zeta Tachyon - is the industry’s first modern, cloud-native, and fully API-enabled stack that brings together issuance, processing, lending, core banking, fraud & risk, and many more capabilities as a single-vendor stack. 15M+ cards have been issued on our platform globally. Zeta is actively working with the largest banks and fintechs in multiple global markets, transforming customer experience for multi-million card portfolios. Zeta has 1700+ employees - with over 70% of roles in R&D - across locations in the US, EMEA, and Asia. We raised $280 million at a $1.5 billion valuation from Softbank, Mastercard, and other investors in 2021.

The Role
This role provides an exciting opportunity to be at the forefront of AI innovation, contributing to the development and implementation of solutions that push the boundaries of technology. If you are passionate about AI, thrive in a collaborative environment, and enjoy driving impactful projects, this position offers a platform for creativity and professional growth. Join us in shaping the future of AI applications.

Responsibilities
Integration: Collaborate with software engineers to deploy and integrate data models into production systems, ensuring scalability, reliability, and efficiency.
Metrics Identification: Identify key business metrics, offering insights that inform decision-making processes and recommending product features when necessary.
Concept Development and Design: Collaborate with cross-functional teams to brainstorm and refine ideas for AI-powered solutions and various NLP/ML use cases. Translate business requirements into technical specifications for models and algorithms. Design and develop innovative solutions that address specific business challenges and enhance user experience.
Model Development and Optimization: Develop and implement machine learning models using Python, R, or other relevant programming languages. Employ deep learning techniques and algorithms to extract meaningful insights from large datasets. Continuously improve and optimize models to achieve high accuracy and performance.
Exploratory Data Analysis and Insights: Conduct exploratory data analysis to uncover patterns, trends, and hidden relationships within data. Utilize statistical and data mining techniques to extract valuable insights from unstructured data. Generate actionable recommendations and insights for future product development and feature enhancements.
Rapid Prototyping and Deployment: Quickly prototype AI solutions to test and validate their effectiveness. Collaborate with software engineers to seamlessly integrate AI models into production systems. Ensure the scalability, reliability, and efficiency of deployed AI solutions.
Communication and Collaboration: Effectively communicate technical concepts and findings to non-technical stakeholders. Create clear and comprehensive documentation to capture project details, methodologies, and results. Collaborate with software engineers, product managers, and operations teams to bring AI solutions to life.
Business Impact and Metrics: Identify key business metrics that can be positively impacted by AI solutions. Track and monitor the performance of AI models on relevant business metrics. Recommend product features and enhancements based on data-driven insights and AI-generated recommendations.
Skills
Quantitative and Problem-Solving Skills: Strong quantitative and problem-solving skills
NLP and Data Science Libraries: Solid understanding of NLP and knowledge of essential data science libraries, including Pandas, NumPy, SciPy, and scikit-learn
Python Programming: Good hands-on experience with Python for data analysis and model building
Distributed Computing Experience: Experience with Hadoop, Spark, or other distributed computing systems for large-scale training
Machine Learning Techniques: Strong understanding of supervised (decision trees, random forests, boosting, etc.) and unsupervised ML techniques

Experience and Qualifications
Master's or Bachelor's degree in Machine Learning/Data Science, Applied Statistics, Mathematics, or Engineering. Minimum 2+ years of relevant experience with a proven track record of developing ML solutions.

Life at Zeta
At Zeta, we want you to grow to be the best version of yourself by unlocking the great potential that lies within you. This is why our core philosophy is ‘People Must Grow.’ We recognize your aspirations, act as enablers by bringing you the right opportunities, and let you grow as you chase disruptive goals. Life at Zeta is adventurous and exhilarating at the same time. You get to work with some of the best minds in the industry and experience a culture that values the diversity of thoughts. If you want to push boundaries, learn continuously and grow to be the best version of yourself, Zeta is the place to be!

Zeta is an equal opportunity employer. At Zeta, we are committed to equal employment opportunities regardless of job history, disability, gender identity, religion, race, marital/parental status, or any other special status. We are proud to be an equitable workplace that welcomes individuals from all walks of life if they fit the roles and responsibilities.
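The NLP fundamentals listed above (Pandas, NumPy, scikit-learn) usually start with term weighting. As a hedged, self-contained illustration, the sketch below hand-rolls TF-IDF over an invented three-document corpus; production code would simply use scikit-learn's `TfidfVectorizer`.

```python
# Hedged sketch: hand-rolled TF-IDF on a toy corpus (documents invented).
import math
from collections import Counter

docs = [
    "card declined at merchant",
    "card issued on platform",
    "fraud alert on card transaction",
]
tokenized = [d.split() for d in docs]
n_docs = len(tokenized)

# document frequency: how many documents each term appears in
df = Counter()
for toks in tokenized:
    df.update(set(toks))

def tfidf(term: str, toks: list) -> float:
    tf = toks.count(term) / len(toks)
    idf = math.log(n_docs / df[term])  # assumes the term occurs in the corpus
    return tf * idf

# "card" appears in every document, so its idf (hence tf-idf) is zero,
# while "fraud" is distinctive to one document and scores positively.
score_card = tfidf("card", tokenized[0])
score_fraud = tfidf("fraud", tokenized[2])
```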
Posted 3 weeks ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Skyroot Aerospace
A cutting-edge startup founded by ex-ISRO scientists. Dedicated to affordable space access, we're rewriting aerospace technology rules. Our dynamic team fosters inventiveness, collaboration, and relentless excellence. Join us on a transformative journey to redefine space possibilities. Welcome to the forefront of space innovation with Skyroot Aerospace!

Job Summary
We are seeking a skilled and motivated RF Signal Processing Engineer with 3-4 years of hands-on experience in developing, simulating, and implementing algorithms for RF and communication systems. The ideal candidate will have a solid understanding of RF principles, digital signal processing (DSP), and practical experience with tools like MATLAB, Python, and SDR hardware platforms.

Key Responsibilities:
· Design and implement signal processing algorithms for RF front-ends, receivers, and transmitters.
· Simulate and analyze RF signals and communication protocols (e.g., QPSK, OFDM, CDMA).
· Work on signal acquisition, filtering, detection, demodulation, and decoding techniques.
· Support system integration with hardware platforms including Software Defined Radios (SDRs).
· Perform RF measurements, lab testing, and validation of developed signal processing blocks.
· Develop and implement advanced radar signal processing algorithms including pulse compression, Doppler processing, CFAR detection, and tracking filters.
· Design and simulate radar waveforms for various applications such as SAR (Synthetic Aperture Radar), MTI (Moving Target Indicator), and FMCW radar systems.
· Analyze radar system performance and signal integrity under various noise, jamming, and multipath conditions.
· Optimize real-time signal processing pipelines using embedded platforms (DSPs, FPGAs, or GPUs).
· Collaborate with cross-functional teams including RF design, embedded systems, and firmware.
· Prepare technical documentation, reports, and presentations.

Required Skills & Qualifications:
· B.E./B.Tech or M.E./M.Tech in Electronics, Communication, or a related field.
· 3-4 years of experience in RF systems and digital signal processing.
· Strong foundation in RF propagation, antenna theory, and communication systems.
· Proficiency in MATLAB/Simulink, Python (NumPy/SciPy), and C/C++ for DSP implementations.
· Experience with SDR platforms such as USRP, HackRF, or similar.
· Knowledge of modulation/demodulation, FFT, filtering, error correction codes, etc.
· Familiarity with RF test equipment: spectrum analyzers, signal generators, VNAs.

Preferred Qualifications:
· Experience with GNU Radio, LabVIEW, or FPGA-based signal processing.
· Background in radar signal processing or satellite communication systems.
· Familiarity with DO-254/DO-160 or similar aerospace standards (for aerospace/defense roles).

Soft Skills:
· Strong analytical and problem-solving skills.
· Good communication and teamwork abilities.
· Willingness to learn and adapt to emerging technologies.

Perks & Benefits:
We provide seamless transportation, nourishing meals, and elevated well-being, because everyone deserves a smooth ride to success! We also welcome women with career gaps and applicants from non-aerospace or defence sectors who can bring valuable skills and experiences to our team.
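The modulation/demodulation skills listed above can be illustrated entirely at baseband in NumPy. The sketch below is a minimal QPSK round trip under additive Gaussian noise; the bit pattern and noise level are arbitrary, and no SDR hardware or pulse shaping is involved.

```python
# Hedged sketch: baseband QPSK modulation and hard-decision demodulation.
import numpy as np

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, size=200)

# Gray-mapped QPSK: pairs of bits -> unit-energy complex symbols
i = 1 - 2 * bits[0::2]          # in-phase component (+1/-1)
q = 1 - 2 * bits[1::2]          # quadrature component (+1/-1)
symbols = (i + 1j * q) / np.sqrt(2)

# additive white Gaussian noise channel (noise well below symbol energy)
noisy = symbols + 0.05 * (rng.normal(size=symbols.size)
                          + 1j * rng.normal(size=symbols.size))

# hard-decision demodulation: the sign of each component recovers a bit
rx = np.empty(bits.size, dtype=int)
rx[0::2] = (noisy.real < 0).astype(int)
rx[1::2] = (noisy.imag < 0).astype(int)
```

At this SNR the decision regions are never crossed, so `rx` reproduces `bits` exactly; lowering the symbol energy or raising the noise scale lets you study the bit-error rate.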
Posted 3 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Mavenir is building the future of networks and pioneering advanced technology, focusing on the vision of a single, software-based automated network that runs on any cloud. As the industry's only end-to-end, cloud-native network software provider, Mavenir is transforming the way the world connects, accelerating software network transformation for 250+ Communications Service Providers in over 120 countries, which serve more than 50% of the world’s subscribers.

Role Summary
What will you do:
- Applied data science research to fight spam, scam and fraud attacks in SMS, MMS, e-mail and other mobile telecommunication protocols
- Helping mobile network operators worldwide in localization, identification, monetization and prevention of spam and fraud attacks
- Big Data analysis of Voice/SMS/MMS traffic (>100 million messages per day)
- Data cleaning and preprocessing (data wrangling), exploratory analysis, statistical analysis
- Machine learning, data mining, text mining in different languages
- Data visualization and presentation
- Uncovering activities of organized groups of spammers and fraudsters
- Researching new fraud techniques and designing algorithms for their detection and prevention
- Monitoring and preventing virus and malware distribution vectors in SMS/MMS
- Presenting results to customers, leading discussions about findings and best approaches to manage the fraud attacks
Key Responsibilities
What will you work with:
- Statistical tools: RStudio, Python
- Mavenir's solution for identification of fraud and spam in mobile networks
- Unique data sets (Voice/SMS/MMS/RCS communication from all around the world)
- State-of-the-art fraud detection algorithms
- Core mobile network systems and technologies
- Linux OS
- Big data tools: Spark, Elasticsearch/OpenSearch, Kafka
- Data science and machine learning tooling: NumPy, SciPy, MLlib

Job Requirements
What we expect you already know/have:
- Practical experience with statistical analysis or Business Intelligence
- Scripting languages (for example R, bash, Python, Perl, Lua or similar)
- Data visualization and reporting
- Critical thinking and strong problem-solving skills
- Curiosity and willingness to learn new things
- Working proficiency in English

We appreciate if you already know/have: machine learning and Linux.

Accessibility
Mavenir is committed to working with and providing reasonable accommodation to individuals with physical and mental disabilities. If you require any assistance, please state in your application or contact your recruiter. Mavenir is an Equal Employment Opportunity (EEO) employer and welcomes qualified applicants from around the world, regardless of their ethnicity, gender, religion, nationality, age, disability, or other legally protected status.
Posted 3 weeks ago
0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Mavenir is building the future of networks and pioneering advanced technology, focusing on the vision of a single, software-based automated network that runs on any cloud. As the industry's only end-to-end, cloud-native network software provider, Mavenir is transforming the way the world connects, accelerating software network transformation for 250+ Communications Service Providers in over 120 countries, which serve more than 50% of the world’s subscribers.

Role Summary
What will you do:
- Applied data science research to fight spam, scam and fraud attacks in SMS, MMS, e-mail and other mobile telecommunication protocols
- Helping mobile network operators worldwide in localization, identification, monetization and prevention of spam and fraud attacks
- Big Data analysis of Voice/SMS/MMS traffic (>100 million messages per day)
- Data cleaning and preprocessing (data wrangling), exploratory analysis, statistical analysis
- Machine learning, data mining, text mining in different languages
- Data visualization and presentation
- Uncovering activities of organized groups of spammers and fraudsters
- Researching new fraud techniques and designing algorithms for their detection and prevention
- Monitoring and preventing virus and malware distribution vectors in SMS/MMS
- Presenting results to customers, leading discussions about findings and best approaches to manage the fraud attacks
Key Responsibilities
What will you work with:
- Statistical tools: RStudio, Python
- Mavenir's solution for identification of fraud and spam in mobile networks
- Unique data sets (Voice/SMS/MMS/RCS communication from all around the world)
- State-of-the-art fraud detection algorithms
- Core mobile network systems and technologies
- Linux OS
- Big data tools: Spark, Elasticsearch/OpenSearch, Kafka
- Data science and machine learning tooling: NumPy, SciPy, MLlib

Job Requirements
What we expect you already know/have:
- Practical experience with statistical analysis or Business Intelligence
- Scripting languages (for example R, bash, Python, Perl, Lua or similar)
- Data visualization and reporting
- Critical thinking and strong problem-solving skills
- Curiosity and willingness to learn new things
- Working proficiency in English

We appreciate if you already know/have: machine learning and Linux.

Accessibility
Mavenir is committed to working with and providing reasonable accommodation to individuals with physical and mental disabilities. If you require any assistance, please state in your application or contact your recruiter. Mavenir is an Equal Employment Opportunity (EEO) employer and welcomes qualified applicants from around the world, regardless of their ethnicity, gender, religion, nationality, age, disability, or other legally protected status.
Posted 3 weeks ago
2.0 - 6.0 years
0 Lacs
karnataka
On-site
You should have at least 2 years of experience as a Python Developer with a robust portfolio of projects. A Bachelor's degree in Computer Science, Software Engineering, or a related field is required for this role. An in-depth understanding of Python software development stacks, ecosystems, frameworks, and tools like NumPy, SciPy, Pandas, Dask, spaCy, NLTK, scikit-learn, and PyTorch is essential. Additionally, you should have experience in front-end development using HTML, CSS, and JavaScript, along with familiarity with database technologies such as SQL and NoSQL. Strong problem-solving skills, as well as effective communication and collaboration abilities, are also necessary. Preferred skills for this role include experience with the Flask framework, a working understanding of cloud platforms like AWS, Google Cloud, or Azure, and contributions to open-source Python projects or active involvement in the Python community.
Posted 3 weeks ago
5.0 - 10.0 years
0 Lacs
thane, maharashtra
On-site
Company Description
WonderBiz Technologies Pvt. is dedicated to leveraging technology to help global companies improve operational efficiency and reduce costs. With over 8 years in business, we have developed more than 50 products for 30+ international customers, including Silicon Valley startups and Fortune 100 & 500 companies. Our growth-driven culture provides opportunities to work with a highly skilled development team, led by industry veterans with 20+ years of experience. We invest in our employees' growth to ensure top-notch service delivery to our clients.

Role Description
This is a full-time, on-site role for a Data Scientist located in Thane. The Data Scientist will be responsible for analyzing large datasets to extract meaningful insights, developing data models, performing statistical analyses, and creating data visualizations. The role will involve collaborating with cross-functional teams to understand business requirements and delivering data-driven solutions to meet those needs.

Experience: 5-10 years
Location: Thane - on-site (WFO)

Qualifications
We are looking for a Machine Learning Specialist who will be responsible for analyzing and solving increasingly complex problems, and for proposing, reviewing, maintaining, and delivering detailed designs and technical solutions for our product development projects. This role provides opportunities to work on a wide variety of projects covering areas including Edge and Cloud computing, developing distributed and fault-tolerant systems, containerization, etc. You should also have an in-depth understanding of the underlying technologies and architectures. Strong programming skills with proficiency in at least one programming language (preferably Python). Understanding of basic full-stack development concepts.
- Applying algorithms to generate accurate predictions and resolving data set problems
- Deep knowledge of math, probability, statistics and algorithms
- Proven experience with GenAI implementations in at least one project
- Proficiency in data analysis techniques and tools
- Working experience implementing LLM tools
- Expertise in SciPy for scientific and technical computing
- Knowledge and experience in deep learning, TensorFlow, NLP and related frameworks
- Proficiency in at least one Database Management System (DBMS)
- Proven experience as a Machine Learning Engineer or similar role
- Understanding of data structures, data modeling and software architecture
- Ability to write robust code in Python and R
- Familiarity with machine learning frameworks (like Keras or PyTorch) and libraries (like scikit-learn)
- Excellent communication skills
- Ability to work in a team
- Outstanding analytical and problem-solving skills

Interested candidates can share their updated CV at manasi.deshmukh@wonderbiz.in
Posted 3 weeks ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
Dreaming big is at the core of AB InBev's DNA. It defines the company, its culture, heritage, and future. The future at AB InBev is about constant innovation and seeking new ways to enhance life's moments. They are on the lookout for individuals who embody passion, talent, and curiosity. AB InBev provides the necessary support and opportunities for individuals to unleash their full potential, fostering a collaborative environment where the combined strengths create unstoppable power. If you are ready to be a part of a team that dreams as big as you do, then AB InBev welcomes you. AB InBev GCC, established in 2014, serves as a strategic partner for Anheuser-Busch InBev. The center harnesses the potential of data and analytics to drive growth in crucial business functions including operations, finance, people, and technology. The teams at AB InBev are dedicated to transforming operations through technology and analytics. Are you someone who dreams big and is ready to make a difference? AB InBev is looking for you to join their team. **Job Title:** Senior ML Engineer **Location:** Bangalore **Reporting to:** Director Data Analytics **Purpose of the role:** AB InBev's Supply Analytics team is focused on developing innovative solutions that optimize brewery efficiency through data-driven insights. By leveraging advanced analytics and AI-driven solutions, the team aims to streamline processes, reduce waste, and enhance productivity. As a Senior ML Engineer, you will be tasked with overseeing the end-to-end deployment of machine learning models on edge devices. Your responsibilities will include model optimization, scaling complexities, containerization, and infrastructure management to ensure high availability and performance. **Key tasks & accountabilities:** - Lead the complete edge deployment lifecycle, from model training to deployment and monitoring on edge devices. - Develop and maintain a scalable Edge ML pipeline for real-time analytics at brewery sites.
- Optimize and containerize models using Portainer, Docker, and Azure Container Registry (ACR) for efficient execution in constrained edge environments. - Manage the GitHub repository with well-documented and modularized code for seamless deployments. - Establish robust CI/CD pipelines for continuous integration and deployment of models and services. - Implement logging, monitoring, and alerting for deployed models to ensure reliability and quick failure recovery. - Ensure compliance with security and governance best practices for data and model deployment in edge environments. - Document the thought process and create artifacts for sharing with business and engineering teams. - Review code quality and design developed by peers to ensure high quality and reproducible results. - Mentor junior team members and continuously upskill them. - Maintain basic developer hygiene practices. **Qualifications, Experience, Skills:** - Academic degree in Computer Science, Computer Applications, or a related engineering discipline. - 5+ years of experience in developing scalable and high-quality ML models. - Strong problem-solving skills with a proactive approach to identifying and resolving bottlenecks. - Proficiency in tools like pandas, NumPy, SciPy, scikit-learn, TensorFlow for machine learning. - Good understanding of statistical computing, parallel processing, and memory management in Python. - Experience in code versioning, modularized code base maintenance, and working in an Agile environment. - Proficiency in Docker, Kubernetes, Portainer, and container orchestration for edge computing. - Exposure to real-time analytics, edge AI deployments, DevOps practices, and infrastructure automation. - Contributions to Open Source Software or Stack Overflow are a plus. - Passion for beer and a strong commitment to dreaming big. At AB InBev, the future is built on the foundation of innovation, collaboration, and a shared passion for creating memorable experiences. 
If you are driven by the desire to make a difference and have a thirst for pushing boundaries, then AB InBev is the place where your aspirations can take flight.
Posted 3 weeks ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
hackajob is collaborating with J.P. Morgan to connect them with exceptional tech professionals for this role. We have an exciting and rewarding opportunity for you to take your software engineering career to the next level. As a Software Engineer III at JPMorgan Chase within the AI/ML Data Platform team, you serve as a seasoned member of an agile team to design and deliver trusted market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm’s business objectives. Job responsibilities Collaborate with business stakeholders, product teams, and technology teams to finalize software solutions aligned with strategic goals. Architect, design, and develop AI products for the core AI and Machine Learning team using generative AI, natural language processing, and other AI-ML technologies. Work alongside software developers and data scientists, and collaborate with product and development teams. Establish timelines for product features and communicate them to business stakeholders. Conduct data modeling for AI software solutions, determine data persistence strategies, and create data pipelines. Set coding standards for code repositories and perform code reviews. Oversee product deployments on public and private clouds, ensuring server costs are managed through monitoring and tuning Required Qualifications, Capabilities, And Skills Formal training or certification on software engineering concepts and 3+ years applied experience Extensive hands-on experience in system design, application development, testing, operational stability, and Agile SDLC. Proficiency in Python, Java, and JavaScript. Skilled in technologies such as FastAPI, Spring, Agent Building tools, and LLMs. 
Expertise in automation and continuous delivery methods, with a strong understanding of agile methodologies like CI/CD, Application Resiliency, and Security. Demonstrated proficiency in software applications and technical processes within disciplines like cloud, AI, ML, and mobile. In-depth knowledge of the financial services industry and IT systems, with experience in microservice design patterns, data structures, algorithms, and cloud services such as AWS and Terraform, and the ability to work in a global setup and interact with clients. Preferred Qualifications, Capabilities, And Skills Exposure to Python libraries such as pandas, SciPy and NumPy. Exposure to Python concurrency through the multiprocessing module would be advantageous. Exposure to grid computing concepts would be advantageous.
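The concurrency exposure this posting mentions can be sketched with the standard-library `concurrent.futures` Executor API. The sketch below fans a stand-in CPU-bound workload (summing squares) over a thread pool so it runs self-contained anywhere; `ProcessPoolExecutor` exposes the same `map` interface when true multiprocessing is needed, as the posting implies.

```python
# Hedged sketch: fanning work over a pool via the Executor API.
# ThreadPoolExecutor is used so the demo is self-contained;
# swap in ProcessPoolExecutor for genuinely CPU-bound workloads.
from concurrent.futures import ThreadPoolExecutor

def sum_of_squares(n: int) -> int:
    # stand-in workload: sum of k^2 for k in [0, n)
    return sum(k * k for k in range(n))

with ThreadPoolExecutor(max_workers=4) as pool:
    totals = list(pool.map(sum_of_squares, [10, 100, 1000]))
```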
Posted 3 weeks ago
0.0 - 3.0 years
0 - 1 Lacs
Chepauk, Chennai, Tamil Nadu
On-site
Position Title: AI Specialist - Impact-Based Forecasting

Due to the operational nature of this role, preference will be given to applicants who are currently based in Chennai, India and possess valid work authorization. RIMES is committed to diversity and equal opportunity in employment.

Open Period: 11 July 2025 – 10 August 2025

Background: The Regional Integrated Multi-Hazard Early Warning System for Africa and Asia (RIMES) is an international and intergovernmental institution, owned and governed by its Member States, for the generation, application, and communication of multi-hazard early warning information. RIMES was formed in the aftermath of the 2004 Indian Ocean tsunami, as a collective response by countries in Africa and Asia to establish a regional early warning system within a multi-hazard framework, to strengthen preparedness and response to trans-boundary hazards. RIMES was formally established on 30 April 2009 and registered with the United Nations on 1 July 2009. It operates from its regional early warning center located at the Asian Institute of Technology (AIT) campus in Pathumthani, Thailand.

Position Description: The AI Specialist – Impact-Based Forecasting designs and implements AI-based solutions to support predictive analytics and intelligent decision support across sectors (e.g., climate services, disaster management). The AI Specialist will play a central role in building robust data pipelines, integrating multi-source datasets, and enabling real-time data-driven decision-making by stakeholders. The role involves drawing from and contributing to multi-disciplinary datasets, and working closely with a multi-disciplinary team within RIMES to generate the IBF DSS, develop contingency plans, automate monitoring systems, contribute to Post-Disaster Needs Assessments (PDNA), and apply AI/ML techniques for risk reduction.
This position requires a strong understanding of meteorological, hydrological, vulnerability, and exposure patterns, and the ability to translate data into actionable insights for disaster preparedness and resilience planning. The position reports to the Meteorology and Disaster Risk Modeling Specialist and the India Regional Program Adviser, who oversee the AI Specialist's work (or as assigned by RIMES' institutional structure), in close coordination with the Systems Research and Development Specialist and the Project Manager.

Duty station: RIMES Project Office Chennai, India (or other locations as per project requirements).
Type of Contract: Full-time project-based contract.

Skills and Qualifications:
Education: Bachelor's or Master's degree in Computer Science, Data Science, Artificial Intelligence, or a related field.
Experience: Minimum of 3 years of experience in data engineering, analytics, or IT systems for disaster management, meteorology, or climate services. Experience in multi-stakeholder projects and facilitating capacity-building programs.

Knowledge, Skills and Abilities:
Machine learning fundamentals: deep understanding of various ML algorithms, including supervised, unsupervised, and reinforcement learning. This includes regression, classification, clustering, time series analysis, anomaly detection, etc.
Deep learning: proficiency with deep learning architectures (e.g., CNNs, RNNs, LSTMs, Transformers) and frameworks (TensorFlow, PyTorch, Keras). Ability to design, train, and optimize complex neural networks.
Strong programming skills in Python, with its extensive libraries (NumPy, Pandas, SciPy, Scikit-learn, Matplotlib, Seaborn, GeoPandas).
Familiarity with AI tools such as PyTorch, TensorFlow, Keras, MLflow, etc.
Data visualization: ability to create clear, compelling visualizations to communicate complex data and model outputs.
Familiarity with early warning systems, disaster risk frameworks, and sector-specific IBF requirements is a strong plus.
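The skills list pairs time-series analysis with anomaly detection. As a deliberately simple sketch (not RIMES's actual method; the threshold and readings are invented), a z-score filter flags observations that sit far from the series mean:

```python
import statistics

def zscore_anomalies(series, threshold=2.0):
    """Return indices of points more than `threshold` sample standard deviations from the mean."""
    mean = statistics.mean(series)
    std = statistics.stdev(series)
    return [i for i, x in enumerate(series) if abs(x - mean) / std > threshold]

# A toy rainfall-like series with one obvious spike at index 4.
readings = [10, 11, 9, 10, 50, 10]
print(zscore_anomalies(readings))  # [4]
```

Real hazard data would need a rolling window and a robust spread estimate (the spike itself inflates a global standard deviation), but the flag-what-deviates idea is the same.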
Proficiency in technical documentation and user training.

Personal Qualities: Excellent interpersonal skills; team-oriented work style; pleasant personality. Strong desire to learn and undertake new challenges. Creative problem-solver; willing to work hard. Analytical thinker with problem-solving skills. Strong attention to detail and ability to work under pressure. Self-motivated, adaptable, and capable of working in multicultural and multidisciplinary environments. Strong communication skills and the ability to coordinate with stakeholders.

Major Duties and Responsibilities:

Impact-Based Forecasting
Collaborate with other members of the IT team, meteorologists, hydrologists, GIS specialists, and disaster risk management experts within RIMES to ensure the development of the IBF DSS.
Develop AI models (e.g., NLP, computer vision, reinforcement learning) and integrate them into applications and dashboards.
Ensure model explainability and ethical compliance.
Assist the RIMES team in applying AI/ML models to forecast hazards and project likely impacts based on exposure and vulnerability indices.
Work with forecasters and domain experts to automate the generation of impact-based products.
Ensure data security, backup, and compliance with data governance and interoperability standards.
Train national counterparts on the use and management of the AI systems, including analytics dashboards.
Collaborate with GIS experts, hydromet agencies, and emergency response teams for integrated service delivery.
Produce technical documentation on data architecture, models, and systems.

Capacity Building and Stakeholder Engagement
Facilitate training programs for team members and stakeholders, focusing on RIMES policies, regulations, and the use of forecasting tools.
Develop and implement a self-training plan to enhance personal expertise, obtaining a trainer certificate as required.
Prepare and implement training programs to enhance team capacity and submit training outcome reports.
Reporting
Prepare technical reports, progress updates, and outreach materials for stakeholders.
Maintain comprehensive project documentation, including strategies, milestones, and outcomes, as well as capacity-building workshop materials and training reports.

Other Responsibilities
Utilize AI skills to assist in system implementation plans and decision support system (DSS) development.
Assist in 24/7 operational readiness for client early warning systems such as SOCs, with backup support from RIMES Headquarters.
Undertake additional tasks as assigned by the immediate supervisor or HR manager based on recommendations from RIMES technical team members and organizational needs. The above responsibilities are illustrative and not exhaustive; undertake any other relevant tasks that may be needed from time to time.

Contract Duration: The contract will initially be for one year and may be extended based on the satisfactory completion of a 180-day probationary period and subsequent annual performance reviews.

How to Apply: Interested candidates should send their application letter, resume, salary expectation, and 2 references to rimeshra@rimes.int by midnight of 10 August 2025, Bangkok time. Please state "AI Specialist – Impact-Based Forecasting: Your Name" in the subject line of the email. Only short-listed applicants will be contacted.

Ms. Dusadee Padungkul
Head, Department of Operational Support
Regional Integrated Multi-Hazard Early Warning System
AIT Campus, 58 Moo 9 Paholyothin Rd., Klong 1, Klong Luang, Pathumthani 12120, Thailand.

RIMES promotes diversity and inclusion in the workplace. Well-qualified applicants, particularly women, are encouraged to apply.
Job Type: Full-time
Pay: ₹50,000.00 - ₹100,000.00 per month
Schedule: Monday to Friday
Ability to commute/relocate: Chepauk, Chennai, Tamil Nadu: Reliably commute or planning to relocate before starting work (Preferred)
Application Question(s):
Kindly specify your salary expectation per month.
Do you have any experience or interest in working with international or non-profit organizations? Please explain.
Education: Bachelor's (Required)
Experience:
Working with an international organization: 1 year (Preferred)
Data engineering: 3 years (Required)
Data analytics: 3 years (Required)
Disaster management: 3 years (Preferred)
Language: English (Required)
Location: Chepauk, Chennai, Tamil Nadu (Required)
Posted 3 weeks ago
6.0 years
3 - 8 Lacs
Gurgaon
On-site
DESCRIPTION
We're on a journey to build something new: a greenfield project! Come join our team and build new discovery and shopping products that connect customers with their vehicle of choice. We're looking for a talented Senior Applied Scientist to join our team of product managers, designers, and engineers to design and build innovative automotive-shopping experiences for our customers. This is a great opportunity for an experienced engineer to design and implement the technology for a new Amazon business. We are looking for an Applied Scientist to design, implement, and deliver end-to-end solutions. We are seeking a passionate, hands-on, experienced, and seasoned Senior Applied Scientist who will be deep in code and algorithms; who is technically strong in building scalable computer vision machine learning systems across item understanding, pose estimation, class-imbalanced classifiers, identification, and segmentation. You will drive ideas to products using paradigms such as deep learning, semi-supervised learning, and dynamic learning. As a Senior Applied Scientist, you will also help lead and mentor our team of applied scientists and engineers. You will take on complex customer problems, distill customer requirements, and then deliver solutions that either leverage existing academic and industrial research or utilize your own out-of-the-box but pragmatic thinking. In addition to coming up with novel solutions and prototypes, you will directly contribute to implementation while you lead. A successful candidate has excellent technical depth, scientific vision, project management skills, great communication skills, and a drive to achieve results in a unified team environment. You should enjoy the process of solving real-world problems that, quite frankly, haven't been solved at scale anywhere before.
Along the way, we guarantee you'll get opportunities to be a bold disruptor, prolific innovator, and a reputed problem solver: someone who truly enables AI and robotics to significantly impact the lives of millions of consumers.

Key job responsibilities
Architect, design, and implement machine learning models for vision systems on robotic platforms.
Optimize, deploy, and support ML models on the edge at scale.
Influence the team's strategy and contribute to long-term vision and roadmap.
Work with stakeholders across science and operations teams to iterate on design and implementation.
Maintain high standards by participating in reviews, designing for fault tolerance and operational excellence, and creating mechanisms for continuous improvement.
Prototype and test concepts or features, both through simulation and emulators and with live robotic equipment.
Work directly with customers and partners to test prototypes and incorporate feedback.
Mentor other engineering team members.

A day in the life
6+ years of experience building machine learning models for retail applications.
PhD, or Master's degree and 6+ years of applied research experience.
Experience programming in Java, C++, Python, or a related language.
Experience with neural deep learning methods and machine learning.
Demonstrated expertise in computer vision and machine learning techniques.

BASIC QUALIFICATIONS
3+ years of experience building machine learning models for business applications.
PhD, or Master's degree and 6+ years of applied research experience.
Experience programming in Java, C++, Python, or a related language.
Experience with neural deep learning methods and machine learning.

PREFERRED QUALIFICATIONS
Experience with modeling tools such as R, scikit-learn, Spark MLlib, MXNet, TensorFlow, NumPy, SciPy, etc.
Experience with large-scale distributed systems such as Hadoop, Spark, etc.

Our inclusive culture empowers Amazonians to deliver the best results for our customers.
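The posting calls out class-imbalanced classifiers. One common remedy is to reweight classes inversely to their frequency; the sketch below (a generic illustration, not Amazon's method) mirrors the familiar "balanced" heuristic, weight = n_samples / (n_classes × class_count), using only the standard library:

```python
from collections import Counter

def balanced_class_weights(labels):
    """Inverse-frequency class weights: n_samples / (n_classes * count(class))."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {cls: n / (k * c) for cls, c in counts.items()}

# 8 negatives vs. 2 positives: the minority class gets a 4x larger weight.
labels = [0] * 8 + [1] * 2
print(balanced_class_weights(labels))  # {0: 0.625, 1: 2.5}
```

These weights would then scale each example's contribution to the loss, so rare classes are not drowned out during training.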
If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Job details IND, HR, Gurugram Machine Learning Science
Posted 3 weeks ago
5.0 years
0 Lacs
Thane, Maharashtra, India
On-site
Position: Senior Data Scientist
Location: Thane West, Mumbai
Experience: 5 to 12 Years
CTC: Up to ₹22 LPA

About the Role: We are looking for a passionate and highly skilled Senior Data Scientist / Machine Learning Specialist to join our dynamic team. This individual will play a key role in designing, building, and deploying innovative machine learning systems and contributing to product development initiatives across cloud and edge computing environments.

Key Responsibilities:
Analyze and solve complex problems by proposing, designing, and delivering scalable technical solutions.
Study and transform data science prototypes into production-ready systems.
Design and develop machine learning applications and systems.
Research and implement suitable ML algorithms and tools.
Select appropriate datasets and data representation techniques.
Conduct ML tests, experiments, and statistical analysis to refine model performance.
Train and retrain systems as needed.
Extend and customize existing ML libraries and frameworks for specific use cases.
Collaborate closely with cross-functional teams for product development initiatives, including work on distributed systems, fault-tolerant architecture, containerization, etc.

Required Technical Skills:
Strong programming skills (Python preferred).
Solid understanding of Data Structures and Algorithms.
Proficiency in R, TensorFlow, SciPy, and ML frameworks such as Keras, PyTorch, or scikit-learn.
Experience with Generative AI (GenAI) implementation in at least one project.
Working knowledge of LLM tools and techniques.
Expertise in Cloud Services: AWS, GCP, or Azure.
In-depth knowledge of machine learning concepts, statistics, probability, and applied mathematics.
Familiarity with full-stack development concepts is a plus.
Experience with one or more DBMS platforms.
Excellent analytical and problem-solving skills.
Strong communication skills and the ability to work well in a team.
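The responsibilities above cover turning prototypes into production systems and refining models through tests and statistical analysis. As a minimal, hypothetical illustration of that fit-then-evaluate loop (the numbers are made up), simple linear regression for one feature reduces to a closed form, with a metric a retraining job could monitor:

```python
def fit_line(xs, ys):
    """Closed-form simple linear regression: slope and intercept minimizing squared error."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

def mse(xs, ys, slope, intercept):
    """Mean squared error of the fitted line; the kind of metric a retraining loop would track."""
    return sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys)) / len(xs)

xs, ys = [1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8]
slope, intercept = fit_line(xs, ys)
print(round(slope, 2), round(intercept, 2), round(mse(xs, ys, slope, intercept), 4))
```

In practice the fit would come from a library, but a production system still needs exactly this separation: a training step that produces parameters and an evaluation step that decides whether to retrain.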
Who Should Apply: Candidates with notice periods of 15 days or less only (strict requirement). Local candidates preferred — must be located between Dadar and Vashi. Let’s Connect! Send your updated resume to: sayaji@expediteinformatics.com Call us at: +91 96655 66357
Posted 3 weeks ago
3.0 - 5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
This is a key position supporting a client organization with strong analytics and data science capabilities. There is significant revenue, and there are future opportunities, associated with this role.

Job Description:
Develop and maintain data tables (management, extraction, harmonizing, etc.) using GCP/SQL/Snowflake. This involves designing, implementing, and writing optimized code, maintaining complex SQL queries to extract, transform, and load (ETL) data from various tables/sources, and ensuring data integrity and accuracy throughout the data pipeline process.

Create and manage data visualizations using Tableau/Power BI. This involves designing and developing interactive dashboards and reports, ensuring visualizations are user-friendly, insightful, and aligned with business requirements, and regularly updating and maintaining dashboards to reflect the latest data and insights.

Generate insights and reports to support business decision-making. This includes analyzing data trends and patterns to provide actionable insights, preparing comprehensive reports that summarize key findings and recommendations, and presenting data-driven insights to stakeholders to inform strategic decisions.

Handle ad-hoc data requests and provide timely solutions. This involves responding to urgent data requests from various departments, quickly gathering, analyzing, and delivering accurate data to meet immediate business needs, and ensuring ad-hoc solutions are scalable and reusable for future requests.

Collaborate with stakeholders to understand and solve open-ended questions. This includes engaging with business users to identify their data needs and challenges, working closely with cross-functional teams to develop solutions for complex, open-ended problems, and translating business questions into analytical tasks to deliver meaningful results.

Design and create wireframes and mockups for data visualization projects.
This involves developing wireframes and mockups to plan and communicate visualization ideas, collaborating with stakeholders to refine and finalize visualization designs, and ensuring that wireframes and mockups align with user requirements and best practices.

Communicate findings and insights effectively to both technical and non-technical audiences. This includes preparing clear and concise presentations to share insights with diverse audiences, tailoring communication styles to suit the technical proficiency of the audience, and using storytelling techniques to make data insights more engaging and understandable.

Perform data manipulation and analysis using Python. This includes utilizing Python libraries such as Pandas, NumPy, and SciPy for data cleaning, transformation, and analysis, developing scripts and automation tools to streamline data processing tasks, and conducting statistical analysis to generate insights from large datasets.

Implement basic machine learning models using Python. This involves developing and applying basic machine learning models to enhance data analysis, using libraries such as scikit-learn and TensorFlow for model development and evaluation, and interpreting and communicating the results of machine learning models to stakeholders.

Automate data processes using Python. This includes creating automation scripts to streamline repetitive data tasks, implementing scheduling and monitoring of automated processes to ensure reliability, and continuously improving automation workflows to increase efficiency.

Requirements:
3 to 5 years of experience in data analysis, reporting, and visualization. This includes a proven track record of working on data projects and delivering impactful results, and experience in a similar role within a fast-paced environment.
Proficiency in GCP/SQL/Snowflake/Python for data manipulation.
This includes strong knowledge of GCP/SQL/Snowflake services and tools, advanced SQL skills for complex query writing and optimization, and expertise in Python for data analysis and automation.

Strong experience with Tableau/Power BI/Looker Studio for data visualization. This includes a demonstrated ability to create compelling and informative dashboards, and familiarity with best practices in data visualization and user experience design.

Excellent communication skills, with the ability to articulate complex information clearly. This includes strong written and verbal communication skills, and the ability to explain technical concepts to non-technical stakeholders.

Proven ability to solve open-ended questions and handle ad-hoc requests. This includes creative problem-solving skills, a proactive approach to challenges, and flexibility to adapt to changing priorities and urgent requests.

Strong problem-solving skills and attention to detail. This includes a keen eye for detail and accuracy in data analysis and reporting, and the ability to identify and resolve data quality issues.

Experience in creating wireframes and mockups. This includes proficiency in design tools and effectively translating ideas into visual representations.

Ability to work independently and as part of a team. This includes being self-motivated, able to manage multiple tasks simultaneously, and having a collaborative mindset and willingness to support team members.

Location: Bangalore
Brand: Merkle
Time Type: Full time
Contract Type: Permanent
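The role above centers on SQL-based extract-transform-load work plus Python automation. A toy, self-contained sketch of that pattern (the `sales` table and column names are invented for illustration) using the standard library's sqlite3 in place of a warehouse:

```python
import sqlite3

# Extract: an in-memory source table standing in for a warehouse extract.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("north", 120.0), ("south", 80.0), ("north", 40.0)])

# Transform: aggregate in SQL rather than in application code.
rows = conn.execute(
    "SELECT region, SUM(amount) AS total FROM sales GROUP BY region ORDER BY region"
).fetchall()

# Load: write the harmonized result into a reporting table.
conn.execute("CREATE TABLE sales_by_region (region TEXT, total REAL)")
conn.executemany("INSERT INTO sales_by_region VALUES (?, ?)", rows)
conn.commit()

print(rows)  # [('north', 160.0), ('south', 80.0)]
```

The same extract/transform/load shape carries over when sqlite3 is swapped for a GCP, Snowflake, or BigQuery client and the script is run on a scheduler.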
Posted 3 weeks ago
2.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Company Description
Syngenta is one of the world's leading agriculture innovation companies (part of Syngenta Group), dedicated to improving global food security by enabling millions of farmers to make better use of available resources. Through world-class science and innovative crop solutions, our 60,000 people in over 100 countries are working to transform how crops are grown. We are committed to rescuing land from degradation, enhancing biodiversity and revitalizing rural communities. A diverse workforce and an inclusive workplace environment are enablers of our ambition to be the most collaborative and trusted team in agriculture. Our employees reflect the diversity of our customers, the markets where we operate and the communities which we serve. No matter what your position, you will have a vital role in safely feeding the world and taking care of our planet. To learn more visit: www.syngenta.com

Job Description

Purpose
Be an integral part of the P&S Data Science team that applies technical expertise in data management, data science, machine learning, artificial intelligence, and automation to design, build, deploy, and maintain solutions across multiple countries. Work with project teams and SMEs to understand business requirements and develop appropriate data and AI/ML solutions. Contribute explorative, predictive, or prescriptive models, utilizing optimization, simulation, and machine learning techniques, to existing and new projects. Work independently to build applications for data collection and processing, exploration and visualization, analysis, regression, classification, and generation as required by the project and business teams.

Accountabilities
Develop and implement complex statistical models, machine learning algorithms, and data mining techniques to extract insights from large datasets.
Lead data science projects from conception to completion, defining scope, methodology, and deliverables.
Collaborate with business leaders to translate data insights into actionable strategies and recommendations.
Guide and mentor junior data scientists, fostering their professional development and technical skills.
Contribute to the design and improvement of data architecture, pipelines, and storage solutions.
Stay current with the latest advancements in data science and introduce new techniques or technologies to the organization.
Establish and maintain standards for data quality, documentation, and ethical use of data.
Present complex findings to both technical and non-technical audiences, including executive leadership.
Address complex business challenges using data-driven approaches and creative solutions.
Improve the efficiency and scalability of data processing and model deployment.
Develop deep knowledge of Syngenta's P&S operations to better contextualize data insights.
Own the design, build, and deploy process, including collaboration with users and multi-disciplinary teams to fulfil the user, business and technical requirements.

Qualifications

Required Knowledge & Technical Skills
Bachelor's degree in Computer Science, Data Science, Engineering, Mathematics, or a related discipline (or equivalent practical experience).
Understanding of ETL processes and data pipeline design.
Required development skills:
Data science – TensorFlow, Scikit-learn, SciPy, NumPy, Pandas, XGBoost, Keras, etc.
Programming – Python with PyTorch and TensorFlow.
Data – SQL, EDA, descriptive and predictive analysis, visualization.
Preferred additional skills:
Web services – RESTful APIs and API testing, JSON, etc.
Web frameworks – JavaScript, Flask, React, Node.js, Vue, Django, or others.
Tools – Git, npm, pip, Heroku, or other tools.
Knowledge of and hands-on experience with supervised and unsupervised ML using logistic/multivariate regression, gradient boosting, decision trees, neural networks, random forests, support vector machines, naive Bayes, time series, optimization, etc.
Preference for proven experience in adapting algorithms to the required models: regression, decision trees, random forests, LLMs.
Preference for experience in the production and supply domain: production planning, supply chain, logistics, track and trace, CRM, etc.
Preference for experience in deep learning model development in agriculture, supply chain or related domains.
Documentation of APIs, models, and operational manuals (Markdown, etc.).
Must be able to work on end-to-end activities spanning design, development and deployment.

Required Experience
Previous internship, placement, or project experience in data engineering, data science, software development, or a related field.
At least 2 years of experience building data science projects using AI/ML models.
Exposure to cloud platforms such as AWS, Azure, Google Cloud, or Databricks (preferred but not essential).

Additional Information
Note: Syngenta is an Equal Opportunity Employer and does not discriminate in recruitment, hiring, training, promotion or any other employment practices for reasons of race, color, religion, gender, national origin, age, sexual orientation, gender identity, marital or veteran status, disability, or any other legally protected status.

Follow us on Twitter & LinkedIn:
https://twitter.com/SyngentaAPAC
https://www.linkedin.com/company/syngenta/
India page: https://www.linkedin.com/company/70489427/admin/
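Among the techniques the Syngenta posting lists is time-series modeling. As a deliberately simple sketch (the smoothing factor and demand numbers are arbitrary, not a Syngenta method), single exponential smoothing yields a one-step-ahead forecast for, say, supply planning:

```python
def exp_smooth(series, alpha=0.5):
    """Single exponential smoothing: level_t = alpha*x_t + (1-alpha)*level_{t-1}.

    Returns the smoothed level after the last observation, which also serves
    as the one-step-ahead forecast.
    """
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

demand = [100, 110, 105, 115]
print(exp_smooth(demand))  # 110.0, the forecast for the next period
```

Higher `alpha` weights recent observations more heavily; production planners typically tune it (or use Holt-Winters variants) against held-out periods.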
Posted 3 weeks ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Company Size: Large-scale / Global
Experience Required: 2 - 5 years
Working Days: 5 days/week
Office Location: Karnataka, Bengaluru

Role & Responsibilities
As an AI Data Application Specialist, you will be part of a growing AI Center of Excellence, working to integrate AI into data-driven decision-making across agencies. This role involves analyzing existing data architectures, developing AI-powered prototypes, and consulting on AI solutions, bridging the gap between technical execution and business impact. The focus is on strategic system design and AI consulting, rather than day-to-day coding. You will work collaboratively to develop AI-powered workflows, optimize data pipelines, and explore generative AI applications to enhance business intelligence.

AI Strategy & Consulting for Data System Optimization: Advise on how AI can enhance data systems within different business and technology environments. Assess current data architectures and identify opportunities for AI integration. Design sustainable AI-driven data solutions, considering scalability, maintenance, and real-world business impact. Communicate the advantages and limitations of AI-powered data workflows to key stakeholders.

AI-Powered Data Pipeline Design & Prototyping: Develop AI-supported data systems, integrating RESTful APIs, relational databases, and cloud storage solutions. Use Python, NumPy, Pandas, Scikit-learn, and SciPy to analyze data and build AI-enabled solutions. Design and implement machine learning models to solve key business challenges. Collaborate with cross-functional teams, utilizing version control and code review practices.

Research & AI Innovation for Business Solutions: Stay informed on Generative AI, Large Language Models (LLMs), and multimodal AI developments. Explore new AI-driven business applications, providing recommendations on AI adoption strategies. Build AI prototypes to assess feasibility, efficiency, and long-term viability.
Ideal Candidate
2+ years of experience in data science, machine learning, or applied statistics.
Strong consulting and communication skills, with the ability to simplify complex AI concepts for non-technical audiences.
Fluent in English (mandatory); comfortable engaging with global teams and stakeholders.
Proficiency in Python and data analysis tools (Pandas, NumPy, Scikit-learn, etc.).
Experience working with data pipelines, APIs, and cloud platforms (AWS, GCP, or Azure).
Familiarity with Generative AI tools and LLMs (e.g., ChatGPT, Gemini, Claude) and their integration into data systems.

Preferred Skills
Experience in technical consulting or AI solution design.
Understanding of AI ethics, model governance, and scalable AI systems.
Knowledge of data visualization tools (Power BI, Tableau, Looker) for AI-driven analytics.

Why Join This Role?
AI-Driven Impact – Play a critical role in integrating AI into real-world data solutions.
Innovation-Focused – Work with cutting-edge AI technologies, including LLMs and multimodal AI.
Global Exposure – Collaborate with international teams and contribute to enterprise-wide AI transformation.
Strategic Influence – Be at the forefront of business process optimization using AI.

Skills: Python, Pandas, SciPy, Scikit-learn, NumPy, machine learning, generative AI tools, data visualization tools (Power BI, Tableau, Looker), RESTful APIs, data pipelines, data systems, data solutions, cloud platforms (AWS, GCP, Azure), Large Language Models (LLMs), integration, design
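The role above emphasizes designing maintainable data pipelines over ad-hoc scripts. One lightweight design, sketched here with invented step names (`clean`, `enrich`) and toy data, is to compose a pipeline from small, independently testable transform functions:

```python
from functools import reduce

def clean(rows):
    """Drop records missing a value (hypothetical quality rule)."""
    return [r for r in rows if r.get("value") is not None]

def enrich(rows):
    """Add a derived field (hypothetical transformation)."""
    return [{**r, "value_sq": r["value"] ** 2} for r in rows]

def run_pipeline(rows, steps):
    """Apply each step in order; every step takes and returns a list of dicts."""
    return reduce(lambda acc, step: step(acc), steps, rows)

raw = [{"value": 2}, {"value": None}, {"value": 3}]
out = run_pipeline(raw, [clean, enrich])
print(out)  # [{'value': 2, 'value_sq': 4}, {'value': 3, 'value_sq': 9}]
```

Because each step has the same list-in/list-out signature, steps can be reordered, unit-tested in isolation, or swapped for calls into an orchestrator without rewriting the pipeline runner.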
Posted 3 weeks ago
5.0 - 7.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
dunnhumby is the global leader in Customer Data Science, empowering businesses everywhere to compete and thrive in the modern data-driven economy. We always put the Customer First. Our mission: to enable businesses to grow and reimagine themselves by becoming advocates and champions for their Customers. With deep heritage and expertise in retail – one of the world's most competitive markets, with a deluge of multi-dimensional data – dunnhumby today enables businesses all over the world, across industries, to be Customer First. dunnhumby employs nearly 2,500 experts in offices throughout Europe, Asia, Africa, and the Americas, working for transformative, iconic brands such as Tesco, Coca-Cola, Meijer, Procter & Gamble and Metro.

We're looking for a talented Senior Research Data Scientist who expects more from their career. You will be at the forefront of dunnhumby's research data science team, where you will translate complex business problems into data science problems and solve them with scalable, state-of-the-art AI algorithms. Joining our research data science team, you will help identify new opportunities within the data science space for future dunnhumby solutions. Learn from experts and grow your career in our organisation.

What We Expect From You
Degree or equivalent in a statistical or mathematical subject.
5 to 7 years of experience in data science.
Good understanding of the statistical and mathematical methodologies used – especially forecasting, regression, linear models, time series, hypothesis tests and optimisation.
Ability to prototype solutions using Python and Spark to facilitate development and testing of algorithms on large data sets.
Good understanding of machine learning techniques, with applications to classification, prediction and clustering.
Ability to apply and extend experimental design methodology to conduct rigorous measurement of treatment effects.
Good working knowledge of databases, including SQL, relational and non-relational data models.
Experience using the following algorithms at scale: ANCOVA, linear models with regularization, clustering, random forests, XGBoost.
Research into the latest machine learning approaches.
Good grasp of object-oriented programming.
Stakeholder management.
Ability to quickly learn open-source statistics and machine learning packages – Pandas, SciPy, scikit-learn, TensorFlow.
Experience in developing solutions related to Market Mix Modelling, Causal AI and Generative AI.

What You Can Expect From Us
We won't just meet your expectations. We'll defy them. So you'll enjoy the comprehensive rewards package you'd expect from a leading technology company. But also, a degree of personal flexibility you might not expect. Plus, thoughtful perks, like flexible working hours and your birthday off. You'll also benefit from an investment in cutting-edge technology that reflects our global ambition. But with a nimble, small-business feel that gives you the freedom to play, experiment and learn. And we don't just talk about diversity and inclusion. We live it every day – with thriving networks including dh Gender Equality Network, dh Proud, dh Family, dh One, dh Enabled and dh Thrive as the living proof. We want everyone to have the opportunity to shine and perform at their best throughout our recruitment process.
Please let us know how we can make this process work best for you.

Our Approach to Flexible Working

At dunnhumby, we value and respect difference and are committed to building an inclusive culture by creating an environment where you can balance a successful career with your commitments and interests outside of work. We believe that you will do your best at work if you have a work/life balance. Some roles lend themselves to flexible options more than others, so if this is important to you, please raise it with your recruiter, as we are open to discussing agile working opportunities during the hiring process.

For further information about how we collect and use your personal information, please see our Privacy Notice.
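The experimental-design skill this posting asks for (rigorous measurement of treatment effects) can be sketched as a simple two-sample permutation test in pure Python; the uplift figures below are entirely hypothetical and only illustrate the mechanics.

```python
import random

def permutation_test(treatment, control, n_perm=10_000, seed=0):
    """Two-sample permutation test for a difference in means.

    Returns the observed mean difference and a two-sided p-value
    estimated by repeatedly shuffling the group labels.
    """
    rng = random.Random(seed)
    observed = sum(treatment) / len(treatment) - sum(control) / len(control)
    pooled = treatment + control
    n_t = len(treatment)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = sum(pooled[:n_t]) / n_t - sum(pooled[n_t:]) / (len(pooled) - n_t)
        if abs(diff) >= abs(observed):
            hits += 1
    return observed, hits / n_perm

# Hypothetical uplift data: spend per customer in test vs. control stores.
treatment = [12.1, 13.4, 11.8, 14.2, 12.9, 13.7]
control = [11.2, 11.9, 10.8, 12.0, 11.5, 11.1]
diff, p_value = permutation_test(treatment, control)
```

In practice this would be done with SciPy or a dedicated experimentation platform; the permutation approach is shown because it makes no distributional assumptions.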
Posted 3 weeks ago
5.0 years
7 - 8 Lacs
Hyderābād
On-site
Company Description

Experian is a global data and technology company, powering opportunities for people and businesses around the world. We help to redefine lending practices, uncover and prevent fraud, simplify healthcare, create marketing solutions, and gain deeper insights into the automotive market, all using our unique combination of data, analytics and software. We also assist millions of people to realize their financial goals and help them save time and money. We operate across a range of markets, from financial services to healthcare, automotive, agribusiness, insurance, and many more industry segments. We invest in people and new advanced technologies to unlock the power of data. As a FTSE 100 Index company listed on the London Stock Exchange (EXPN), we have a team of 22,500 people across 32 countries. Our corporate headquarters are in Dublin, Ireland. Learn more at experianplc.com.

Job Description

· 5+ years of experience with Linux, networking and security fundamentals, plus experience with the AWS cloud platform and infrastructure.
· 5+ years of experience working with infrastructure as code using Terraform or Ansible.
· Experience managing large Big Data clusters in production (at least one of Cloudera, Hortonworks, EMR).
· Solid work experience providing observability for Big Data platforms using tools like Prometheus, InfluxDB, Dynatrace, Grafana, Splunk, etc.
· Expert knowledge of the Hadoop Distributed File System (HDFS) and Hadoop YARN.
· Good knowledge of Hadoop file formats such as ORC, Parquet, Avro, etc.
· Experience with Hive (Tez), Hive LLAP, Presto and Spark compute engines; able to understand query plans and optimize performance for complex SQL queries on Hive and Spark.
· Experience supporting Spark with Python (PySpark) and R (sparklyr, SparkR).
· Solid professional coding experience with at least one scripting language – Shell, Python, etc.
· Experience working with Data Analysts and Data Scientists, and with at least one related analytical application such as SAS, R-Studio, JupyterHub, H2O, etc.
· Able to read and understand code (Java, Python, R, Scala), with expertise in at least one scripting language such as Python or Shell.

You will be reporting to a Senior Manager or a Director.

Preferred skills:

· Experience building, deploying, and monitoring distributed apps using container systems (Docker) and container orchestration (Kubernetes, EKS)
· Experience with workflow management tools like Airflow, Oozie, etc.
· Knowledge of analytical libraries like Pandas, NumPy, SciPy, PyTorch, etc.
· Implementation history with Packer, Chef, Jenkins or similar tooling.
· Prior working knowledge of Active Directory and Windows OS based VDI platforms like Citrix, AWS Workspaces, etc.

Qualifications

Bachelor of Engineering or equivalent bachelor's degree.

Additional Information

Our uniqueness is that we truly celebrate yours. Experian's culture and people are important differentiators. We take our people agenda very seriously and focus on what truly matters: DEI, work/life balance, development, authenticity, engagement, collaboration, wellness, reward & recognition, volunteering... the list goes on. Experian's strong people-first approach is award winning: Great Place To Work™ in 24 countries, FORTUNE Best Companies to Work For, and Glassdoor Best Places to Work (globally 4.4 stars), to name a few. Check out Experian Life on social media or our Careers Site to understand why.

Experian is proud to be an Equal Opportunity and Affirmative Action employer. Innovation is a critical part of Experian's DNA and practices, and our diverse workforce drives our success. Everyone can succeed at Experian and bring their whole self to work, irrespective of their gender, ethnicity, religion, color, sexuality, physical ability or age.
If you have a disability or special need that requires accommodation, please let us know at the earliest opportunity.

Experian Careers - Creating a better tomorrow together

Benefits

Experian cares for employees' work/life balance, health, safety and wellbeing. In support of this endeavor, we offer best-in-class family well-being benefits, enhanced medical benefits and paid time off.

#LI-Onsite

Find out what it's like to work for Experian by clicking here
Posted 3 weeks ago
2.0 years
0 Lacs
Kochi, Kerala, India
On-site
Job Title -
Management Level:
Location: Kochi, Coimbatore, Trivandrum
Must have skills: Python/Scala, PySpark/PyTorch
Good to have skills: Redshift

Job Summary

You’ll capture user requirements and translate them into business and digitally enabled solutions across a range of industries.

Roles and Responsibilities

· Designing, developing, optimizing, and maintaining data pipelines that adhere to ETL principles and business goals.
· Solving complex data problems to deliver insights that help our business achieve its goals.
· Sourcing data (structured and unstructured) from various touchpoints, then formatting and organizing it into an analyzable format.
· Creating data products for analytics team members to improve productivity.
· Calling AI services like vision, translation, etc. to generate outcomes that can be used in further steps along the pipeline.
· Fostering a culture of sharing, re-use, design and operational efficiency of data and analytical solutions.
· Preparing data to create a unified database and building tracking solutions that ensure data quality.
· Creating production-grade analytical assets deployed using the guiding principles of CI/CD.

Professional and Technical Skills

· Expert in at least two of Python, Scala, PySpark, PyTorch, JavaScript.
· Extensive experience in data analysis (big data – Apache Spark environments), data libraries (e.g. Pandas, SciPy, TensorFlow, Keras, etc.), and SQL, with 2-3 years of hands-on experience working on these technologies.
· Experience in one of the many BI tools such as Tableau, Power BI, Looker.
· Good working knowledge of key concepts in data analytics, such as dimensional modeling, ETL, reporting/dashboarding, data governance, dealing with structured and unstructured data, and corresponding infrastructure needs.
· Worked extensively in Microsoft Azure (ADF, Function Apps, ADLS, Azure SQL), AWS (Lambda, Glue, S3), Databricks analytical platforms/tools, and Snowflake Cloud Data Warehouse.
Additional Information

· Experience working in cloud data warehouses like Redshift or Synapse.
· Certification in any one of the following or equivalent:
· AWS - AWS Certified Data Analytics - Specialty
· Azure - Microsoft Certified: Azure Data Scientist Associate
· Snowflake - SnowPro Core / Data Engineer
· Databricks - Data Engineering

About Our Company | Accenture

Experience: 3.5-5 years of experience is required
Educational Qualification: Graduation
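The pipeline responsibilities above (source data from touchpoints, format it into an analyzable form, and build data products) boil down to an extract-transform-load flow. A minimal stdlib-only sketch, where the table name, column names, and sample rows are all hypothetical:

```python
import csv
import io
import sqlite3

# Extract: hypothetical raw touchpoint data arriving as CSV.
raw = io.StringIO("user_id,channel,amount\n1,web,10.5\n2,app,7.25\n1,web,3.0\n")

# Transform: parse rows and cast fields to their proper types.
rows = [(int(r["user_id"]), r["channel"], float(r["amount"]))
        for r in csv.DictReader(raw)]

# Load: write into an analyzable store (an in-memory SQLite database here;
# a real pipeline would target a warehouse such as Redshift or Synapse).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE touchpoints (user_id INT, channel TEXT, amount REAL)")
conn.executemany("INSERT INTO touchpoints VALUES (?, ?, ?)", rows)

# A downstream "data product": total spend per user.
spend = dict(conn.execute(
    "SELECT user_id, SUM(amount) FROM touchpoints GROUP BY user_id"))
```

Production pipelines would add orchestration, schema validation, and incremental loads on top of this skeleton, but the E-T-L shape stays the same.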
Posted 3 weeks ago
0 years
5 - 11 Lacs
Thiruvananthapuram
On-site
Required Skills

We are looking for an experienced AI Engineer to join our team. The ideal candidate will have a strong background in designing, deploying, and maintaining advanced AI/ML models, with expertise in Natural Language Processing (NLP), Computer Vision, and architectures like Transformers and Diffusion Models. You will play a key role in developing AI-powered solutions, optimizing performance, and deploying and managing models in production environments.

Key Responsibilities

1. AI Model Development and Optimization: Design, train, and fine-tune AI models for NLP, Computer Vision, and other domains using frameworks like TensorFlow and PyTorch. Work on advanced architectures, including Transformer-based models (e.g., BERT, GPT, T5) for NLP tasks and CNN-based models (e.g., YOLO, VGG, ResNet) for Computer Vision applications. Utilize techniques like PEFT (Parameter-Efficient Fine-Tuning) and SFT (Supervised Fine-Tuning) to optimize models for specific tasks. Build and train RLHF (Reinforcement Learning from Human Feedback) and RL-based models to align AI behavior with real-world objectives. Explore multimodal AI solutions combining text, vision, and audio using generative deep learning architectures.

2. Natural Language Processing (NLP): Develop and deploy NLP solutions, including language models, text generation, sentiment analysis, and text-to-speech systems. Leverage advanced Transformer architectures (e.g., BERT, GPT, T5) for NLP tasks.

3. AI Model Deployment and Frameworks: Deploy AI models using frameworks like vLLM, Docker, and MLflow in production-grade environments. Create robust data pipelines for training, testing, and inference workflows. Implement CI/CD pipelines for seamless integration and deployment of AI solutions.

4. Production Environment Management: Deploy, monitor, and manage AI models in production, ensuring performance, reliability, and scalability.
Set up monitoring systems using Prometheus to track metrics like latency, throughput, and model drift.

5. Data Engineering and Pipelines: Design and implement efficient data pipelines for preprocessing, cleaning, and transformation of large datasets. Integrate with cloud-based data storage and retrieval systems for seamless AI workflows.

6. Performance Monitoring and Optimization: Optimize AI model performance through hyperparameter tuning and algorithmic improvements. Monitor performance using tools like Prometheus, tracking key metrics (e.g., latency, accuracy, model drift, error rates).

7. Solution Design and Architecture: Collaborate with cross-functional teams to understand business requirements and translate them into scalable, efficient AI/ML solutions. Design end-to-end AI systems, including data pipelines, model training workflows, and deployment architectures, ensuring alignment with business objectives and technical constraints. Conduct feasibility studies and proofs of concept (PoCs) for emerging technologies to evaluate their applicability to specific use cases.

8. Stakeholder Engagement: Act as the technical point of contact for AI/ML projects, managing expectations and aligning deliverables with timelines. Participate in workshops, demos, and client discussions to showcase AI capabilities and align solutions with client needs.

Technical Skills

· Proficient in Python, with strong knowledge of libraries like NumPy, Pandas, SciPy, and Matplotlib for data manipulation and visualization.
· Expertise in TensorFlow, PyTorch, Scikit-learn, and Keras for building, training, and optimizing machine learning and deep learning models.
· Hands-on experience with Transformer libraries like Hugging Face Transformers, OpenAI APIs, and LangChain for NLP tasks.
· Practical knowledge of CNN architectures (e.g., YOLO, ResNet, VGG) and Vision Transformers (ViT) for Computer Vision applications.
· Proficiency in developing and deploying Diffusion Models like Stable Diffusion, SDXL, and other generative AI frameworks.
· Experience with RLHF (Reinforcement Learning from Human Feedback) and reinforcement learning algorithms for optimizing AI behaviors.
· Proficiency with Docker and Kubernetes for containerization and orchestration of AI workflows.
· Hands-on experience with MLOps tools such as MLflow for model tracking and CI/CD integration in AI pipelines.
· Expertise in setting up monitoring tools like Prometheus and Grafana to track model performance, latency, throughput, and drift.
· Knowledge of performance optimization techniques, such as quantization, pruning, and knowledge distillation, to improve model efficiency.
· Experience in building data pipelines for preprocessing, cleaning, and transforming large datasets using tools like Apache Airflow or Luigi.
· Familiarity with cloud-based storage systems (e.g., AWS S3, Google BigQuery) for efficient data handling in AI workflows.
· Strong understanding of cloud platforms (AWS, GCP, Azure) for deploying and scaling AI solutions.
· Knowledge of advanced search technologies such as Elasticsearch for indexing and querying large datasets.
· Familiarity with edge deployment frameworks and optimization for resource-constrained environments.

Qualifications

· Bachelor's or Master's degree in Data Science, Statistics, Mathematics, Computer Science, or a related field.

Experience: 2.5 to 5 years
Location: Trivandrum
Job Type: Full-time
Pay: ₹500,000.00 - ₹1,100,000.00 per year
Benefits: Health insurance, Provident Fund
Location Type: In-person
Schedule: Day shift
Work Location: In person
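Of the optimization techniques the posting names (quantization, pruning, knowledge distillation), post-training quantization is the easiest to sketch. The toy below shows the scale/zero-point idea behind int8 affine quantization in pure Python; real deployments would use framework tooling (e.g. PyTorch or TensorFlow quantization APIs), and the weight values here are made up.

```python
def quantize(weights, n_bits=8):
    """Affine (asymmetric) quantization of floats to signed n-bit ints."""
    qmin, qmax = -(2 ** (n_bits - 1)), 2 ** (n_bits - 1) - 1
    w_min, w_max = min(weights), max(weights)
    scale = (w_max - w_min) / (qmax - qmin)          # float units per int step
    zero_point = round(qmin - w_min / scale)          # int that maps back to ~0.0
    q = [max(qmin, min(qmax, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float weights from the quantized ints."""
    return [(qi - zero_point) * scale for qi in q]

# Hypothetical weight values from some layer.
weights = [-0.52, 0.0, 0.31, 1.27]
q, scale, zp = quantize(weights)
recovered = dequantize(q, scale, zp)
max_err = max(abs(w - r) for w, r in zip(weights, recovered))
```

The round-trip error stays on the order of one quantization step (`scale`), which is why int8 storage can cut model size roughly 4x versus float32 with little accuracy loss.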
Posted 3 weeks ago
10.0 - 12.0 years
0 Lacs
India
Remote
What We Do

At ClearTrail, work is more than ‘just a job’. Our calling is to develop solutions that empower those dedicated to keeping their people, places and communities safe. For over 23 years, law enforcement & federal agencies across the globe have trusted ClearTrail as their committed partner in safeguarding nations & enriching lives. We are envisioning the future of intelligence gathering by developing artificial intelligence and machine learning based lawful interception & communication analytics solutions that solve the world’s most challenging problems.

Role Summary

Lead the development of advanced AI/computer vision capabilities for analyzing and fusing imagery from drones, satellites, and ground-based video feeds.

Roles and Responsibilities

· Design and lead implementation of computer vision models for aerial (drone/satellite) and terrestrial video data.
· Architect data fusion pipelines across multiple visual and spatiotemporal modalities.
· Guide research into cross-resolution image fusion, change detection, and anomaly identification.
· Collaborate with platform, data, and MLOps teams to productionize models.

Must-Have Skills

· Expert in deep learning frameworks: PyTorch, TensorFlow, Keras
· Proficient with vision/model libraries: MMDetection, Detectron2, OpenCV
· Experience with multi-source data fusion, such as drone-satellite alignment, optical/LiDAR fusion, or time series modeling

Good to Have

· Familiarity with remote sensing data (e.g., Sentinel, Landsat, PlanetScope)

Qualification

Education: B.E./B.Tech/MCA/M.Tech
Joining Location: Indore
Experience: 10-12 Years
Job Types: Full-time, Permanent
Schedule: Day shift, Monday to Friday

Application Questions:
· As this is an onsite opportunity, are you okay to relocate to Indore?
· How many years of experience do you have using NumPy, Pandas, SciPy, PyTorch and TensorFlow?
· Do you have experience with satellite/drone geospatial fusion?
Work Location: In person Application Deadline: 01/08/2025 Expected Start Date: 04/08/2025
Posted 3 weeks ago
3.0 years
0 Lacs
Jaipur, Rajasthan, India
Remote
Tiger Analytics is a global AI and analytics consulting firm. With data and technology at the core of our solutions, our 4000+ tribe is solving problems that eventually impact the lives of millions globally. Our culture is modeled around expertise and respect with a team-first mindset. Headquartered in Silicon Valley, you’ll find our delivery centers across the globe and offices in multiple cities across India, the US, UK, Canada, and Singapore, including a substantial remote global workforce. We’re Great Place to Work-Certified™. Working at Tiger Analytics, you’ll be at the heart of an AI revolution. You’ll work with teams that push the boundaries of what is possible and build solutions that energize and inspire.

Work Location: The base location is Delhi/NCR; however, you will be required to work regularly in Jaipur during the initial period.

About the role: This pivotal role focuses on the end-to-end development, implementation, and ongoing monitoring of both application and behavioral scorecards within our dynamic retail banking division. While application scorecard development will be the primary area of focus and expertise required, you will also have scope to contribute to behavioral scorecard initiatives. The primary emphasis will be on our unsecured lending portfolio, including personal loans, overdrafts, and particularly credit cards. You will be instrumental in enhancing credit risk management capabilities, optimizing lending decisions, and driving profitable growth by leveraging advanced analytical techniques and robust statistical models. This role requires a deep understanding of the credit lifecycle, regulatory requirements, and the ability to translate complex data insights into actionable business strategies within the Indian banking context.
Key Responsibilities: End-to-End Scorecard Development (Application & Behavioral): Lead the design, development, and validation of new application scorecards and behavioral scorecards from scratch, specifically tailored for the Indian retail banking landscape and unsecured portfolios (personal loans, credit cards) across ETB and NTB Segments. Should have prior experience in this area. Utilize advanced statistical methodologies and machine learning techniques, leveraging Python for data manipulation, model building, and validation. Ensure robust model validation, back-testing, stress testing, and scenario analysis to ascertain model robustness, stability, and predictive power, adhering to RBI guidelines and internal governance. Cloud-Native Model Deployment & MLOps: Drive the deployment of developed scorecards into production environments on AWS, collaborating with engineering teams to integrate models into credit origination and decisioning systems. Implement and manage MLOps practices for continuous model monitoring, re-training, and version control within the AWS ecosystem. Data Strategy & Feature Engineering: Proactively identify, source, and analyze diverse datasets (e.g., internal bank data, credit bureau data like CIBIL, Experian, Equifax) to derive highly predictive features for scorecard development. Should have prior experience in this area. Address data quality challenges, ensuring data integrity and suitability for model inputs in an Indian banking context. Performance Monitoring & Optimization: Establish and maintain comprehensive model performance monitoring frameworks, including monthly/quarterly tracking of key performance indicators (KPIs) like Gini coefficient, KS statistic, and portfolio vintage analysis. Identify triggers for model recalibration or redevelopment based on performance degradation, regulatory changes, or evolving market dynamics. 
Required Qualifications, Capabilities and Skills: Experience: 3-10 years of hands-on experience in credit risk model development, with a strong focus on application scorecard development and significant exposure to behavioral scorecards, preferably within the Indian banking sector applying concepts including roll-rate analysis, swapset analysis, reject inferencing. Demonstrated prior experience in model development and deployment in AWS environments, understanding cloud-native MLOps principles. Proven track record in building and validating statistical models (e.g., logistic regression, GBDT, random forests) for credit risk. Education: Bachelor's or Master's degree in a quantitative discipline such as Mathematics, Statistics, Physics, Computer Science, Financial Engineering, or a related field Technical Skills: Exceptional hands-on expertise in Python (Pandas, NumPy, Scikit-learn, SciPy) for data manipulation, statistical modeling, and machine learning. Proficiency in SQL for data extraction and manipulation. Familiarity with AWS services relevant to data science and machine learning (e.g., S3, EC2, SageMaker, Lambda). Knowledge of SAS is a plus, but Python is the primary requirement. Analytical & Soft Skills: Deep understanding of the end-to-end lifecycle of application and behavioral scorecard development, from data sourcing to deployment and monitoring. Strong understanding of credit risk principles, the credit lifecycle, and regulatory frameworks pertinent to Indian banking (e.g., RBI guidelines on credit risk management, model risk management). Excellent analytical, problem-solving, and critical thinking skills. Ability to communicate complex technical concepts effectively to both technical and non-technical stakeholders.
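The monitoring KPIs this role names (Gini coefficient, KS statistic) can be computed directly from model scores and observed outcomes. In practice a library such as scikit-learn would be used; the stdlib-only sketch below, on made-up scores and labels, just makes the definitions concrete (it assumes no tied scores).

```python
def ks_and_gini(scores, labels):
    """KS statistic and Gini (= 2*AUC - 1) for a binary risk model.

    scores: model scores, higher = riskier; labels: 1 = bad, 0 = good.
    """
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    n_bad = sum(labels)
    n_good = len(labels) - n_bad
    tpr = fpr = ks = auc = 0.0
    for i in order:                 # sweep the score cutoff from high to low
        if labels[i] == 1:
            tpr += 1 / n_bad        # cumulative share of bads captured
        else:
            fpr += 1 / n_good       # cumulative share of goods flagged
            auc += tpr / n_good     # rectangle under the ROC step curve
        ks = max(ks, abs(tpr - fpr))
    return ks, 2 * auc - 1

# Hypothetical validation sample: bads (label 1) mostly score higher.
scores = [0.9, 0.8, 0.75, 0.6, 0.4, 0.3, 0.2, 0.1]
labels = [1, 1, 0, 1, 0, 0, 1, 0]
ks, gini = ks_and_gini(scores, labels)
```

A monitoring framework would track these numbers monthly against the development-sample baseline; a sustained drop is a typical trigger for recalibration or redevelopment.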
Posted 3 weeks ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Description

About Amazon.com: Amazon.com strives to be Earth's most customer-centric company, where people can find and discover virtually anything they want to buy online. By giving customers more of what they want – low prices, vast selection, and convenience – Amazon.com continues to grow and evolve as a world-class e-commerce platform. Amazon's evolution from website to e-commerce partner to development platform is driven by the spirit of innovation that is part of the company's DNA. The world's brightest technology minds come to Amazon.com to research and develop technology that improves the lives of shoppers and sellers around the world.

About the Team

The RBS group is an integral part of Amazon's online product lifecycle and buying operations. The team is designed to ensure Amazon remains competitive in the online retail space with the best price, wide selection and good product information. The team’s primary role is to create and enhance retail selection on the worldwide Amazon online catalog. The tasks handled by this group have a direct impact on customer buying decisions and online user experience.

Overview of the Role

The Business Research Analyst will be responsible for continuous improvement projects across the RBS teams, leading to each of its delivery levers. The long-term goal of the Research Analyst (RA) role is to eliminate defects and automate qualifying tasks. Secondary goals are to improve the vendor or customer experience and to enhance GMS/FCF. This will require collaboration with local and global teams that have process and technical expertise. Therefore, the RA should be a self-starter who is passionate about discovering and solving complicated problems, learning complex systems, working with numbers, and organizing and communicating data and reports.
The RA works across teams and the Ops organization at country, regional and/or cross-regional level to drive improvements and enablers that implement solutions for customers, cost savings in process workflows, systems configuration and performance metrics. The RA leads projects and opportunities across Operations (FCs, sortation, logistics centres, supply chain, transportation, engineering...) that are business critical and may be global in nature. The RA performs big data analysis to identify defect patterns and process gaps, and comes up with long-term solutions to eliminate the defects/issues. The RA writes clear and detailed functional specifications based on business requirements, and writes and reviews business cases.

Key Responsibilities for this Role:

· Scoping, driving and delivering complex projects across multiple teams.
· Performs root cause analysis by understanding the data need, pulling the data and analyzing it to form a hypothesis, then validating it using data.
· Dives deep to drive product pilots, builds and analyzes large data sets, and constructs problem hypotheses that help steer the product feature roadmap (e.g. with use of R, SAS, STATA, Matlab, Python or Java), tools for databases (e.g. SQL, Redshift) and ML tools (RapidMiner, Eider).
· Builds programs to create a culture of continuous improvement within the business unit, and fosters a customer-centric focus on the quality, productivity, and scalability of our services.
· Finds scalable solutions for business problems by executing pilots and building deterministic and ML models (plug-and-play on ready-made ML models with Python skills).
· Manages meetings and business and technical discussions regarding their part of the projects.
· Makes recommendations and decisions that impact development schedules and the success of a product or project.
· Drives teams/partners to meet program and/or product goals.
· Coordinates design effort between internal and external teams to develop optimal solutions for their part of the project for Amazon's network.
· Supports identification of downstream problems (i.e. system incompatibility, resource unavailability) and escalates them to the appropriate level before they become project-threatening.
· Performs supporting research, conducts analysis of the bigger part of the projects and effectively interprets reports to identify opportunities, optimize processes, and implement changes within their part of the project.
· Ability to convince and interact with stakeholders at all levels, either to gather data and information or to execute and implement according to plan.
· Ability to deal with ambiguity and solve problems.
· Builds reports from established data warehouses and self-service reporting tools.
· Communicates ideas effectively and with influence (both verbally and in writing), within and outside the team.

Key Performance Areas

· Solve large and complex business problems by aligning multiple teams together.
· Data analytics and data science
· Machine learning

Basic Qualifications

· 2+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
· Bachelor's degree
· 3+ years of Python or R experience
· 1+ year of experience in financial/business analysis
· 2+ years of SQL experience
· 2+ years of ML project experience
· 2+ years of experience with data analysis packages (NumPy, Pandas, SciPy, etc.)

Preferred Qualifications

· Knowledge of data modeling and data pipeline design
· NLP and text processing
· Deep learning

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
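The defect-pattern workflow described above (pull the data, form a hypothesis, validate it with data) can be sketched with stdlib tooling; the defect records and marketplace codes below are hypothetical.

```python
from collections import Counter

# Hypothetical defect log pulled from a warehouse: (marketplace, defect_type).
defects = [
    ("US", "missing_image"), ("US", "bad_title"), ("DE", "missing_image"),
    ("US", "missing_image"), ("IN", "bad_title"), ("DE", "missing_image"),
    ("US", "missing_image"), ("IN", "missing_image"),
]

# Step 1: count defects by type to find the dominant pattern.
by_type = Counter(d for _, d in defects)
top_defect, top_count = by_type.most_common(1)[0]

# Step 2: slice the dominant defect by marketplace to validate
# the hypothesis about where it concentrates.
by_marketplace = Counter(m for m, d in defects if d == top_defect)
```

At real scale the same group-by logic would run as SQL against Redshift rather than in-memory Counters, but the analysis shape is identical.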
Company - ADCI - BLR 14 SEZ - F07 Job ID: A2968108
Posted 3 weeks ago
5.0 - 9.0 years
19 - 25 Lacs
Bengaluru
Work from Office
Job Title - Sales Excellence COE Advanced Modeling Manager - CF
Management Level: ML7
Location: Open
Must have skills: Machine learning algorithms; SQL, R, or Python; Advanced Excel; data visualization tools such as Power BI, Power Apps, Tableau, QlikView, Google Data Studio
Good to have skills: Google Cloud Platform (GCP) and BigQuery; working knowledge of Salesforce Einstein Analytics; Pyomo, SciPy, PuLP, Gurobi, CPLEX or similar

Job Summary

Sales Excellence at Accenture: we empower our people to compete, win and grow. We develop everything they need to build and mature their client portfolios, optimize their deals and enable their sales talent, all driven by sales intelligence.

You are: An analyst with a flair for turning data into business insights. You enjoy searching for patterns and connections in data to discover trends, root causes and solutions. While logical and objective in your approach, your true superpower is helping others to see how things can be done differently. You use AI modeling to test theories and identify ways to improve processes.

Roles & Responsibilities

The Center of Excellence (COE) enables Sales Excellence to deliver best-in-class offerings to Accenture leaders, practitioners, and sales teams. As a member of the COE Analytics Modeling Analysis team, you will generate business insights to help Accenture improve its processes, boost sales and maximize profitability. You will build models and scorecards that help business leaders understand trends and market drivers, and work closely with operations teams to turn these insights into user-friendly solutions.

You will:
· Collect and process data from functions such as Sales, Marketing, and Finance.
· Analyze data to develop business insights that support decision making.
· Communicate data insights in a clear and concise manner.
· Work closely with the COE team to develop industrialized solutions.
· Manage access to tools and monitor their usage.
Professional & Technical Skills:

· Bachelor's degree or equivalent experience
· Excellent oral and written communication in English
· At least five years of experience in data modeling and analysis work, which may include: machine learning algorithms; SQL, R, or Python; Advanced Excel; data visualization tools such as Power BI, Power Apps, Tableau, QlikView, Google Data Studio
· Project management experience
· Strong business acumen and attention to detail

Additional Information:

· Master's degree in Analytics or a similar field
· Understanding of sales processes and systems
· Knowledge of Google Cloud Platform (GCP) and BigQuery
· Experience working in Sales, Marketing, Pricing, Finance, or related fields
· Working knowledge of Salesforce Einstein Analytics
· Knowledge of optimization techniques and packages such as Pyomo, SciPy, PuLP, Gurobi, CPLEX or similar
· Experience working with Power Apps

About Our Company | Accenture

Qualification
Experience: Minimum 5 years of experience is required
Educational Qualification: Master's degree in Analytics or a similar field
Posted 3 weeks ago