
4894 Data Processing Jobs - Page 16

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 - 10.0 years

5 - 12 Lacs

gobichettipalayam, mayiladuthurai, chennai

Work from Office

Experience managing complex projects in data annotation and AI/ML operations. Deep understanding of data labeling workflows, platforms (e.g., Labelbox, CVAT), and annotation tools. Experience across modalities such as image, text, and speech. Required candidate profile: you will be the operational SPOC for our AI projects, processes, and precision, leading teams that annotate text, images, audio, and video for cutting-edge machine learning systems.

Posted 1 week ago

Apply

0.0 - 5.0 years

1 - 5 Lacs

mumbai, pune, mumbai (all areas)

Work from Office

Skilled in high-volume data entry with strong typing speed & 100% accuracy. Proficient in Excel & Google Sheets. Detail-oriented with experience in managing large datasets. Fast, reliable, and efficient in data processing & documentation tasks.

Posted 1 week ago

Apply

0.0 - 5.0 years

2 - 4 Lacs

chennai, coimbatore, bengaluru

Hybrid

PERMANENT WORK FROM HOME. 2025 graduates can also apply. An urgent requirement for graduates and undergraduates for data entry. Salary: 10 to 35k take-home. Required age: 18 to 35 years. Full time; easy selection process. Please apply for the job on Naukri.com; we will check and update you. Do not search for the number on Google and do not call us. Note: the requirement is currently on hold from the client side.

Posted 1 week ago

Apply

0.0 - 5.0 years

2 - 4 Lacs

vijayawada, visakhapatnam, hyderabad

Hybrid

PERMANENT WORK FROM HOME. 2025 graduates can also apply. An urgent requirement for graduates and undergraduates for data entry. Salary: 10 to 35k take-home. Required age: 18 to 35 years. Full time; easy selection process. Please apply for the job on Naukri.com; we will check and update you. Do not search for the number on Google and do not call us. Note: the requirement is currently on hold from the client side.

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

ahmedabad, gujarat

On-site

You will be joining Simform, a prominent digital engineering company that specializes in Cloud, Data, AI/ML, and Experience Engineering to develop seamless digital experiences and scalable products. With a strong engineering foundation and a unique co-engineering approach, Simform is dedicated to creating future-proof digital products for high-growth ISVs and tech-enabled enterprises across industries.

As a Generative AI Engineer based in Ahmedabad, your primary responsibility will be the development and implementation of AI/ML models, algorithms, and solutions. You will analyze data, design experiments, and collaborate with cross-functional teams to deploy AI-driven applications and solutions effectively.

To excel in this role, you should have a strong background in Artificial Intelligence and Machine Learning, along with proficiency in programming languages such as Python, R, or Java. Experience with deep learning frameworks such as TensorFlow or PyTorch is crucial, as is knowledge of generative models and neural networks. An understanding of data processing and visualization techniques is advantageous.

Ideally, you should hold a Bachelor's or Master's degree in Computer Science, Artificial Intelligence, or a related field. Strong problem-solving and analytical skills are essential, along with the ability to work collaboratively in a team environment and communicate complex ideas effectively.
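Editorial illustration (not part of the listing): a minimal PyTorch training-loop sketch of the kind of deep-learning work a role like this implies. All data here is synthetic and the model is a toy; it only shows the forward/backward/update pattern the framework requirement refers to.

```python
# Minimal PyTorch training-loop sketch (illustrative only; synthetic data).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

X = torch.randn(256, 16)            # synthetic features
y = torch.randint(0, 2, (256,))     # synthetic binary labels

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)     # forward pass + loss
    loss.backward()                 # backpropagation
    optimizer.step()                # parameter update
    print(f"epoch {epoch}: loss={loss.item():.4f}")
```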

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

karnataka

On-site

You should have a minimum of 6-8 years of experience in the field. An advanced degree (Master's or higher) in Computer Science, Engineering, or a closely related technical discipline is essential. Proficiency with Retrieval-Augmented Generation (RAG) techniques and experience with vector databases such as Weaviate and Pinecone are crucial. Extensive Python programming skills are required, with a proven ability to implement complex solutions. You should have at least 1 year of practical experience building applications on Large Language Models (LLMs) using established third-party models such as OpenAI and Anthropic.

Additionally, you should be skilled in prompt engineering, with experience crafting function calls and constructing conversational AI applications. Familiarity with advanced LLM techniques such as zero-shot and few-shot learning, as well as fine-tuning for specific use cases, will be beneficial. A strong background with generative AI libraries and tools such as LangChain and Hugging Face is necessary, as is a solid grasp of version control practices, including Git and collaborative development workflows. Comprehensive experience preparing, managing, and processing extensive datasets, both structured and unstructured, is a must, along with proven experience deploying, scaling, and monitoring machine learning models, particularly LLMs, in live environments. Flexibility to accommodate occasional overlap with US Eastern Time for effective team collaboration is essential, along with professional proficiency in English.

Desirable additional experience includes hands-on work with open-source LLM frameworks such as Llama or Mistral, and prior work involving data in regulated sectors like healthcare or banking. You should be skilled in RESTful API design and development using Python/FastAPI, ensuring secure and scalable interfaces, and proficient in JavaScript for both front-end and back-end, with a strong command of associated frameworks. You should be able to architect scalable solutions using microservices and have experience with the generative AI landscape, specifically RAG, vector embeddings, LLMs, and open-source AI toolkits. Familiarity with cloud-based infrastructure such as AWS, Azure, and GCP and their data services is required, along with competence in database design and optimization across SQL and NoSQL technologies. Agile development experience, using tools like Git, Jenkins, and Azure DevOps, is preferred, as are excellent team collaboration and communication skills, with an aptitude for working across technical and non-technical roles.

The three must-haves for this position are Python (4/5), REST API (4/5), and Generative AI (3/5).
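Editorial illustration (not part of the listing): a toy sketch of the retrieval step in a RAG pipeline. A real system would use a vector database such as Weaviate or Pinecone and send the assembled prompt to an LLM API; here TF-IDF stands in for embeddings and all documents are invented.

```python
# Toy retrieval step of a RAG pipeline (illustrative; a production system
# would use dense embeddings, a vector database, and an LLM call).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Invoices are processed within 30 days.",
    "Refunds require manager approval.",
    "Data retention policy is seven years.",
]
query = "How long are records kept?"

vec = TfidfVectorizer().fit(docs + [query])
doc_m, q_m = vec.transform(docs), vec.transform([query])
best = cosine_similarity(q_m, doc_m).argmax()        # nearest document

prompt = f"Answer using this context:\n{docs[best]}\n\nQuestion: {query}"
print(prompt)  # this prompt would be sent to an LLM (e.g., OpenAI, Anthropic)
```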

Posted 1 week ago

Apply

12.0 - 16.0 years

0 Lacs

haryana

On-site

As a Data Analyst at our organization in India, you will join a cutting-edge team and play a crucial role in analyzing complex bank, franchise, or function data to identify business issues and opportunities. You will provide high-quality analytical input to develop and implement innovative processes and problem resolution across the bank. This hands-on role will allow you to deepen your data analysis expertise and gain valuable experience in a dynamic business area, at the Associate Vice President level.

Your responsibilities will include supporting the delivery of high-quality business solutions by performing data extraction, storage, manipulation, processing, and analysis. You will develop and execute standard queries to ensure data quality and to identify data inconsistencies and missing data. Additionally, you will collect, profile, and map appropriate data for new or existing solutions, as well as ongoing data activities. Identifying and documenting data migration paths and processes, and standardizing data naming, definitions, and modeling, will also be part of your role. You will help interpret customer needs and translate them into functional or data requirements and process models, and you will support critical projects within the department.

To excel in this role, you should have experience using data analysis tools and delivering data analysis in a technology or IT function, with an in-depth understanding of data interrelationships across multiple data domains. Your background should include delivering research based on qualitative and quantitative data across various subjects. Strong influencing and communication skills are crucial. You should also have a good grasp of data profiling and the ability to drive data definitions and mappings, experience documenting feed specs and agreeing API contracts with customers, and proficiency in technical skills such as SQL, MongoDB, and Postman. Experience in the banking domain is desired, and we are specifically seeking candidates with 12+ years of experience.
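Editorial illustration (not part of the listing): a minimal data-profiling sketch in pandas of the kind the role's "collecting, profiling, and mapping" duties describe. The DataFrame and column names are hypothetical.

```python
# Minimal data-profiling sketch with pandas (illustrative; columns invented).
import pandas as pd

df = pd.DataFrame({
    "account_id": [1, 2, 2, None],
    "balance": [100.0, None, 250.0, 75.0],
})

profile = pd.DataFrame({
    "nulls": df.isna().sum(),                  # missing values per column
    "null_pct": df.isna().mean().round(2),     # share of missing values
    "unique": df.nunique(),                    # distinct non-null values
})
print(profile)
print("duplicate rows:", df.duplicated().sum())
```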

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

haryana

On-site

As an Executive Administrative Assistant, you will provide secretarial and administrative support to the VP-IT of the MKS IT Organization in India, assisting with day-to-day management tasks in a fast-paced, growth-oriented environment. The ideal candidate is self-motivated, organized, detail-oriented, and able to prioritize work effectively.

Your key responsibilities will include managing the VP-IT's calendar; scheduling meetings, conferences, and travel arrangements; preparing professional reports, presentations, and briefs; and maintaining documentation and filing systems efficiently. You will also handle incoming calls and emails and respond to requests and queries promptly and professionally.

To excel in this role, you should have a Bachelor's or Master's degree with a minimum of 5 years of relevant experience in administrative or secretarial positions. Proficiency in Microsoft Office tools, knowledge of ERP systems such as Oracle and SAP, and excellent communication skills are essential, along with strong organizational, project management, and problem-solving skills and the ability to multitask and manage time effectively.

You will work in a professional office environment and be expected to sit, stand, type for extended periods, and operate office machinery. The role requires good manual dexterity and coordination and the ability to communicate information effectively. Confidentiality, interpersonal skills, and the ability to remain calm in high-stress situations are also crucial.

If you are looking for a challenging role that offers opportunities for growth and development, and you meet the qualifications and possess the required skills, we encourage you to apply for the Executive Administrative Assistant position.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

pune, maharashtra

On-site

You will be responsible for developing and optimizing ETL pipelines using PySpark and Databricks. Your role will involve working with large-scale datasets and building distributed computing solutions. You will design and implement data ingestion, transformation, and processing workflows, and write efficient, scalable Python code for data processing. Collaboration with data engineers, data scientists, and business teams to deliver insights is a key aspect of this role. Additionally, you will be expected to optimize performance and cost efficiency for big data solutions, implement best practices for CI/CD, testing, and automation in a cloud environment, and monitor job performance, troubleshoot failures, and tune queries.
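Editorial illustration (not part of the listing): a minimal PySpark ETL sketch of the ingest-transform-write pattern the role describes. Paths and column names are hypothetical; on Databricks the SparkSession already exists as `spark`.

```python
# Minimal PySpark ETL sketch (illustrative; paths and columns invented).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

orders = spark.read.option("header", True).csv("/data/raw/orders.csv")
cleaned = (
    orders
    .dropna(subset=["order_id"])                        # drop incomplete rows
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("amount") > 0)                        # basic validation
)
daily = cleaned.groupBy("order_date").agg(F.sum("amount").alias("revenue"))
daily.write.mode("overwrite").parquet("/data/curated/daily_revenue")
```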

Posted 1 week ago

Apply

0.0 - 5.0 years

3 - 8 Lacs

kochi, vijayawada, visakhapatnam

Hybrid

Monitoring and reviewing databases and correcting errors. Generating and exporting data reports, spreadsheets, and documents as needed. Performing clerical duties such as filing, monitoring office supplies, scanning, and printing as needed. Required candidate profile: good understanding of databases and digital and paper filing systems; knowledge of administrative and clerical operations; keen eye for detail and the ability to concentrate for extended periods. Perks and benefits: flexible work; work from home only.

Posted 1 week ago

Apply

0.0 - 5.0 years

3 - 8 Lacs

kolkata, surat, delhi / ncr

Hybrid

Monitoring and reviewing databases and correcting errors. Generating and exporting data reports, spreadsheets, and documents as needed. Performing clerical duties such as filing, monitoring office supplies, scanning, and printing as needed. Required candidate profile: good understanding of databases and digital and paper filing systems; knowledge of administrative and clerical operations; keen eye for detail and the ability to concentrate for extended periods. Perks and benefits: flexible work; work from home only.

Posted 1 week ago

Apply

0.0 - 5.0 years

3 - 8 Lacs

pune, ahmedabad, mumbai (all areas)

Hybrid

Monitoring and reviewing databases and correcting errors. Generating and exporting data reports, spreadsheets, and documents as needed. Performing clerical duties such as filing, monitoring office supplies, scanning, and printing as needed. Required candidate profile: good understanding of databases and digital and paper filing systems; knowledge of administrative and clerical operations; keen eye for detail and the ability to concentrate for extended periods. Perks and benefits: flexible work; work from home only.

Posted 1 week ago

Apply

0.0 - 5.0 years

3 - 8 Lacs

hyderabad, chennai, bengaluru

Hybrid

Monitoring and reviewing databases and correcting errors. Generating and exporting data reports, spreadsheets, and documents as needed. Performing clerical duties such as filing, monitoring office supplies, scanning, and printing as needed. Required candidate profile: good understanding of databases and digital and paper filing systems; knowledge of administrative and clerical operations; keen eye for detail and the ability to concentrate for extended periods. Perks and benefits: flexible work; work from home only.

Posted 1 week ago

Apply

0.0 - 3.0 years

1 - 2 Lacs

mumbai

Work from Office

Design and optimize strategies: implement, refine, and optimize trading strategies and ML models based on market data, order flow, and trade data from market quotes; backtest these strategies using Python, R, and other relevant tools. Investment idea generation: generate short-, medium-, and long-term investment ideas and strategies through rigorous quantitative research and data analysis. AI and ML in finance: apply cutting-edge AI and ML techniques to stock selection, portfolio construction, and strategy development. Data processing and analysis: work with large equity and derivatives datasets, analyzing market data to extract meaningful insights that improve model performance and decision-making. Eligibility criteria: Bachelor's or Master's degree in Computer Science, IT, or Engineering. Proficiency in programming languages such as Python and R (other languages like C++ and Java are a plus). Excellent communication skills to collaborate with other teams and clients. Strong analytical and problem-solving skills.
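Editorial illustration (not part of the listing): a toy backtest of a moving-average crossover strategy in the spirit of the "backtest using Python" duty. Prices are synthetic and transaction costs and slippage are ignored, so the numbers mean nothing beyond demonstrating the mechanics.

```python
# Toy backtest of a moving-average crossover (illustrative; synthetic prices,
# no costs or slippage).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
prices = pd.Series(100 + rng.standard_normal(500).cumsum())

fast, slow = prices.rolling(10).mean(), prices.rolling(50).mean()
position = (fast > slow).astype(int).shift(1).fillna(0)   # long when fast > slow

returns = prices.pct_change().fillna(0)
strategy = position * returns                             # strategy daily returns
print("buy & hold:", (1 + returns).prod() - 1)
print("strategy:  ", (1 + strategy).prod() - 1)
```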

Posted 1 week ago

Apply

4.0 - 6.0 years

30 - 35 Lacs

bengaluru

Work from Office

Position: Senior Data Engineer. Role description: Leads the delivery of solution or infrastructure development services for a large or more complex project, using advanced technical capabilities. Takes accountability for the design, development, delivery, and maintenance of solutions or infrastructure, driving compliance with, and contributing to the development of, relevant standards. Fully understands business and user requirements and ensures design specifications meet them from both a business and a technical perspective. Designs and develops data ingestion and transformation components to support business requirements. Should be able to process heterogeneous source data formats (Excel, CSV, TXT, PDF, database, web) and perform EDA (exploratory data analysis), handle outliers, and carry out data cleansing and data transformation. Should be highly data-driven and able to write complex data transformation programs using PySpark, Databricks, and Python. Experience in data integration and data processing using Spark and Python. Hands-on experience applying performance-tuning techniques to resolve performance issues in data pipelines. Provides advanced technical expertise to maximize efficiency, reliability, and value from current solutions, infrastructure, and emerging technologies, showing technical leadership and identifying and implementing continuous improvement plans. Works closely with the development lead and builds components in an agile methodology. Develops strong working relationships with peers across Development & Engineering and Architecture teams, collaborating to develop and engineer leading solutions. Skills requirement: 4-6 years of hands-on experience in big data technologies and distributed systems computing. Strong experience in Azure Databricks, Python, Spark, and PySpark. Strong experience with SQL, RESTful APIs, and JSON. Experience with Azure Cloud resources is preferable. Experience with Angular would be nice to have. Exposure to any NoSQL databases (MongoDB, Cosmos DB, etc.) is a plus.
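Editorial illustration (not part of the listing): a sketch of IQR-based outlier filtering in PySpark, one common way to "handle outliers" during EDA. The tiny inline DataFrame and the 1.5x Tukey-fence multiplier are illustrative choices.

```python
# Sketch of IQR-based outlier filtering in PySpark (illustrative data).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("eda-sketch").getOrCreate()
df = spark.createDataFrame([(v,) for v in [1.0, 2.0, 2.5, 3.0, 250.0]], ["value"])

q1, q3 = df.approxQuantile("value", [0.25, 0.75], 0.01)   # approximate quartiles
iqr = q3 - q1
lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr                   # Tukey fences
df.filter((df.value >= lo) & (df.value <= hi)).show()     # drop the outlier row
```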

Posted 1 week ago

Apply

3.0 - 6.0 years

6 - 9 Lacs

bengaluru

Work from Office

Company Overview: A.P. Moller - Maersk is an integrated container logistics company and member of the A.P. Moller Group, connecting and simplifying trade to help our customers grow and thrive. With a dedicated team of over 95,000 employees operating in 130 countries, we go all the way to enable global trade for a growing world. From the farm to your refrigerator, or the factory to your wardrobe, A.P. Moller - Maersk is developing solutions that meet customer needs from one end of the supply chain to the other.

About the Team: At Maersk, the Global Ocean Manifest team is at the heart of global trade compliance and automation. We build intelligent, high-scale systems that seamlessly integrate customs regulations across 100+ countries, ensuring smooth cross-border movement of cargo by ocean, rail, and other transport modes. Our mission is to digitally transform customs documentation, reducing friction, optimizing workflows, and automating compliance for a complex web of regulatory bodies, ports, and customs authorities. We deal with real-time data ingestion, document generation, regulatory rule engines, and multi-format data exchange while ensuring resilience and security at scale.

Key Responsibilities: Work with large, complex datasets and ensure efficient data processing and transformation. Collaborate with cross-functional teams to gather and understand data requirements. Ensure data quality, integrity, and security across all processes. Implement data validation, lineage, and governance strategies to ensure data accuracy and reliability. Build, optimize, and maintain ETL pipelines for structured and unstructured data, ensuring high throughput, low latency, and cost efficiency. Build scalable, distributed data pipelines for processing real-time and historical data. Contribute to the architecture and design of data systems and solutions. Write and optimize SQL queries for data extraction, transformation, and loading (ETL). Advise product owners to identify and manage risks, debt, issues, and opportunities for technical improvement. Provide continuous improvement suggestions for internal code frameworks, best practices, and guidelines. Contribute to engineering innovations that fuel Maersk's vision and mission.

Required Skills & Qualifications: 4+ years of experience in data engineering or a related field. Strong problem-solving and analytical skills. Experience with Java and the Spring framework. Experience building data processing pipelines using Apache Flink and Spark. Experience in distributed data lake environments (Dremio, Databricks, Google BigQuery, etc.). Experience with Apache Kafka and Kafka Streams. Experience working with databases, PostgreSQL preferred, with solid experience writing and optimizing SQL queries. Hands-on experience in cloud environments such as Azure Cloud (preferred), AWS, Google Cloud, etc. Experience with data warehousing and ETL processes. Experience designing and integrating data APIs (REST/GraphQL) for real-time and batch processing. Knowledge of Great Expectations, Apache Atlas, or DataHub would be a plus. Knowledge of RBAC, encryption, and GDPR compliance would be a plus.

Business skills: Excellent communication and collaboration skills. Ability to translate between technical language and business language and to communicate with different target groups. Ability to understand complex designs. Ability to balance competing forces and opinions within the development team.

Personal profile: Fact-based and result-oriented. Able to work independently and guide the team. Excellent verbal and written communication.
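Editorial illustration (not part of the listing): a minimal Kafka producer sketch using the kafka-python client, showing the publish side of the event pipelines the role mentions. The broker address, topic name, and payload are hypothetical.

```python
# Minimal Kafka producer sketch with kafka-python (illustrative names).
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
event = {"manifest_id": "M-001", "status": "FILED"}   # hypothetical payload
producer.send("customs-manifests", value=event)       # asynchronous send
producer.flush()                                      # block until delivered
```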

Posted 1 week ago

Apply

5.0 - 10.0 years

20 - 25 Lacs

hyderabad

Work from Office

Overview: In charge of developing, deploying, and maintaining BI & Analytics interfaces. This role offers many exciting challenges and opportunities. Emphasis is on managing end-to-end delivery of analysis. Contribute insights from conclusions of analysis that integrate with the initial hypothesis and business objective. Independently address complex problems. Participate in the design of analyses with managers and modelers. This role requires exceptional programming skills, proven technical leadership, and the ability to collaborate cross-functionally with business stakeholders, product managers, designers, and other technology leaders. We are looking for BI engineers with a passion for researching new solutions and ways of working, who enjoy sharing insights with their peers.

Responsibilities: Emphasis is on end-to-end delivery of analysis. Contribute insights from conclusions of analysis that integrate with the initial hypothesis and business objective. Independently address complex problems. Accountable for extracting data and designing, developing, deploying, and supporting interactive data visualization dashboards using BI tools. Extremely comfortable working with data, including managing a large number of data sources, analyzing data quality, and proactively working with clients' data/IT teams to resolve issues. Reformulate highly technical information into concise, understandable terms for presentations. Handle project-related administration and work allocation within the team. Maintain project documentation and generate reports on project status. Problem-solving attitude with strong expertise in investigating data trends.

Qualifications: 5+ years of experience in analytics, reporting, marketing analytics, project management, data processing, VBA automation, SQL, and Access databases. Strong interpersonal skills to effectively interact with client/onshore personnel to understand specific Enterprise Information Management requirements and develop solutions based on those requirements. Required skills: proficient in SQL, Python, statistical analyses, data visualization (Power BI, Tableau), risk management, and trend analysis. Master's or Bachelor's degree in math, statistics, economics, computer engineering, or a related analytics field from a premier/Tier-1 institute of India. Good understanding of Tableau Server connections and workbook import and download. Good understanding of dbt (Data Build Tool), model creation, and .sql and .yml files. Strong foundation in all versions of Tableau and Power BI, with good working knowledge of both. Experience in the digital commerce or supply chain domain is preferred. Very strong analytical skills with the demonstrated ability to research and make decisions based on day-to-day and complex customer problems. Ability to handle big data sets and data mining from different databases. Data warehousing: centralizing or aggregating data from multiple sources. Outstanding written and verbal communication skills.
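Editorial illustration (not part of the listing): a small pandas sketch of centralizing two hypothetical sources into one KPI extract, the kind of aggregation that would feed a Tableau or Power BI dashboard. Tables, columns, and metrics are all invented.

```python
# Sketch of aggregating two sources into a BI extract (illustrative names).
import pandas as pd

sales = pd.DataFrame({"region": ["N", "S", "N"], "revenue": [120, 80, 95]})
targets = pd.DataFrame({"region": ["N", "S"], "target": [200, 90]})

kpi = (
    sales.groupby("region", as_index=False)["revenue"].sum()   # roll up sales
    .merge(targets, on="region")                               # join targets
)
kpi["attainment_pct"] = (100 * kpi["revenue"] / kpi["target"]).round(1)
print(kpi)   # this extract would feed a Tableau or Power BI dashboard
```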

Posted 1 week ago

Apply

10.0 - 15.0 years

20 - 25 Lacs

bengaluru

Work from Office

We are seeking a skilled Senior Data Scientist (Senior AI Data Scientist) to join our Digital Finance - Disruptive Tech - Global Financial Services Division. We are seeking a highly experienced Senior AI Data Scientist to utilize AI in the development of architecture and tools in support of a cutting-edge AI-driven platform for financial data harmonization and monitoring. With over 10 years of experience, the ideal candidate will be a thought leader in applying Artificial Intelligence (AI) and Large Language Models (LLMs) to solve complex data challenges within the finance domain. This role is pivotal in transforming our data landscape by creating intelligent systems for multi-source data connectivity, automated data manipulation, and the highest standards of data quality. You will be instrumental in building a solution to streamline data flow from various ERP systems to our master data management and reporting platforms. The preference for this role is to be based out of the Bangalore Whitefield office.

What you will do: Strategic AI leadership: utilize your knowledge in the development of an AI-powered data harmonization platform to automate the mapping and consolidation of financial data from multiple ERP systems. Advanced model development: design and implement sophisticated AI/ML models, including the use of LLMs and Natural Language Processing (NLP), to interpret, standardize, and enrich financial data. Data connectivity and integration: develop AI-driven solutions to establish and maintain robust data connectivity between disparate financial systems, ensuring seamless data flow and integrity. Data quality and anomaly detection: implement intelligent data quality monitoring and anomaly detection systems to proactively identify and resolve data integrity issues in our financial data pipelines. Financial domain application: apply expertise in financial data and processes to build AI models that understand and automate complex data manipulation and allocation rules. Cross-functional collaboration: work closely with finance, IT, and business stakeholders to define requirements, present solutions, and drive the adoption of AI-powered data management practices. Mentorship and innovation: mentor junior data scientists and engineers while staying at the forefront of AI, LLM, and machine learning advancements to drive continuous innovation.

What you will have: Educational background: a Master's or Ph.D. in Computer Science, Data Science, Artificial Intelligence, or a related quantitative field is required. Extensive experience: a minimum of 10+ years of hands-on experience in data science, with a significant focus on building and deploying AI and machine learning models in a corporate environment. AI and machine learning expertise: demonstrated mastery of machine learning techniques (e.g., classification, regression, clustering) and advanced analytics; proven experience developing and implementing deep learning models, particularly with NLP, LLMs, and generative AI for applications like data summarization and anomaly detection; hands-on experience with deep learning toolkits such as PyTorch, TensorFlow, Transformers, and Hugging Face. Programming and database proficiency: advanced programming skills in Python and proficiency with ML/DS libraries like pandas, NumPy, and scikit-learn; expertise in SQL and experience with large-scale data processing frameworks and databases (e.g., Teradata, Snowflake). Financial acumen: a strong understanding of financial concepts, financial data systems (ERPs), and data analysis within a corporate finance context; experience in financial modelling, forecasting, and risk management is highly desirable. Data architecture and integration: proven experience designing and building data models and robust data pipelines, especially in a multi-system environment; familiarity with data harmonization, master data management principles, and ETL/ELT processes. Analytical and strategic thinking: exceptional problem-solving skills with the ability to tackle ambiguous and complex challenges in the financial data domain; a strategic mindset to translate business needs into innovative AI-driven solutions and communicate complex technical concepts to both technical and non-technical stakeholders.

Preferred qualifications, capabilities, and skills: Experience with cloud platforms (e.g., AWS, GCP, Azure) and their AI/ML services. Familiarity with reporting and business intelligence platforms such as OneStream. A track record of leading technical projects and mentoring data science teams. Experience building ML models in a cloud environment and with big data. This position requires the candidate to work a 5-day-a-week schedule in the office.

Top candidates will have: Bachelor's or Master's degree in information systems, STEM, finance, or a similar related discipline. Progressive project management experience, including the ability to strategize, implement, and maintain project initiatives. Knowledge of the product lifecycle and of the tools, methods, and techniques of requirements analysis. Collaboration experience with Data Product, Architecture, Data Science, and Engineering teams in both USA and offshore environments. Intermediate understanding of SQL concepts and professional experience with database management (SQL Server and Snowflake) and data reporting tools (Power BI, Tableau, etc.). Proven experience with business processes, systems, and data across key financial domains, including Procure-to-Pay, Record-to-Report, Invoice-to-Cash, Order-to-Cash, and Financial Planning & Analysis. Experience with BlackLine tool implementation is a plus.

Skills desired: Business statistics: knowledge of the statistical tools, processes, and practices used to describe business results in measurable scales; ability to use statistical tools and processes to assist in making business decisions. Level, working knowledge: explains the basic decision process associated with specific statistics; works with basic statistical functions on a spreadsheet or a calculator; explains reasons for common statistical errors, misinterpretations, and misrepresentations; describes characteristics of sample size, normal distributions, and standard deviation; generates and interprets basic statistical data. Accuracy and attention to detail: understanding the necessity and value of accuracy; ability to complete tasks with high levels of precision. Level, extensive experience: evaluates and contributes to best practices; processes large quantities of detailed information with high levels of accuracy; productively balances speed and accuracy; employs techniques for motivating personnel to meet or exceed accuracy goals; implements a variety of cross-checking approaches and mechanisms; demonstrates expertise in quality assurance tools, techniques, and standards. Analytical thinking: knowledge of techniques and tools that promote effective analysis; ability to determine the root cause of organizational problems and create alternative solutions that resolve them. Level, working knowledge: approaches a situation or problem by defining the problem or issue and determining its significance; makes a systematic comparison of two or more alternative solutions; uses flow charts, Pareto charts, fishbone diagrams, etc. to disclose meaningful data patterns; identifies the major forces, events, and people impacting and impacted by the situation at hand; uses logic and intuition to make inferences about the meaning of the data and arrive at conclusions. Machine learning: knowledge of the principles, technologies, and algorithms of machine learning; ability to develop, implement, and deliver related systems, products, and services. Level, working knowledge: completes specific tasks and initiatives utilizing machine learning technologies, such as search engine optimization; utilizes specific tools and techniques to process descriptive and inferential statistics; applies specific computing languages and tools in machine learning, such as R and Python; explores uses of machine learning in one's own area to make business improvements; conducts data mining and cleaning initiatives. Programming languages: knowledge of basic concepts and capabilities of programming; ability to use tools, techniques, and platforms to write and modify programming languages. Level, working knowledge: participates in the implementation and support of specialized programming languages; conducts basic reviews of code written in a specific programming language within a specific platform; assists with the design and development of specialized programming languages; follows an organization's standards, policies, and guidelines for structured programming specifications; diagnoses and reports minor or routine programming language problems.
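Editorial illustration (not part of the listing): a minimal anomaly-detection sketch with scikit-learn's IsolationForest, in the spirit of the "data quality and anomaly detection" duty. The amounts are synthetic and the contamination rate is an arbitrary illustrative choice.

```python
# Sketch of anomaly detection on financial amounts (illustrative data).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
amounts = np.concatenate([rng.normal(1000, 50, 200), [5000, -3000]])  # 2 outliers

model = IsolationForest(contamination=0.01, random_state=0)
flags = model.fit_predict(amounts.reshape(-1, 1))   # -1 marks anomalies
print("flagged:", amounts[flags == -1])
```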

Posted 1 week ago

Apply

1.0 - 6.0 years

7 - 11 Lacs

pune, bengaluru

Work from Office

In this role as an Associate AIML Engineer on the Global Data and Analytics (GDA) team, you will support the development of strategic, visibility-driven recommendation systems that serve both internal stakeholders and external customers. This initiative aims to deliver actionable insights that enhance supply chain execution, support strategic decision-making, and enable innovative service offerings.

Data AI/ML (Artificial Intelligence and Machine Learning) engineering involves the use of algorithms and statistical models to enable systems to analyse data, learn patterns, and make data-driven predictions or decisions without explicit human programming. AI/ML applications leverage vast amounts of data to identify insights, automate processes, and solve complex problems across a wide range of fields, including healthcare, finance, and e-commerce. AI/ML processes transform raw data into actionable intelligence, enabling automation, predictive analytics, and intelligent solutions. Data AI/ML combines advanced statistical modelling, computational power, and data engineering to build intelligent systems that can learn, adapt, and automate decisions.

What you'll be doing (your accountabilities): Build and maintain machine learning models for applications such as natural language processing, computer vision, and recommendation systems. Perform exploratory data analysis (EDA) to identify patterns and trends in data. Clean, preprocess, and analyze large datasets, and perform hyperparameter tuning to prepare for AI/ML model training. Build, test, and optimize machine learning models, experimenting with algorithms and frameworks to improve model performance. Use programming languages, machine learning frameworks and libraries, algorithms, data structures, statistics, and databases to optimize and fine-tune machine learning models for scalability and efficiency. Learn to define user requirements and align solutions with business needs. Work on AI/ML engineering projects, perform feature engineering, and collaborate with teams to understand business problems. Learn best practices in data and AI/ML engineering and performance optimization. Contribute to research papers and technical documentation. Contribute to project documentation and maintain data quality standards.

Foundational skills: Understands programming beyond the fundamentals and can demonstrate this skill in most situations without guidance. Understands the following skills beyond the fundamentals and can demonstrate them in most situations without guidance: AI & machine learning, data analysis, machine learning pipelines, and model deployment.

Specialized skills: Understands the following beyond the fundamentals and can demonstrate them in most situations without guidance: deep learning, statistical analysis, data engineering, big data technologies, natural language processing (NLP), data architecture, and data processing frameworks. Proficiency in Python programming and in Python-based statistical analysis and data visualization tools. Has a limited understanding of technical documentation but is focused on growing this skill.

Qualifications & requirements: BSc/MSc/PhD in computer science, data science, or a related discipline with 1+ years of industry experience building cloud-based ML solutions for production at scale, including solution architecture and solution design experience. Good problem-solving skills, for both technical and non-technical domains. A good broad understanding of ML and statistics, covering standard ML for regression and classification, forecasting and time-series modeling, and deep learning. 3+ years of hands-on experience building ML solutions in Python, including knowledge of common Python data science libraries (e.g., scikit-learn, PyTorch, etc.). Hands-on experience building end-to-end data products based on AI/ML technologies. Some experience with scenario simulations. Experience with a collaborative development workflow: version control (we use GitHub), code reviews, DevOps (including automated testing), and CI/CD. A team player, eager to collaborate, and a good collaborator.

Preferred experience: In addition to the basic qualifications, it would be great if you have: hands-on experience with common OR solvers such as Gurobi; experience with a common dashboarding technology (we use Power BI) or a web-based frontend such as Dash or Streamlit; experience working in cross-functional product engineering teams following agile development methodologies (Scrum/Kanban, etc.); experience with Spark and distributed computing; strong hands-on experience with MLOps solutions, including open-source solutions; experience with cloud-based orchestration technologies, e.g., Airflow or Kubeflow; and experience with containerization (Kubernetes & Docker).
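Editorial illustration (not part of the listing): a minimal scikit-learn workflow of the kind the "common Python data science libraries" requirement implies, using a synthetic dataset and a simple preprocessing-plus-model pipeline.

```python
# Minimal scikit-learn train/evaluate sketch (illustrative; synthetic data).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

pipe = make_pipeline(StandardScaler(), LogisticRegression())
pipe.fit(X_tr, y_tr)                        # preprocessing + model in one object
print("accuracy:", accuracy_score(y_te, pipe.predict(X_te)))
```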

Posted 1 week ago

Apply

5.0 - 10.0 years

9 - 13 Lacs

pune, bengaluru

Work from Office

In this role as an Associate AIML Engineer on the Global Data and Analytics (GDA) team, you will support the development of strategic, visibility-driven recommendation systems that serve both internal stakeholders and external customers. This initiative aims to deliver actionable insights that enhance supply chain execution, support strategic decision-making, and enable innovative service offerings.

Data AI/ML (Artificial Intelligence and Machine Learning) engineering involves the use of algorithms and statistical models to enable systems to analyse data, learn patterns, and make data-driven predictions or decisions without explicit human programming. AI/ML applications leverage vast amounts of data to identify insights, automate processes, and solve complex problems across a wide range of fields, including healthcare, finance, and e-commerce. AI/ML processes transform raw data into actionable intelligence, enabling automation, predictive analytics, and intelligent solutions. Data AI/ML combines advanced statistical modelling, computational power, and data engineering to build intelligent systems that can learn, adapt, and automate decisions.

What you'll be doing (your accountabilities): Design, develop, and implement robust, scalable, and optimized machine learning and deep learning models, with the ability to iterate with speed. Write and integrate automated tests alongside models or code to ensure reproducibility, scalability, and alignment with established quality standards. Implement best practices in security, pipeline automation, and error handling using programming and data manipulation tools. Identify and implement the right data-driven approaches to solve ambiguous, open-ended business problems, leveraging data engineering capabilities. Research and implement new models, technologies, and methodologies, and integrate them into production systems, ensuring scalability and reliability. Apply creative problem-solving techniques to design innovative tools, algorithms, and optimized workflows. Independently manage and optimize data solutions, perform A/B testing, and evaluate system performance. Understand the technical tools and frameworks used by the team, including programming languages, libraries, and platforms, and actively support debugging and refining code in projects. Contribute to the design and documentation of AI/ML solutions, clearly detailing methodologies, assumptions, and findings for future reference and cross-team collaboration. Collaborate across teams to develop and implement high-quality, scalable AI/ML solutions that align with business goals, address user needs, and improve performance.

Foundational skills: Has mastered the concepts and can demonstrate programming skills in complex scenarios. Understands the following skills beyond the fundamentals and can demonstrate them in most situations without guidance: AI & machine learning, data analysis, machine learning pipelines, and model deployment.

Specialized skills: Understands the following beyond the fundamentals and can demonstrate them in most situations without guidance: deep learning, statistical analysis, data engineering, big data technologies, natural language processing (NLP), data architecture, and data processing frameworks. Understands the basic fundamentals of technical documentation and can demonstrate them in common scenarios with some guidance.

Qualifications & requirements: BSc/MSc/PhD in computer science, data science, or a related discipline with 5+ years of industry experience building cloud-based ML solutions for production at scale, including solution architecture and solution design experience. Good problem-solving skills, for both technical and non-technical domains. A good broad understanding of ML and statistics, covering standard ML for regression and classification, forecasting and time-series modeling, and deep learning. 4+ years of hands-on experience building ML solutions in Python, including knowledge of common Python data science libraries (e.g., scikit-learn, PyTorch, etc.). Hands-on experience building end-to-end data products based on AI/ML technologies. Experience with a collaborative development workflow: version control (we use GitHub), code reviews, DevOps (including automated testing), and CI/CD. A strong foundation with expertise in neural networks, optimization techniques, and model evaluation. Experience with LLMs and Transformer architectures (BERT, GPT, LLaMA, Mistral, Claude, Gemini, etc.). Proficiency in Python, LangChain, Hugging Face Transformers, and MLOps. Experience with reinforcement learning and multi-agent systems for decision-making in dynamic environments. Knowledge of multimodal AI (integrating text, images, and other data modalities into unified models). A team player, eager to collaborate, and a good collaborator.

Preferred experience: In addition to the basic qualifications, it would be great if you have: hands-on experience with common OR solvers such as Gurobi; experience with a common dashboarding technology (we use Power BI) or a web-based frontend such as Dash or Streamlit; experience working in cross-functional product engineering teams following agile development methodologies (Scrum/Kanban, etc.); experience with Spark and distributed computing; strong hands-on experience with MLOps solutions, including open-source solutions; experience with cloud-based orchestration technologies, e.g., Airflow or Kubeflow; and experience with containerization (Kubernetes & Docker).
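Editorial illustration (not part of the listing): a minimal Hugging Face Transformers sketch of the kind the LLM/Transformer requirement implies. The model name is a real public checkpoint used as an example; it downloads weights on first run.

```python
# Sketch of loading a Transformer model with Hugging Face (illustrative).
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
print(classifier("The shipment arrived ahead of schedule."))
# -> [{'label': 'POSITIVE', 'score': ...}]
```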

Posted 1 week ago

Apply

10.0 - 15.0 years

7 - 11 Lacs

bengaluru

Work from Office

We are seeking a highly skilled and experienced Senior Full Stack Developer to join our dynamic development team. As a Senior Developer, you will play a crucial role in designing, developing, and maintaining our software applications using Java and Kotlin. The ideal candidate has a strong background in event-driven architecture, along with extensive experience working with Java, React, Kafka, and Kubernetes. If you are passionate about building scalable, high-performance systems and thrive in a collaborative environment, we would love to hear from you.

Responsibilities: Collaborate with cross-functional teams to gather requirements, design software solutions, and implement robust, scalable applications using Java and React. Develop and maintain event-driven architectures, ensuring the seamless flow of data and communication between components. Design and implement efficient data processing pipelines using Kafka, ensuring fault tolerance and high throughput. Write clean, maintainable, and efficient code while adhering to coding standards and best practices. Optimize software performance and troubleshoot issues or bottlenecks that arise during development or production. Collaborate with DevOps teams to deploy and manage applications in a Kubernetes environment, ensuring scalability and availability. Conduct thorough testing and debugging of applications to ensure quality and reliability. Mentor and guide junior developers, assisting their professional growth and technical skill development. Stay up to date with the latest trends and advancements in Java, React, event-driven architecture, Kafka, and Kubernetes, and apply them to enhance our development processes and systems.

Qualifications and Skills: Bachelor's degree in Computer Science, Software Engineering, or a related field (or equivalent experience). Strong proficiency in JavaScript, HTML, CSS, and related frontend technologies. Proven experience as a Java/React developer, with a minimum of 10 years of professional experience. Proficiency in Java and ReactJS, with a deep understanding of object-oriented programming principles. Solid understanding of distributed systems, microservices architecture, and RESTful APIs. Solid understanding of modern frontend development tools and workflows (e.g., Babel, Webpack, NPM, Git). Experience with frontend testing frameworks (e.g., Jest, Enzyme) and test-driven development practices. Experience with containerization technologies like Docker and orchestration frameworks like Kubernetes. Knowledge of cloud platforms, preferably AWS or Azure, and their services (e.g., EC2, S3, Lambda). Strong analytical and problem-solving skills, with the ability to quickly diagnose and resolve issues. Excellent communication and collaboration skills, with the ability to work effectively in a team environment. Experience with Agile/Scrum methodologies and tools (e.g., JIRA, Confluence) is a plus. A continuous-learning mindset with a passion for keeping up with the latest technologies and industry trends.

Posted 1 week ago

Apply

5.0 - 10.0 years

30 - 35 Lacs

hyderabad

Work from Office

Seeking an Associate Manager, Data Operations, to support our growing data organization. In this role, you will assist in maintaining data pipelines and corresponding platforms (on-prem and cloud) while working closely with global teams on DataOps initiatives. You will support the day-to-day operations of data pipelines, ensuring data governance, reliability, and performance optimization on Microsoft Azure; hands-on experience with Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and real-time streaming architectures is preferred. You will assist in ensuring the availability, scalability, automation, and governance of enterprise data pipelines supporting analytics, AI/ML, and business intelligence; contribute to DataOps programs aligned with business objectives, data governance standards, and enterprise data strategy; and help implement real-time data observability, monitoring, and automation frameworks to improve data reliability, quality, and operational efficiency. You will also support the development of governance models and execution roadmaps to enhance efficiency across Azure, AWS, GCP, and on-prem environments; work on CI/CD integration, data pipeline automation, and self-healing capabilities to improve enterprise-wide DataOps processes; collaborate with cross-functional teams to support and maintain next-generation Data & Analytics platforms while promoting an agile, high-performing DataOps culture; assist in the adoption of Data & Analytics technology transformations, ensuring automation for proactive issue identification and resolution; and partner with cross-functional teams to support process improvements, best practices, and operational efficiencies within DataOps.

Responsibilities: Assist in the implementation and optimization of enterprise-scale data pipelines using Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and Azure Stream Analytics. Support data ingestion, transformation, orchestration, and storage workflows, ensuring data reliability, integrity, and availability. Help ensure seamless batch, real-time, and streaming data processing, focusing on high availability and fault tolerance. Contribute to DataOps automation efforts, including CI/CD for data pipelines, automated testing, and version control using Azure DevOps and Terraform. Collaborate with Data Engineering, Analytics, AI/ML, CloudOps, and Business Intelligence teams to support data-driven decision-making. Assist in aligning DataOps practices with regulatory and security requirements by working with IT, data stewards, and compliance teams. Support data operations and sustainment activities, including testing and monitoring processes for global products and projects. Participate in data capture, storage, integration, governance, and analytics efforts, working alongside cross-functional teams. Assist in managing day-to-day DataOps activities, ensuring adherence to service-level agreements (SLAs) and business requirements. Engage with SMEs and business stakeholders to ensure data platform capabilities align with business needs. Contribute to agile work intake and execution processes, helping to maintain efficiency in data platform teams. Help troubleshoot and resolve issues related to cloud infrastructure and data services in collaboration with technical teams. Support the development and automation of operational policies and procedures, improving efficiency and resilience. Assist in incident response and root cause analysis, contributing to self-healing mechanisms and mitigation strategies. Foster a customer-centric approach, advocating for operational excellence and continuous improvement in service delivery. Help build a collaborative, high-performing team culture, promoting automation and efficiency within DataOps. Adapt to shifting priorities and support cross-functional teams in maintaining productivity and achieving business goals. Utilize technical expertise in cloud and data operations to support service reliability and scalability.

Qualifications: 5+ years of technology work experience in a large-scale global organization, with CPG industry experience preferred. 5+ years of experience in Data & Analytics roles, with hands-on expertise in data operations and governance. 2+ years of experience working within a cross-functional IT organization, collaborating with multiple teams. Experience in a lead or senior support role, with a focus on DataOps execution and delivery. Strong communication skills, with the ability to collaborate with stakeholders and articulate technical concepts to non-technical audiences. Analytical and problem-solving abilities, with a focus on prioritizing customer needs and operational improvements. A customer-focused mindset, ensuring high-quality service delivery and operational efficiency. A growth mindset, with a willingness to learn and adapt to new technologies and methodologies in a fast-paced environment. Experience supporting data operations in a Microsoft Azure environment, including data pipeline automation. Familiarity with Site Reliability Engineering (SRE) principles, such as monitoring, automated issue remediation, and scalability improvements. An understanding of operational excellence in complex, high-availability data environments. The ability to collaborate across teams, building strong relationships with business and IT stakeholders. A basic understanding of data management concepts, including master data management, data governance, and analytics. Knowledge of data acquisition, data catalogs, data standards, and data management tools. Strong execution and organizational skills, with the ability to follow through on operational plans and drive measurable results. Adaptability in a dynamic, fast-paced environment, with the ability to shift priorities while maintaining productivity.
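Editorial illustration (not part of the listing): a small sketch of an automated data-quality gate of the kind a DataOps pipeline might run before promoting a batch. The threshold, function name, and columns are all hypothetical.

```python
# Sketch of a data-quality gate for a pipeline run (illustrative names).
import pandas as pd

def quality_gate(df: pd.DataFrame, max_null_pct: float = 0.05) -> list:
    """Return a list of failed checks; an empty list means the batch passes."""
    failures = []
    if df.empty:
        failures.append("batch is empty")
    for col, pct in df.isna().mean().items():          # null rate per column
        if pct > max_null_pct:
            failures.append(f"{col}: {pct:.0%} nulls exceeds {max_null_pct:.0%}")
    return failures

batch = pd.DataFrame({"id": [1, 2, None], "value": [10, None, None]})
print(quality_gate(batch))   # e.g. ['id: 33% nulls ...', 'value: 67% nulls ...']
```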

Posted 1 week ago

Apply

9.0 - 14.0 years

17 - 18 Lacs

hyderabad

Work from Office

We are seeking a highly skilled and proactive AI Solutions SRE Lead to oversee the maintenance, optimization, and ongoing performance of deployed AI/ML systems and solutions. In this role, you'll act as the bridge between innovation and operations, ensuring our AI solutions consistently deliver value and operate seamlessly in real-world environments. You will lead efforts to monitor deployments, troubleshoot issues, and define best practices for sustaining AI systems throughout their lifecycle.
Responsibilities
Monitoring & Sustenance: Lead the post-deployment lifecycle of AI solutions, ensuring continued functionality, reliability, and scalability. Establish monitoring frameworks to oversee system performance, usage, and metrics for AI/ML models and APIs. Detect anomalies in AI systems, troubleshoot operational issues, and initiate timely corrective actions.
Performance Optimization: Continuously assess and optimize the performance of AI models to maintain efficiency and accuracy in production environments. Collaborate with data scientists and engineers to refine algorithms, retrain models, and update solutions as needed. Implement automation where possible to streamline maintenance processes.
Stakeholder Collaboration: Work with cross-functional teams (engineering, product, operations, etc.) to ensure alignment of AI sustainment activities with business goals. Communicate effectively with stakeholders to provide updates on system health, risks, and improvements.
Governance & Best Practices: Define and implement best practices for sustaining AI solutions, including documentation, testing protocols, and version control. Ensure compliance with ethical AI standards, regulatory guidelines, and established governance frameworks. Manage and mitigate risks associated with model drift, data shifts, and system vulnerabilities.
Incident Management: Lead responses to critical incidents involving AI systems by performing root cause analysis and deploying solutions for quick resolution. Advocate for proactive risk prevention and early detection strategies.
Mentorship: Mentor and develop junior team members, fostering their skills in AI observability and domain-specific knowledge in ML, Computer Vision, and Generative AI.
Qualifications
Required:
Bachelor's degree in Computer Science, Engineering, Data Science, or a related field; advanced degree preferred.
9+ years of experience in machine learning, data science, or software engineering roles, with significant exposure to Computer Vision and Generative AI projects.
4+ years of experience focused specifically on developing and sustaining AI/ML applications and solutions.
Strong programming skills in languages such as Python, Java, or Go.
Extensive experience with AI/ML frameworks (e.g., TensorFlow, PyTorch, scikit-learn) and cloud platforms (e.g., AWS, Azure, GCP).
Proficiency in data visualization tools and techniques (e.g., Grafana, Tableau, D3.js).
Deep understanding of AI/ML concepts, including model training, evaluation, and deployment, with specific knowledge of Computer Vision and Generative AI techniques.
Experience with monitoring and observability tools such as Prometheus, the ELK stack, or similar systems.
Excellent problem-solving skills and the ability to troubleshoot complex AI systems across various domains.
Proven track record of mentoring and developing junior team members in AI-related roles.
Preferred:
Experience with MLOps practices and tools, particularly for large-scale AI systems.
Familiarity with AI ethics and responsible AI principles, especially as they relate to Generative AI.
Knowledge of relevant AI regulations and compliance requirements, including those specific to Computer Vision applications.
Experience with distributed systems and large-scale data processing for AI applications.
Contributions to open-source projects or research publications on AI solutions at production scale.
Previous experience with large-scale AI/ML solutions in production environments.
Knowledge of DevOps principles and CI/CD pipelines specific to AI/ML systems.
Key Competencies
Strong analytical and critical thinking skills
Excellent communication and collaboration abilities
Proactive and self-motivated work ethic
Ability to explain complex technical concepts to both technical and non-technical audiences
Adaptability and willingness to learn in a rapidly evolving field
Strong mentorship and leadership skills
Deep curiosity and passion for AI, particularly in the ML, Computer Vision, and Generative AI domains
We are looking for a passionate and innovative individual who can help us build robust, transparent, and reliable AI systems while nurturing the growth of our team. If you have a strong background in AI/ML, with specific expertise in Computer Vision and Generative AI, and a keen interest in observability and system reliability, we encourage you to apply.
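The governance duties above include managing model drift. One common way such a check is implemented is the Population Stability Index (PSI); the sketch below illustrates that approach, and the bin count, alert threshold, and simulated score distributions are assumptions for the example, not details from the posting:

```python
# A sketch of a common drift check: the Population Stability Index (PSI)
# between a training-time feature distribution and live traffic. Bin count,
# alert level, and the simulated data are illustrative assumptions.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between two samples of one feature."""
    # Bin edges come from the reference (training) distribution.
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range live values
    e_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    a_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor the fractions to avoid division by zero and log(0).
    e_frac = np.clip(e_frac, 1e-6, None)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(0)
train_scores = rng.normal(0.0, 1.0, 10_000)
live_scores = rng.normal(0.4, 1.2, 10_000)  # simulated shifted traffic
print(f"PSI = {psi(train_scores, live_scores):.3f}")
# A PSI above roughly 0.2 is a widely used "significant drift" alert level.
```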

Posted 1 week ago

Apply

5.0 - 10.0 years

13 - 17 Lacs

bengaluru

Work from Office

We are seeking a skilled Data Scientist (Vector DB Engineer) for Applications Development & Intelligence Automation - CAT IT Division. The preference for this role is to be based out of the Whitefield PSN Office, Bangalore.
What you will do
As a Vector DB Engineer, you will be responsible for designing, implementing, and optimizing vector databases that enable high-performance, large-scale data processing and retrieval. You will work closely with our data science, machine learning, and software engineering teams to build robust solutions that support our clients' data-intensive applications.
Roles & Responsibilities:
Design, implement, and manage vector databases to support large-scale data storage and retrieval, ensuring low latency and high availability.
Develop efficient data models that facilitate fast vector operations such as similarity search, nearest neighbor search, and other vector-based queries.
Optimize database performance through indexing, partitioning, sharding, and other techniques to handle large-scale datasets.
Integrate vector databases with existing systems and applications, ensuring seamless data flow and accessibility.
Design and implement solutions that scale with growing data volumes, ensuring the database infrastructure can handle increased load and complexity.
Implement security best practices to protect data at rest and in transit, including encryption, access controls, and audit logging.
Monitor database performance and troubleshoot issues as they arise, ensuring system reliability and availability.
Work closely with data scientists, machine learning engineers, and software developers to understand their needs and provide database solutions that meet their requirements.
Maintain comprehensive documentation for database schemas, configurations, and procedures to support operational excellence and knowledge sharing.
What you will have
Deep understanding and hands-on experience with vector databases, including their architecture, query languages, and optimization techniques.
Strong programming skills in languages such as Python, C++, or Java, with experience in developing and optimizing database operations.
Solid understanding of data structures, algorithms, and computational geometry, particularly related to vector search and similarity measures.
Experience with cloud platforms (e.g., AWS, GCP, Azure) and managed database services.
Understanding of machine learning concepts, particularly those related to embedding vectors and similarity searches.
Strong problem-solving skills with a focus on performance optimization and scalability.
Excellent communication skills, with the ability to articulate complex technical concepts to non-technical stakeholders.
A 5-year full-time education is required.
This position requires the candidate to work a five-day-a-week schedule in the office.
Shift Timing: 01:00 PM - 10:00 PM IST
Other Preferred Skills: Knowledge of general accounting practices, a passion for financial reporting, and the ability to learn and adapt quickly with a strong positive attitude. Maintains stable performance under demanding business needs and supports the business with urgency.
Skills Desired:
Business Statistics: Knowledge of the statistical tools, processes, and practices used to describe business results on measurable scales; ability to use statistical tools and processes to assist in making business decisions. Level - Working Knowledge: Explains the basic decision process associated with specific statistics; works with basic statistical functions on a spreadsheet or a calculator; explains reasons for common statistical errors, misinterpretations, and misrepresentations; describes characteristics of sample size, normal distributions, and standard deviation; generates and interprets basic statistical data.
Accuracy and Attention to Detail: Understanding of the necessity and value of accuracy; ability to complete tasks with high levels of precision. Level - Extensive Experience: Evaluates and contributes to best practices; processes large quantities of detailed information with high levels of accuracy; productively balances speed and accuracy; employs techniques for motivating personnel to meet or exceed accuracy goals; implements a variety of cross-checking approaches and mechanisms; demonstrates expertise in quality assurance tools, techniques, and standards.
Analytical Thinking: Knowledge of techniques and tools that promote effective analysis; ability to determine the root cause of organizational problems and create alternative solutions that resolve them. Level - Working Knowledge: Approaches a situation or problem by defining the issue and determining its significance; makes a systematic comparison of two or more alternative solutions; uses flow charts, Pareto charts, fishbone diagrams, etc. to disclose meaningful data patterns; identifies the major forces, events, and people impacting and impacted by the situation at hand; uses logic and intuition to make inferences about the meaning of the data and arrive at conclusions.
Machine Learning: Knowledge of the principles, technologies, and algorithms of machine learning; ability to develop, implement, and deliver related systems, products, and services. Level - Working Knowledge: Completes specific tasks and initiatives utilizing machine learning technologies, such as search engine optimization; utilizes specific tools and techniques to process descriptive and inferential statistics; applies specific computing languages and tools in machine learning, such as R and Python; explores uses of machine learning in one's own area to make business improvements; conducts data mining and cleaning initiatives.
Programming Languages: Knowledge of basic concepts and capabilities of programming; ability to use tools, techniques, and platforms to write and modify programming languages. Level - Working Knowledge: Participates in the implementation and support of specialized programming languages; conducts basic reviews of code written in a specific language on a specific platform; assists with the design and development of specialized programming languages; follows an organization's standards, policies, and guidelines for structured programming specifications; diagnoses and reports minor or routine programming language problems.
Query and Database Access Tools: Knowledge of data management systems; ability to use, support, and access facilities for searching, extracting, and formatting data for further use. Level - Working Knowledge: Defines, creates, and tests simple queries using the associated command language in a specific environment; applies appropriate query tools to connect to the data warehouse; obtains and analyzes query access path information and query results; employs tested query statements to retrieve, insert, update, and delete information; works with advanced features and functions including sorting, filtering, and making simple calculations.
Requirements Analysis: Knowledge of the tools, methods, and techniques of requirements analysis; ability to elicit, analyze, and record required business functional and non-functional requirements to ensure the success of a system or software development project. Level - Working Knowledge: Follows policies, practices, and standards for determining functional and informational requirements; confirms deliverables associated with requirements analysis; communicates with customers and users to elicit and gather client requirements; participates in the preparation of detailed documentation and requirements; utilizes specific organizational methods, tools, and techniques for requirements analysis.
What you will get:
Work-Life Harmony: Earned and medical leave; relocation assistance.
Holistic Development: Personal and professional development through Caterpillar's employee resource groups across the globe; career development opportunities with global prospects.
Health and Wellness: Medical, life, and personal accident coverage; an employee mental wellness assistance program.
Financial Wellness: Employee investment plan; pay for performance - an annual incentive bonus plan.
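For context on the core operation this role optimizes, the sketch below shows a brute-force k-nearest-neighbor query over embedding vectors. A production vector database serves the same query through an approximate index (e.g., HNSW or IVF) rather than an exact scan; the corpus size and embedding dimension here are arbitrary:

```python
# A minimal, brute-force illustration of the nearest-neighbor query that a
# vector database accelerates. Corpus size, dimension, and data are made up.
import numpy as np

def top_k_cosine(query: np.ndarray, corpus: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k corpus vectors most similar to the query."""
    # Normalize so the dot product equals cosine similarity.
    corpus_n = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    query_n = query / np.linalg.norm(query)
    sims = corpus_n @ query_n
    return np.argsort(-sims)[:k]  # indices in descending similarity order

rng = np.random.default_rng(42)
embeddings = rng.normal(size=(10_000, 384))  # e.g. sentence-embedding sized
q = rng.normal(size=384)
print(top_k_cosine(q, embeddings, k=3))
```

The exact scan above is O(n) per query, which is why the indexing, partitioning, and sharding techniques the posting names matter at scale.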

Posted 1 week ago

Apply

5.0 - 10.0 years

5 - 8 Lacs

chennai, bengaluru

Work from Office

In this role you will design, develop, and implement machine learning models and algorithms to solve complex problems. You will work closely with data scientists, software engineers, and product teams to enhance services through innovative AI/ML solutions. Your work will involve building scalable ML pipelines, ensuring data quality, and deploying models into production environments to drive business insights and improve customer experiences.
Job Description
Essential Responsibilities
Develop and optimize machine learning models for various applications.
Preprocess and analyze large datasets to extract meaningful insights.
Deploy ML solutions into production environments using appropriate tools and frameworks.
Collaborate with cross-functional teams to integrate ML models into products and services.
Monitor and evaluate the performance of deployed models.
Minimum Qualifications
Minimum of 5 years of relevant work experience and a Bachelor's degree or equivalent experience.
Experience with ML frameworks like TensorFlow, PyTorch, or scikit-learn.
Familiarity with cloud platforms (AWS, Azure, GCP) and tools for data processing and model deployment.
Several years of experience in designing, implementing, and deploying machine learning models.
Preferred Qualifications
Strong proficiency in Python for data analysis, machine learning, and automation.
Solid understanding of supervised and unsupervised AI/machine learning methods (e.g., XGBoost, LightGBM, Random Forest, clustering, isolation forests, autoencoders, neural networks, transformer-based architectures); a small illustration of one of these methods follows below.
Experience in payment fraud, AML, KYC, or broader risk modeling within fintech or financial institutions.
Experience developing and deploying ML models in production using frameworks such as scikit-learn, TensorFlow, PyTorch, or similar.
Hands-on experience with LLMs (e.g., OpenAI, LLaMA, Claude, Mistral), including use of prompt engineering, retrieval-augmented generation (RAG), and agentic AI to support internal automation and risk workflows.
Ability to work cross-functionally with engineering, product, compliance, and operations teams.
Proven track record of translating complex ML insights into business actions or policy decisions.
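Among the unsupervised methods this posting names is the isolation forest, often used for anomaly scoring in fraud and risk work. A small sketch with scikit-learn follows; the transaction features, contamination rate, and simulated data are illustrative assumptions, not details from the role:

```python
# A hedged sketch of unsupervised anomaly scoring with an isolation forest,
# one of the methods the posting lists. Features (amount, hour bucket) and
# the 1% contamination rate are made-up placeholders.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
normal_txns = rng.normal(loc=[50, 2], scale=[20, 1], size=(5_000, 2))  # typical amounts
fraud_txns = rng.normal(loc=[900, 3], scale=[100, 1], size=(25, 2))    # rare, large amounts
X = np.vstack([normal_txns, fraud_txns])

model = IsolationForest(contamination=0.01, random_state=7).fit(X)
scores = model.decision_function(X)  # lower score = more anomalous
flags = model.predict(X)             # -1 = anomaly, 1 = normal
print(f"flagged {int((flags == -1).sum())} of {len(X)} transactions")
```

In a real fraud pipeline the anomaly scores would typically feed a review queue or a downstream supervised model rather than block transactions outright.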

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies