
56 Vectorization Jobs

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

6.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Line of Service: Advisory | Industry/Sector: Not Applicable | Specialism: Data, Analytics & AI | Management Level: Associate

Job Description & Summary: At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
Responsibilities:
- Design and develop deep learning models for computer vision tasks such as:
  - Object detection (e.g., YOLO, Faster R-CNN)
  - Image classification (e.g., ResNet, EfficientNet)
  - Semantic/instance segmentation (e.g., U-Net, Mask R-CNN)
  - Video frame analysis, tracking, or scene understanding
- Use OpenCV for image/video preprocessing and classical vision tasks: filtering, contour detection, edge detection, motion tracking, image transformations
- Prepare and manage large-scale image/video datasets: frame extraction, annotation support, augmentation
- Evaluate models using appropriate metrics (IoU, mAP, F1, precision-recall curves)
- Fine-tune or build from scratch using frameworks like PyTorch or TensorFlow
- Collaborate with cross-functional teams to translate requirements into deployable models
- Write clean, modular, and well-documented Python code for training and experimentation

Must-Have Skills:
- 4–6 years of experience in applied deep learning roles with a focus on computer vision
- Proficiency in Python with strong skills in data structures, vectorization, and modular coding
- Hands-on experience with OpenCV for traditional image/video processing tasks
- Strong understanding of CNN architectures and common vision model types
- Deep learning experience with PyTorch or TensorFlow
- Practical experience with vision datasets (e.g., COCO, Pascal VOC, custom video/image data)
- Familiarity with model training, loss functions, optimization techniques, and performance tuning

Nice-to-Have Skills:
- Experience with video analytics or multi-object tracking
- Familiarity with Albumentations, imgaug, or other augmentation libraries
- Exposure to transformer-based vision models (e.g., ViT, Swin)
- Basic understanding of explainability techniques for vision models (e.g., Grad-CAM)
- Experience working with edge devices, embedded systems, or real-time CV (optional)

Mandatory Skill Sets: Python, OpenCV, deep learning, GenAI, machine learning, data science
Preferred Skill Sets: Data analysis, SQL, MLOps
Years of Experience Required: 5+
Education Qualification: BE/B.Tech/MBA/MCA
Degrees/Field of Study Required: Bachelor of Engineering, Master of Business Administration
Required Skills: Structured Query Language (SQL)
Optional Skills: Accepting Feedback, Active Listening, Algorithm Development, Alteryx (Automation Platform), Analytic Research, Big Data, Business Data Analytics, Communication, Complex Data Analysis, Conducting Research, Customer Analysis, Customer Needs Analysis, Dashboard Creation, Data Analysis, Data Analysis Software, Data Collection, Data-Driven Insights, Data Integration, Data Integrity, Data Mining, Data Modeling, Data Pipeline, Data Preprocessing, Data Quality {+ 33 more}
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
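The listing above calls for evaluating vision models with metrics such as IoU and mAP. As an illustrative sketch only (the function name is ours, not from the posting), Intersection-over-Union for two axis-aligned boxes in `(x1, y1, x2, y2)` form can be computed like this:

```python
def iou(box_a, box_b):
    """Intersection-over-Union for two axis-aligned boxes (x1, y1, x2, y2)."""
    # Intersection rectangle corners.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

Detection pipelines typically threshold this value (e.g. IoU ≥ 0.5) to decide whether a predicted box counts as a true positive when computing mAP.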

Posted 2 days ago

Apply

5.0 - 9.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

You will join Ziance Technologies as an experienced Data Engineer (Gen AI), leveraging your expertise in Python and the Azure tech stack to design and implement advanced data solutions, with a special focus on Generative AI concepts. With 5–8 years of experience, you must possess strong proficiency in Python, along with hands-on experience with REST APIs, FastAPI, Graph APIs, and SQLAlchemy. Expertise in Azure services such as Data Lake, Azure SQL, Function Apps, and Azure Cognitive Search will be crucial for this role. A good understanding of concepts such as chunking, embeddings, vectorization, indexing, prompting, hallucinations, and RAG will be beneficial. Experience with DevOps practices, including creating pull requests and maintaining code repositories, is a must. Strong communication skills and the ability to collaborate effectively with team members are essential for success in this position. If you are looking to work in a dynamic environment where you can apply your skills in Azure, Python, and the data stack, this role at Ziance Technologies could be the perfect fit for you.
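The posting above expects familiarity with chunking before embedding and indexing. A minimal sketch of overlapping word-window chunking (illustrative only; the helper name and defaults are ours, and production pipelines would usually chunk by tokens rather than words):

```python
def chunk_text(text, chunk_size=200, overlap=50):
    """Split text into overlapping word-window chunks for embedding/indexing.
    Assumes overlap < chunk_size; overlap preserves context across boundaries."""
    words = text.split()
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(words), step):
        window = words[start:start + chunk_size]
        if window:
            chunks.append(" ".join(window))
        if start + chunk_size >= len(words):
            break  # Last window already covers the tail of the text.
    return chunks
```

Each chunk would then be embedded and written to a vector index (e.g. Azure Cognitive Search in this stack) for retrieval.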

Posted 4 days ago

Apply

0 years

0 Lacs

India

On-site

#DataScientist #DataAnalysis #RAG #EDA #NumPy #scikit-learn #pandas #NLP #NER #FAISS #AWS #BERT #Python

Job Overview:
- Build, train, and validate machine learning models for prediction, classification, and clustering to support NBA (Next Best Action) use cases.
- Conduct exploratory data analysis (EDA) on both structured and unstructured data to extract actionable insights and identify behavioral drivers.
- Design and deploy A/B testing frameworks and build pipelines for model evaluation and continuous monitoring.
- Develop vectorization and embedding pipelines using models like Word2Vec and BERT to enable semantic understanding and similarity search.
- Implement Retrieval-Augmented Generation (RAG) workflows to enrich recommendations by integrating internal and external knowledge bases.
- Collaborate with cross-functional teams (engineering, product, marketing) to deliver data-driven Next Best Action strategies.
- Present findings and recommendations clearly to technical and non-technical stakeholders.

Required Skills & Experience:
- Strong programming skills in Python, including libraries like pandas, NumPy, and scikit-learn.
- Practical experience with text vectorization and embedding generation (Word2Vec, BERT, SBERT, etc.).
- Proficiency in prompt engineering and hands-on experience building RAG pipelines using LangChain, Haystack, or custom frameworks.
- Familiarity with vector databases (e.g., PostgreSQL with pgvector, FAISS, Pinecone, Weaviate).
- Expertise in Natural Language Processing (NLP) tasks such as NER, text classification, and topic modeling.
- Sound understanding of supervised learning, recommendation systems, and classification algorithms.
- Exposure to cloud platforms (AWS, GCP, Azure) and containerization tools (Docker, Kubernetes) is a plus.
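The similarity search this role describes boils down to ranking stored embedding vectors by cosine similarity to a query vector. A dependency-free sketch of that core operation (names are ours; at scale you would use an approximate index such as FAISS or pgvector, as the listing notes):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def top_k(query_vec, index, k=2):
    """Rank (doc_id, vector) pairs by cosine similarity to the query."""
    scored = [(doc_id, cosine(query_vec, vec)) for doc_id, vec in index]
    return sorted(scored, key=lambda t: t[1], reverse=True)[:k]
```

In a RAG workflow, the documents behind the top-k ids are what gets stuffed into the generation prompt.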

Posted 4 days ago

Apply

1.0 - 2.0 years

2 Lacs

Bharatpur

On-site

✅ Location: Bharatpur (On-Site)
✅ Job Type: Full-Time
✅ Experience: 1–2 Years
✅ Salary: ₹25,000 – ₹35,000/month (based on skills & experience)

✅ Job Description:
We are looking for a passionate and skilled Python Developer with experience in Machine Learning (ML) and Large Language Models (LLMs) to join our on-site team in Bharatpur. You'll be responsible for developing intelligent tools and solutions using open-source ML/NLP frameworks.

✅ Key Responsibilities:
- Build and deploy ML/LLM-based solutions using Python
- Work on NLP tasks such as text classification, summarization, and chatbot development
- Integrate and fine-tune pre-trained LLMs (Hugging Face, OpenAI, etc.)
- Use libraries like Transformers, LangChain, scikit-learn, or TensorFlow
- Handle data collection, cleaning, vectorization, and embeddings
- Create APIs using Flask or FastAPI for ML model deployment
- Collaborate with the product and engineering teams on real-world use cases

✅ Required Skills:
- 1–2 years of hands-on experience in Python and ML/NLP
- Knowledge of LLM frameworks like Hugging Face, LangChain, or OpenAI
- Experience with vector databases like FAISS or Pinecone (preferred)
- Familiarity with Transformers, embeddings, and tokenization
- Understanding of REST APIs and integration
- Good problem-solving and debugging skills

✅ Good to Have:
- Experience with chatbot frameworks or RAG pipelines
- Exposure to tools like Gradio or Streamlit for UI prototyping
- Version control using Git
- Docker/basic deployment skills

✅ What We Offer:
- Competitive salary with performance-based growth
- Opportunity to work on innovative AI projects locally
- Learning-focused culture and skill development
- Supportive and collaborative work environment
- 5-day work week (Mon–Fri)

If you're excited about AI, Python, and real-world ML applications, apply now and join our team in Bharatpur!
Job Types: Full-time, Permanent Pay: From ₹20,347.73 per month Benefits: Cell phone reimbursement Health insurance Internet reimbursement Life insurance Paid sick time Provident Fund Location: Bharatpur, Rajasthan (Required) Work Location: In person
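The "data cleaning and vectorization" duty above often starts with something as simple as a bag-of-words count vectorizer before moving to learned embeddings. A minimal sketch (illustrative only; scikit-learn's `CountVectorizer` is the usual production choice):

```python
from collections import Counter

def build_vocab(corpus):
    """Map each distinct lowercase word in the corpus to a column index."""
    vocab = sorted({w for doc in corpus for w in doc.lower().split()})
    return {w: i for i, w in enumerate(vocab)}

def vectorize(doc, vocab):
    """Map a document to a term-count vector over the fixed vocabulary."""
    counts = Counter(doc.lower().split())
    return [counts.get(w, 0) for w in vocab]
```

Vectors produced this way can feed a classifier for the text-classification tasks the posting mentions.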

Posted 5 days ago

Apply

0.0 - 2.0 years

0 Lacs

Bharatpur, Rajasthan

On-site

✅ Location: Bharatpur (On-Site)
✅ Job Type: Full-Time
✅ Experience: 1–2 Years
✅ Salary: ₹25,000 – ₹35,000/month (based on skills & experience)

✅ Job Description:
We are looking for a passionate and skilled Python Developer with experience in Machine Learning (ML) and Large Language Models (LLMs) to join our on-site team in Bharatpur. You'll be responsible for developing intelligent tools and solutions using open-source ML/NLP frameworks.

✅ Key Responsibilities:
- Build and deploy ML/LLM-based solutions using Python
- Work on NLP tasks such as text classification, summarization, and chatbot development
- Integrate and fine-tune pre-trained LLMs (Hugging Face, OpenAI, etc.)
- Use libraries like Transformers, LangChain, scikit-learn, or TensorFlow
- Handle data collection, cleaning, vectorization, and embeddings
- Create APIs using Flask or FastAPI for ML model deployment
- Collaborate with the product and engineering teams on real-world use cases

✅ Required Skills:
- 1–2 years of hands-on experience in Python and ML/NLP
- Knowledge of LLM frameworks like Hugging Face, LangChain, or OpenAI
- Experience with vector databases like FAISS or Pinecone (preferred)
- Familiarity with Transformers, embeddings, and tokenization
- Understanding of REST APIs and integration
- Good problem-solving and debugging skills

✅ Good to Have:
- Experience with chatbot frameworks or RAG pipelines
- Exposure to tools like Gradio or Streamlit for UI prototyping
- Version control using Git
- Docker/basic deployment skills

✅ What We Offer:
- Competitive salary with performance-based growth
- Opportunity to work on innovative AI projects locally
- Learning-focused culture and skill development
- Supportive and collaborative work environment
- 5-day work week (Mon–Fri)

If you're excited about AI, Python, and real-world ML applications, apply now and join our team in Bharatpur!
Job Types: Full-time, Permanent Pay: From ₹20,347.73 per month Benefits: Cell phone reimbursement Health insurance Internet reimbursement Life insurance Paid sick time Provident Fund Location: Bharatpur, Rajasthan (Required) Work Location: In person

Posted 5 days ago

Apply

4.0 years

0 - 8 Lacs

Noida

On-site

Line of Service: Advisory | Industry/Sector: Not Applicable | Specialism: Data, Analytics & AI | Management Level: Associate

Job Description & Summary: At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
Responsibilities:
- Design and develop deep learning models for computer vision tasks such as:
  - Object detection (e.g., YOLO, Faster R-CNN)
  - Image classification (e.g., ResNet, EfficientNet)
  - Semantic/instance segmentation (e.g., U-Net, Mask R-CNN)
  - Video frame analysis, tracking, or scene understanding
- Use OpenCV for image/video preprocessing and classical vision tasks: filtering, contour detection, edge detection, motion tracking, image transformations
- Prepare and manage large-scale image/video datasets: frame extraction, annotation support, augmentation
- Evaluate models using appropriate metrics (IoU, mAP, F1, precision-recall curves)
- Fine-tune or build from scratch using frameworks like PyTorch or TensorFlow
- Collaborate with cross-functional teams to translate requirements into deployable models
- Write clean, modular, and well-documented Python code for training and experimentation

Must-Have Skills:
- 4–6 years of experience in applied deep learning roles with a focus on computer vision
- Proficiency in Python with strong skills in data structures, vectorization, and modular coding
- Hands-on experience with OpenCV for traditional image/video processing tasks
- Strong understanding of CNN architectures and common vision model types
- Deep learning experience with PyTorch or TensorFlow
- Practical experience with vision datasets (e.g., COCO, Pascal VOC, custom video/image data)
- Familiarity with model training, loss functions, optimization techniques, and performance tuning

Nice-to-Have Skills:
- Experience with video analytics or multi-object tracking
- Familiarity with Albumentations, imgaug, or other augmentation libraries
- Exposure to transformer-based vision models (e.g., ViT, Swin)
- Basic understanding of explainability techniques for vision models (e.g., Grad-CAM)
- Experience working with edge devices, embedded systems, or real-time CV (optional)

Mandatory Skill Sets: Python, OpenCV, deep learning, GenAI, machine learning, data science
Preferred Skill Sets: Data analysis, SQL, MLOps
Years of Experience Required: 5+
Education Qualification: BE/B.Tech/MBA/MCA
Degrees/Field of Study Required: Bachelor of Engineering, Master of Business Administration
Required Skills: Structured Query Language (SQL)
Optional Skills: Accepting Feedback, Active Listening, Algorithm Development, Alteryx (Automation Platform), Analytic Research, Big Data, Business Data Analytics, Communication, Complex Data Analysis, Conducting Research, Customer Analysis, Customer Needs Analysis, Dashboard Creation, Data Analysis, Data Analysis Software, Data Collection, Data-Driven Insights, Data Integration, Data Integrity, Data Mining, Data Modeling, Data Pipeline, Data Preprocessing, Data Quality {+ 33 more}
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No

Posted 1 week ago

Apply

6.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Line of Service: Advisory | Industry/Sector: Not Applicable | Specialism: Data, Analytics & AI | Management Level: Associate

Job Description & Summary: At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
Responsibilities:
- Design and develop deep learning models for computer vision tasks such as:
  - Object detection (e.g., YOLO, Faster R-CNN)
  - Image classification (e.g., ResNet, EfficientNet)
  - Semantic/instance segmentation (e.g., U-Net, Mask R-CNN)
  - Video frame analysis, tracking, or scene understanding
- Use OpenCV for image/video preprocessing and classical vision tasks: filtering, contour detection, edge detection, motion tracking, image transformations
- Prepare and manage large-scale image/video datasets: frame extraction, annotation support, augmentation
- Evaluate models using appropriate metrics (IoU, mAP, F1, precision-recall curves)
- Fine-tune or build from scratch using frameworks like PyTorch or TensorFlow
- Collaborate with cross-functional teams to translate requirements into deployable models
- Write clean, modular, and well-documented Python code for training and experimentation

Must-Have Skills:
- 4–6 years of experience in applied deep learning roles with a focus on computer vision
- Proficiency in Python with strong skills in data structures, vectorization, and modular coding
- Hands-on experience with OpenCV for traditional image/video processing tasks
- Strong understanding of CNN architectures and common vision model types
- Deep learning experience with PyTorch or TensorFlow
- Practical experience with vision datasets (e.g., COCO, Pascal VOC, custom video/image data)
- Familiarity with model training, loss functions, optimization techniques, and performance tuning

Nice-to-Have Skills:
- Experience with video analytics or multi-object tracking
- Familiarity with Albumentations, imgaug, or other augmentation libraries
- Exposure to transformer-based vision models (e.g., ViT, Swin)
- Basic understanding of explainability techniques for vision models (e.g., Grad-CAM)
- Experience working with edge devices, embedded systems, or real-time CV (optional)

Mandatory Skill Sets: Python, OpenCV, deep learning, GenAI, machine learning, data science
Preferred Skill Sets: Data analysis, SQL, MLOps
Years of Experience Required: 5+
Education Qualification: BE/B.Tech/MBA/MCA
Degrees/Field of Study Required: Bachelor of Engineering, Master of Business Administration
Required Skills: Structured Query Language (SQL)
Optional Skills: Accepting Feedback, Active Listening, Algorithm Development, Alteryx (Automation Platform), Analytic Research, Big Data, Business Data Analytics, Communication, Complex Data Analysis, Conducting Research, Customer Analysis, Customer Needs Analysis, Dashboard Creation, Data Analysis, Data Analysis Software, Data Collection, Data-Driven Insights, Data Integration, Data Integrity, Data Mining, Data Modeling, Data Pipeline, Data Preprocessing, Data Quality {+ 33 more}
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No

Posted 1 week ago

Apply

7.5 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-Have Skills: Microsoft Azure OpenAI Service
Good-to-Have Skills: NA
Minimum Experience Required: 7.5 years
Educational Qualification: 15 years full-time education

Summary: As an Application Lead, you will be responsible for leading the effort to design, build, and configure applications using Azure AI Services, Microsoft Azure OpenAI Service, and Azure PaaS components. Your typical day will involve collaborating with cross-functional teams, managing project timelines, and ensuring the successful delivery of high-quality applications.

Roles & Responsibilities:
- Lead the design, development, and deployment of applications using Microsoft Azure OpenAI Service, Azure AI Search (Cognitive Search), Azure Language Services (CLU), Microsoft Bot Framework / Power Virtual Agents (Microsoft Copilot Studio), and Azure PaaS.
- Act as the primary point of contact for the project, collaborating with cross-functional teams to ensure project timelines are met.
- Ensure the successful delivery of high-quality applications, adhering to best practices and industry standards.
- Provide technical guidance and mentorship to team members, fostering a culture of continuous learning and growth.
- Stay updated with the latest advancements in Microsoft Azure OpenAI Service, Azure AI Search (Cognitive Search), and Azure Language Services (CLU), integrating innovative approaches for sustained competitive advantage.

Professional & Technical Skills:
- Must-have: Strong experience with Microsoft Azure OpenAI Service, Azure AI Search (Cognitive Search), Azure Language Services (CLU), and Azure PaaS, including API Management, Key Vault, and Application Insights.
- Must-have: At least 1+ years of hands-on experience with Azure-based LLMs, with knowledge of embeddings/vectorization and semantic/vector-based indexing and search. Strong experience with LLM frameworks such as LangChain and Semantic Kernel.
- Must-have: Experience with conversational AI using Microsoft Bot Framework or Power Virtual Agents (Microsoft Copilot Studio), including integrating chatbot solutions with multiple channels.
- Must-have: Strong understanding of software development best practices, including Agile methodologies and industry standards, with a special focus on building APIs, function apps, and logic apps. Experience with Python, the Visual Studio Code IDE, and Azure AI Studio.
- Good to have: Knowledge of other vector databases, knowledge graphs, and other programming languages such as C#. Experience with different LLMs such as Llama, Mistral, etc.
- Good to have: Experience with application deployment and containerization technologies such as Docker or Kubernetes.
- Good to have: Experience with other Azure AI services such as Video Indexer, Computer Vision, Custom Vision, and Document Intelligence.
- Experience with database technologies such as SQL Server and Cosmos DB.
- Solid grasp of software testing and quality assurance principles.
- Knowledge of DevOps practices.

Additional Information:
- The candidate should have a minimum of years of experience in software development.
- The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering impactful solutions.
- This position is based at our Bengaluru office.
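The RAG knowledge asked for above ultimately means assembling a grounded prompt from retrieved chunks before calling the LLM. A framework-free sketch of that step (all names are ours; in this stack the chunks would come from Azure AI Search and the prompt would go to Azure OpenAI):

```python
def build_rag_prompt(question, retrieved_chunks, max_chars=2000):
    """Assemble a grounded prompt from retrieved context chunks (RAG),
    keeping the stuffed context within a rough character budget."""
    context, used = [], 0
    for chunk in retrieved_chunks:
        if used + len(chunk) > max_chars:
            break  # Budget exhausted; drop lower-ranked chunks.
        context.append(chunk)
        used += len(chunk)
    joined = "\n---\n".join(context)
    return (
        "Answer using only the context below. "
        "If the answer is not in the context, say so.\n\n"
        f"Context:\n{joined}\n\nQuestion: {question}\nAnswer:"
    )
```

Instructing the model to refuse when the context is insufficient is one common mitigation for the hallucination risk the listing alludes to.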

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Karnataka

On-site

NTT DATA is looking for a Sr. Information Security Engineer to join their team in Bangalore, Karnataka (IN-KA), India. As a Sr. Information Security Engineer, you will bring your expertise in IT technology, AI/ML, data engineering, Python coding, and enterprise-wide integration programs to contribute to our innovative and forward-thinking organization.

**Required Skills:**
- At least 5 years of experience in IT technology.
- A minimum of 2 years of experience in AI/ML, with strong working knowledge of neural networks.
- 2+ years of data engineering experience, preferably using AWS Glue, Cribl, SignalFx, OpenTelemetry, or AWS Lambda, is an added advantage.
- 2+ years of Python coding using NumPy, vectorization, and TensorFlow.
- 2+ years of experience leading complex enterprise-wide integration programs and efforts as an individual contributor.

**Preferred Skills:**
- A degree in Mathematics or Physics is preferred.
- At least 2 years of technical knowledge of cloud technologies such as AWS, Azure, and GCP.
- Excellent verbal, written, and interpersonal communication skills.
- Ability to provide strong customer service.

If you are passionate about innovation and growth and can contribute effectively to our diverse and global team, we encourage you to apply now and be part of NTT DATA's commitment to helping clients innovate, optimize, and transform for long-term success.

About NTT DATA: NTT DATA is a $30 billion trusted global innovator of business and technology services. With a strong presence in more than 50 countries, we serve 75% of the Fortune Global 100 and have a robust partner ecosystem of established and start-up companies.
Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation, and management of applications, infrastructure, and connectivity. As one of the leading providers of digital and AI infrastructure globally, we are dedicated to helping organizations and society move confidently and sustainably into the digital future. Join us on this journey toward innovation and success! #LI-INPAS
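"NumPy and vectorization" in the listing above means replacing explicit Python loops with whole-array operations. A small sketch contrasting the two styles on a z-score normalization (illustrative only; assumes NumPy is installed):

```python
import numpy as np

def zscore_loop(xs):
    # Baseline: explicit Python-level loop over scalars.
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    std = var ** 0.5
    return [(x - mean) / std for x in xs]

def zscore_vectorized(xs):
    # Same computation expressed as whole-array NumPy operations,
    # which execute in optimized C instead of the Python interpreter.
    a = np.asarray(xs, dtype=float)
    return (a - a.mean()) / a.std()
```

On large arrays the vectorized form is typically orders of magnitude faster, which is why it appears as a named skill in several of these postings.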

Posted 1 week ago

Apply

5.0 - 10.0 years

20 - 35 Lacs

Bengaluru

Work from Office

ONNX implementation and optimization in AIX: We are seeking a strong application developer with deep knowledge of compiler behaviour when implementing numerically intensive AI algorithms: understanding how to vectorize and optimize code, and how to communicate the benefits and behaviour of the optimized code. The role requires knowledge of the algorithms used in mathematical modelling, simulation, and machine learning, particularly ONNX, and demonstrated experience implementing these algorithms in applications that require robustness and performance. The job requires analysing performance and data-handling issues, such as efficient handling of endianness formats, to achieve the best possible performance. This is to be accomplished using new algorithms and advanced processor features, and by leveraging those features through advanced compiler optimizations and libraries. The candidate will have a broad awareness of how to implement algorithms that deliver performance gains while meeting the application's consistency requirements.

Required skills:
- Development experience with the numeric algorithms used in mathematical modelling, simulation, and machine learning, particularly ONNX
- Experience with C and C++ application programming using one or more of these compilers: GCC, XL C, ICC, Clang/LLVM, AOCC
- Experience applying numeric algorithms in complex multi-threaded, multiprocessing applications on UNIX or Linux
- Experience debugging runtime issues in large-scale projects
- Familiarity with Python-based coding
- Familiarity with the Java Development Kit (JDK) and Java Virtual Machine (JVM)

Preferred skills:
- Open-source contributions, system programming, networking (distributed/parallel applications)
- Application performance investigation and analysis using tools like valgrind, perf, Nectar, PMU, pipestat, nmon
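The endianness concern above matters because AIX on POWER is big-endian while most model weights are serialized little-endian, so numeric buffers must be decoded with an explicit byte order. A stdlib-only sketch of the idea (helper names are ours; a C implementation would use the same byte-order reasoning):

```python
import struct

def read_f32(buf, big_endian):
    """Decode a 32-bit float honoring the source byte order explicitly."""
    fmt = ">f" if big_endian else "<f"
    return struct.unpack(fmt, buf)[0]

def swap32(buf):
    """Byte-swap a buffer of 32-bit words in one pass, e.g. to convert
    a whole little-endian tensor buffer for a big-endian host."""
    n = len(buf) // 4
    return struct.pack("<" + "I" * n, *struct.unpack(">" + "I" * n, buf))
```

Reading with the wrong byte order silently produces garbage values rather than an error, which is why it surfaces as a performance-and-correctness concern in the posting.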

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Role: AI/ML Engineer
Mandatory Skills: Python, ML libraries (PyTorch, TensorFlow), Gen AI, Kubernetes, NLP
Good-to-Have Skills: MLOps
Location: Hyderabad only
Work Type: Work from Office (5 days a week)
Experience: 4 to 8 years

Skills Required:
- Strong programming skills in Python, Java, Spring Boot, or Scala.
- Experience with ML frameworks like TensorFlow, PyTorch, XGBoost, or LightGBM.
- Familiarity with information retrieval techniques (BM25, vector search, learning-to-rank).
- Knowledge of embedding models, user/item vectorization, or session-based personalization.
- Experience with large-scale distributed systems (e.g., Spark, Kafka, Kubernetes).
- Hands-on experience with real-time ML systems.
- Background in NLP, graph neural networks, or sequence modeling.
- Experience with A/B testing frameworks and metrics like NDCG, MAP, or CTR.
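NDCG, one of the ranking metrics named above, compares a ranking's discounted gain against the best achievable ordering. A minimal sketch (illustrative only; the function names are ours):

```python
import math

def dcg(rels):
    """Discounted cumulative gain: graded relevance discounted by log2 rank."""
    return sum(r / math.log2(i + 2) for i, r in enumerate(rels))

def ndcg(ranked_rels):
    """NDCG: DCG of the observed ranking divided by DCG of the ideal ranking."""
    ideal = dcg(sorted(ranked_rels, reverse=True))
    return dcg(ranked_rels) / ideal if ideal else 0.0
```

A perfect ranking scores 1.0, and burying the only relevant item deeper in the list drives the score toward 0, which is what makes NDCG suitable for evaluating the search and personalization systems this role covers.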

Posted 1 week ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Us: Planful is the pioneer of financial performance management cloud software. The Planful platform, which helps businesses drive peak financial performance, is used around the globe to streamline business-wide planning, budgeting, consolidations, reporting, and analytics. Planful empowers finance, accounting, and business users to plan confidently, close faster, and report accurately. More than 1,500 customers, including Bose, Boston Red Sox, Five Guys, Grafton Plc, Gousto, Specialized and Zappos rely on Planful to accelerate cycle times, increase productivity, and improve accuracy. Planful is a private company backed by Vector Capital, a leading global private equity firm. Learn more at planful.com. About the Role: We are looking for self-driven, self-motivated, and passionate technical experts who would love to join us in solving the hardest problems in the EPM space. If you are capable of diving deep into our tech stack to glean through memory allocations, floating point calculations, and data indexing (in addition to many others), come join us. Requirements: 5+ years in a mid-level Python Engineer role, preferably in analytics or fintech. Expert in Python (Flask, Django, pandas, NumPy, SciPy, scikit-learn) with hands-on performance tuning. Familiarity with AI-assisted development tools and IDEs (Cursor, Windsurf) and modern editor integrations (VS Code + Cline). Exposure to libraries supporting time-series forecasting. Proficient in SQL for complex queries on large datasets. Excellent analytical thinking, problem-solving, and communication skills. Nice to have: Shape financial time-series data: outlier detection/handling, missing-value imputation, techniques for small/limited datasets. Profile & optimize Python code (vectorization, multiprocessing, cProfile). Monitor model performance and iterate to improve accuracy. Collaborate with data scientists and stakeholders to integrate solutions. 
Why Planful: Planful exists to enrich the world by helping our customers and our people achieve peak performance. To foster the best-in-class work we're so proud of, we've created a best-in-class culture, including:
- 2 Volunteer days, Birthday PTO, and quarterly company Wellness Days
- 3 months' supply of diapers and meal deliveries for the first month of your Maternity/Paternity leave
- Annual Planful Palooza, our in-person, company-wide culture event
- Company-wide Mentorship program with Executive sponsorship of CFO and Manager-specific monthly training programs
- Employee Resource Groups such as Women of Planful, LatinX at Planful, Parents of Planful, and many more

We encourage our teammates to bring their authentic selves to the team, and have full support in creating new ERGs & communities along the way.
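The "Profile & optimize Python code (vectorization, multiprocessing, cProfile)" item above can be illustrated with a minimal sketch: the same rolling-mean computation written as a plain Python loop and as a NumPy vectorized expression. The function names and data are illustrative only, not from the posting.

```python
import numpy as np

def rolling_mean_loop(values, window):
    # Naive Python loop: clear but slow on large inputs.
    out = []
    for i in range(len(values) - window + 1):
        out.append(sum(values[i:i + window]) / window)
    return out

def rolling_mean_vectorized(values, window):
    # Vectorized with a cumulative sum: one pass, no Python-level loop.
    # Profile either version with: python -m cProfile -s cumtime script.py
    c = np.cumsum(np.concatenate(([0.0], values)))
    return (c[window:] - c[:-window]) / window

data = np.arange(10, dtype=float)
assert np.allclose(rolling_mean_loop(data, 3), rolling_mean_vectorized(data, 3))
```

On arrays of millions of elements the vectorized version is typically orders of magnitude faster, which is the kind of win the posting's performance-tuning bullet refers to.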

Posted 1 week ago


Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Company - AppTestify
Work Location - Remote
Experience - 4+ years

Key Responsibilities:
- Build, train, and validate machine learning models for prediction, classification, and clustering to support NBA (Next Best Action) use cases.
- Conduct exploratory data analysis (EDA) on both structured and unstructured data to extract actionable insights and identify behavioral drivers.
- Design and deploy A/B testing frameworks and build pipelines for model evaluation and continuous monitoring.
- Develop vectorization and embedding pipelines using models like Word2Vec and BERT to enable semantic understanding and similarity search.
- Implement Retrieval-Augmented Generation (RAG) workflows to enrich recommendations by integrating internal and external knowledge bases.
- Collaborate with cross-functional teams (engineering, product, marketing) to deliver data-driven Next Best Action strategies.
- Present findings and recommendations clearly to technical and non-technical stakeholders.

Required Skills & Experience:
- Strong programming skills in Python, including libraries like pandas, NumPy, and scikit-learn.
- Practical experience with text vectorization and embedding generation (Word2Vec, BERT, SBERT, etc.).
- Proficiency in prompt engineering and hands-on experience building RAG pipelines using LangChain, Haystack, or custom frameworks.
- Familiarity with vector databases (e.g., PostgreSQL with pgvector, FAISS, Pinecone, Weaviate).
- Expertise in Natural Language Processing (NLP) tasks such as NER, text classification, and topic modeling.
- Sound understanding of supervised learning, recommendation systems, and classification algorithms.
- Exposure to cloud platforms (AWS, GCP, Azure) and containerization tools (Docker, Kubernetes) is a plus.
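The embedding and similarity-search bullets above boil down to one core operation: cosine similarity between vectors. A minimal sketch, using tiny hand-made vectors as stand-ins for real Word2Vec/BERT embeddings (everything here is illustrative):

```python
import numpy as np

def cosine_similarity_matrix(embeddings):
    # Normalize rows, then a single matrix product gives all pairwise cosines.
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    unit = embeddings / norms
    return unit @ unit.T

# Toy 4-dimensional "embeddings" standing in for real model output.
vecs = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
])
sims = cosine_similarity_matrix(vecs)
assert np.isclose(sims[0, 1], 1.0)  # identical direction -> similarity 1
assert np.isclose(sims[0, 2], 0.0)  # orthogonal -> similarity 0
```

A real pipeline would obtain `embeddings` from a sentence-encoder model and hand the similarity search off to a vector database; the math it performs is the same.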

Posted 2 weeks ago

Apply

5.0 years

8 - 30 Lacs

Viman Nagar, Pune, Maharashtra

On-site

Key Responsibilities:
- Build, train, and validate machine learning models for prediction, classification, and clustering to support NBA (Next Best Action) use cases.
- Conduct exploratory data analysis (EDA) on both structured and unstructured data to extract actionable insights and identify behavioral drivers.
- Design and deploy A/B testing frameworks and build pipelines for model evaluation and continuous monitoring.
- Develop vectorization and embedding pipelines using models like Word2Vec and BERT to enable semantic understanding and similarity search.
- Implement Retrieval-Augmented Generation (RAG) workflows to enrich recommendations by integrating internal and external knowledge bases.
- Collaborate with cross-functional teams (engineering, product, marketing) to deliver data-driven Next Best Action strategies.
- Present findings and recommendations clearly to technical and non-technical stakeholders.

Required Skills & Experience:
- Strong programming skills in Python, including libraries like pandas, NumPy, and scikit-learn.
- Practical experience with text vectorization and embedding generation (Word2Vec, BERT, SBERT, etc.).
- Proficiency in prompt engineering and hands-on experience building RAG pipelines using LangChain, Haystack, or custom frameworks.
- Familiarity with vector databases (e.g., PostgreSQL with pgvector, FAISS, Pinecone, Weaviate).
- Expertise in Natural Language Processing (NLP) tasks such as NER, text classification, and topic modeling.
- Sound understanding of supervised learning, recommendation systems, and classification algorithms.
- Exposure to cloud platforms (AWS, GCP, Azure) and containerization tools (Docker, Kubernetes) is a plus.

Experience - 5+ years
Job Type: Full-time
Pay: ₹800,000.38 - ₹3,000,000.60 per year
Benefits: Health insurance, Paid time off
Schedule: Day shift
Location: Viman Nagar, Pune, Maharashtra (Preferred)
Work Location: In person
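The RAG workflow mentioned above has two steps: retrieve relevant documents, then augment the LLM prompt with them. A toy sketch of that shape, using word overlap in place of real embedding similarity (all names and documents here are invented for illustration):

```python
def retrieve(query, documents, k=2):
    # Score documents by word overlap with the query -- a stand-in for
    # the embedding-based similarity search a real RAG pipeline would use.
    q = set(query.lower().split())
    scored = sorted(documents, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query, documents):
    # Augment the prompt with retrieved context before calling an LLM.
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "Planful offers budgeting and planning software.",
    "The next best action model ranks offers per customer.",
    "Weather today is sunny.",
]
prompt = build_prompt("which model ranks the next best action", docs)
assert "ranks offers per customer" in prompt
```

Frameworks like LangChain or Haystack wrap exactly this retrieve-then-augment pattern around real embedding models and vector stores.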

Posted 2 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

karnataka

On-site

NTT DATA is looking for a Sr. Information Security Engineer to be a part of their team in Bangalore, Karnataka (IN-KA), India. As a Sr. Information Security Engineer, you will be responsible for various tasks, including but not limited to:
- Having 5+ years of experience in IT technology.
- Possessing 2+ years of experience in AI/ML with a strong working knowledge of neural networks.
- Demonstrating 2+ years of data engineering experience, preferably using AWS Glue, Cribl, SignalFx, OpenTelemetry, or AWS Lambda.
- Having 2+ years of Python coding experience using NumPy, vectorization, and TensorFlow.
- Leading complex enterprise-wide integration programs and efforts for a minimum of 2 years as an individual contributor.

Preferred Skills:
- Holding a Mathematics or Physics degree.
- Having 2+ years of technical knowledge in cloud technologies such as AWS, Azure, GCP.
- Demonstrating excellent verbal, written, and interpersonal communication skills.
- Being able to provide strong customer service.

NTT DATA is a trusted global innovator of business and technology services, with a commitment to helping clients innovate, optimize, and transform for long-term success. As a Global Top Employer, NTT DATA has diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Their services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation, and management of applications, infrastructure, and connectivity. NTT DATA is one of the leading providers of digital and AI infrastructure globally. If you are an exceptional, innovative, and passionate individual looking to grow with a forward-thinking organization, apply now to be a part of NTT DATA's team in Bangalore, Karnataka, India.
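The "NumPy, vectorization" requirement above usually means replacing explicit loops with array-level operations via broadcasting. A minimal sketch (function name and data are illustrative):

```python
import numpy as np

def standardize(matrix):
    # Column-wise z-score without an explicit loop: broadcasting subtracts
    # the per-column mean and divides by the per-column standard deviation.
    return (matrix - matrix.mean(axis=0)) / matrix.std(axis=0)

x = np.array([[1.0, 10.0],
              [3.0, 30.0]])
z = standardize(x)
assert np.allclose(z.mean(axis=0), 0.0)  # each column now has mean 0
assert np.allclose(z.std(axis=0), 1.0)   # and standard deviation 1
```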

Posted 2 weeks ago

Apply

20.0 years

0 Lacs

Sholinganallur, Tamil Nadu, India

On-site

About Us: For over 20 years, Smart Data Solutions has been partnering with leading payer organizations to provide automation and technology solutions enabling data standardization and workflow automation. The company brings a comprehensive set of turn-key services to handle all claims and claims-related information regardless of format (paper, fax, electronic), digitizing and normalizing it for seamless use by payer clients. Solutions include intelligent data capture, conversion and digitization, mailroom management, comprehensive clearinghouse services, and proprietary workflow offerings. SDS' headquarters are just outside of St. Paul, MN, and the company leverages dedicated onshore and offshore resources as part of its service delivery model. The company counts over 420 healthcare organizations as clients, including multiple Blue Cross Blue Shield state plans, large regional health plans, and leading independent TPAs, handling over 500 million transactions of varying types annually with a 98%+ customer retention rate. SDS has also invested meaningfully in automation and machine learning capabilities across its tech-enabled processes to drive scalability and greater internal operating efficiency while also improving client results. SDS recently partnered with a leading growth-oriented investment firm, Parthenon Capital, to further accelerate expansion and product innovation.

Location: 6th Floor, Block 4A, Millenia Business Park, Phase II, MGR Salai, Kandanchavadi, Perungudi, Chennai 600096, India.

Smart Data Solutions is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, sex, sexual orientation, gender identity, religion, national origin, disability, veteran status, age, marital status, pregnancy, genetic information, or other legally protected status. To perform this job successfully, an individual must be able to perform each essential duty satisfactorily. The requirements listed above are representative of the knowledge, skill, and/or ability required. Reasonable accommodation may be made to enable individuals with disabilities to perform essential job functions. Due to access to Protected Healthcare Information, employees in this role must be free of felony convictions on a background check report.

Responsibilities (duties include but are not limited to):
- Design and build ML pipelines for OCR extraction, document image processing, and text classification tasks.
- Fine-tune or prompt large language models (LLMs) (e.g., Qwen, GPT, LLaMA, Mistral) for domain-specific use cases.
- Develop systems to extract structured data from scanned or unstructured documents (PDFs, images, TIFs).
- Integrate OCR engines (Tesseract, EasyOCR, AWS Textract, etc.) and improve their accuracy via pre-/post-processing.
- Handle natural language processing (NLP) tasks such as named entity recognition (NER), summarization, classification, and semantic similarity.
- Collaborate with product managers, data engineers, and backend teams to productionize ML models.
- Evaluate models using metrics like precision, recall, F1-score, and confusion matrix, and improve model robustness and generalizability.
- Maintain proper versioning, reproducibility, and monitoring of ML models in production.

The duties set forth above are essential job functions for the role. Reasonable accommodations may be made to enable individuals with disabilities to perform essential job functions.

Skills and Qualifications:
- 4-5 years of experience in machine learning, NLP, or AI roles.
- Proficiency with Python and ML libraries such as PyTorch, TensorFlow, scikit-learn, Hugging Face Transformers.
- Experience with LLMs (open-source or proprietary), including fine-tuning or prompt engineering.
- Solid experience with OCR tools (Tesseract, PaddleOCR, etc.) and document parsing.
- Strong background in text classification, tokenization, and vectorization techniques (TF-IDF, embeddings, etc.).
- Knowledge of handling unstructured data (text, scanned images, forms).
- Familiarity with MLOps tools: MLflow, Docker, Git, and model serving frameworks.
- Ability to write clean, modular, and production-ready code.
- Experience working with medical, legal, or financial document processing.
- Exposure to vector databases (e.g., FAISS, Pinecone, Weaviate) and semantic search.
- Understanding of document layout analysis (e.g., LayoutLM, Donut, DocTR).
- Familiarity with cloud platforms (AWS, GCP, Azure) and deploying models at scale.
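The TF-IDF vectorization technique named in the skills list can be sketched in a few lines: term frequency weighted by inverse document frequency, so terms common to every document score zero and rare, distinctive terms score high. The documents below are invented for illustration.

```python
import math
from collections import Counter

def tfidf(docs):
    # Term frequency times inverse document frequency, computed by hand.
    tokenized = [doc.lower().split() for doc in docs]
    n = len(tokenized)
    # Document frequency: in how many documents does each term appear?
    df = Counter(term for doc in tokenized for term in set(doc))
    vectors = []
    for doc in tokenized:
        tf = Counter(doc)
        vectors.append({t: (tf[t] / len(doc)) * math.log(n / df[t]) for t in tf})
    return vectors

docs = ["claim denied missing code", "claim approved", "payment approved"]
vecs = tfidf(docs)
assert vecs[0]["missing"] > vecs[0]["claim"]  # rarer term scores higher
```

Production code would normally use scikit-learn's `TfidfVectorizer` instead, but the weighting it applies is this same idea (plus smoothing and normalization options).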

Posted 2 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

karnataka

On-site

As a Senior Information Security Engineer at NTT DATA in Bangalore, Karnataka (IN-KA), India, you will be part of a dynamic team that values exceptional, innovative, and passionate individuals who are eager to grow with us. If you are seeking to join an inclusive, adaptable, and forward-thinking organization, this opportunity is for you.

You should have a minimum of 5 years of experience in IT technology, with at least 2 years of hands-on experience in AI/ML, particularly with a strong working knowledge of neural networks. Additionally, you should possess 2+ years of data engineering experience, preferably using tools such as AWS Glue, Cribl, SignalFx, OpenTelemetry, or AWS Lambda. Proficiency in Python coding, including NumPy, vectorization, and TensorFlow, is essential. Moreover, you must have 2+ years of experience leading complex enterprise-wide integration programs as an individual contributor.

Preferred qualifications for this role include a background in Mathematics or Physics and technical knowledge of cloud technologies like AWS, Azure, or GCP. Excellent verbal, written, and interpersonal communication skills are highly valued, as is the ability to deliver strong customer service.

NTT DATA is a $30 billion global innovator that serves 75% of the Fortune Global 100. As a Global Top Employer, we have a diverse team of experts in over 50 countries and a robust partner ecosystem. Our services encompass business and technology consulting, data and artificial intelligence, industry solutions, and the development, implementation, and management of applications, infrastructure, and connectivity. Join us as we continue to lead in digital and AI infrastructure globally and help organizations navigate confidently into the digital future.

If you are ready to contribute your skills and expertise to a leading technology services provider, apply now and be a part of our journey towards innovation, optimization, and transformation for long-term success. Visit us at us.nttdata.com to learn more about our organization and the exciting opportunities we offer.

Posted 2 weeks ago

Apply

0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Key Responsibilities:
- Lead end-to-end software development of our platform with a focus on backend (Laravel) and frontend (ReactJS)
- Architect, review, and optimize codebases, ensuring best practices in performance, maintainability, and scalability
- Integrate Generative AI (GenAI) solutions, LLMs, and recommendation systems into our existing product roadmap
- Guide implementation of AI-assisted development tools like GitHub Copilot and ensure effective use of MLOps pipelines
- Oversee CI/CD processes using tools such as GitHub Actions and Jenkins; maintain clean DevOps hygiene
- Lead data pipeline development including vectorization, use of VectorDB, and AI model integration (e.g., Mistral, LLaMA 3)
- Mentor and review code from junior developers; build a culture of quality, learning, and innovation
- Conduct feasibility and architectural evaluations for integrating technologies like RAG, vector databases, or MCP protocols
- Collaborate closely with product and design teams to translate ideas into tangible, efficient tech solutions

Must-Have Skills:
- Proven experience in Laravel (PHP) and ReactJS
- Solid knowledge of cloud infrastructure, especially AWS (EC2, RDS, S3, etc.)
- Strong understanding of AI/ML integration: fine-tuning models, vectorization, and AI-based app features
- Hands-on experience with GitHub Copilot or similar AI-powered development tools
- Familiarity with scraping tools, agent-based automation, and handling proxy rotation/IP masking
- Experience with CI/CD pipelines (Jenkins, GitHub Actions)
- Exposure to vector databases (e.g., Pinecone, Weaviate, or similar)
- Solid understanding of software architectural principles, testing frameworks, and secure coding practices

Nice-to-Have Skills:
- Experience with MCP protocols, RAG architecture, or LLM fine-tuning
- Previous work on AI chatbots or recommendation systems
- Familiarity with Python (for backend AI workflows)
- Experience with Scrapy, Composer packages for scraping, or other data ingestion pipelines
- Knowledge of Nginx configuration and proxy management

What You'll Get:
- Opportunity to lead and shape the technical direction of a growing AI-powered platform
- Work in a flat, collaborative, and intellectually curious team
- Exposure to cutting-edge AI tools and research-driven product development
- Flexible work culture with scope for innovation and ownership
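The vector-database exposure mentioned above reduces to one interface: add vectors with payloads, then search by similarity. A tiny in-memory sketch of that interface, not a real Pinecone or Weaviate client (class name and data are invented for illustration):

```python
import numpy as np

class InMemoryVectorStore:
    """A tiny stand-in for a vector database such as Pinecone or Weaviate."""

    def __init__(self, dim):
        self.vectors = np.empty((0, dim))
        self.payloads = []

    def add(self, vector, payload):
        self.vectors = np.vstack([self.vectors, vector])
        self.payloads.append(payload)

    def search(self, query, k=1):
        # Cosine similarity against every stored vector, highest first.
        sims = (self.vectors @ query) / (
            np.linalg.norm(self.vectors, axis=1) * np.linalg.norm(query)
        )
        top = np.argsort(sims)[::-1][:k]
        return [self.payloads[i] for i in top]

store = InMemoryVectorStore(dim=3)
store.add(np.array([1.0, 0.0, 0.0]), "pricing page")
store.add(np.array([0.0, 1.0, 0.0]), "support docs")
assert store.search(np.array([0.9, 0.1, 0.0]))[0] == "pricing page"
```

Real vector databases add persistence, filtering, and approximate-nearest-neighbour indexes on top of this brute-force core.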

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

karnataka

On-site

We are seeking a Senior Python Backend Developer to join our team and take charge of building REST APIs and serverless functions in Azure using Python. As a Senior Python Backend Developer, your main responsibility will be developing high-performance, responsive REST APIs to serve front-end requests. You will collaborate with team members working on various layers of the application and integrate front-end elements provided by co-workers, so a basic understanding of front-end technologies is crucial for this role.

Your duties will include delivering top-quality working software independently, writing secure and efficient code, and designing low-latency, high-availability applications. You will also integrate user-facing elements developed by front-end developers, implement security measures, and ensure data protection. Additionally, the role involves working with various Azure services such as Azure Functions, APIM, Azure Storage, and SQL and NoSQL databases, as well as writing automated tests and integrating with Azure APIM, tracing, and monitoring services.

To excel in this role, you should have experience building Azure Functions with Python, be proficient in Python with knowledge of at least one Python web framework, and be familiar with ORM libraries. You should also be able to integrate multiple data sources and databases into a cohesive system, possess a basic understanding of front-end technologies like JavaScript, HTML5, and CSS3, and have experience with OAuth2/OIDC for securing the backend using Azure AD B2C.

Additionally, expertise in Azure services such as Key Vault, Cost Management, Budgets, Application Insights, Azure Monitor, and VNet; fundamental design principles for scalable applications; database schema creation; unit testing and debugging; Git, Postman, and Swagger/OpenAPI; Gen-AI, LangChain, vectorization, and LLMs; NoSQL databases like MongoDB; REST APIs and microservices; and Azure DevOps for CI/CD will be beneficial for this role.

If you are passionate about backend development, have a knack for collaborative problem-solving, and are committed to delivering high-quality software, we invite you to apply for this exciting opportunity to contribute to our team.

(Note: This job description is sourced from hirist.tech)

Posted 3 weeks ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

Remote

About The Role: PubMatic is looking for engineers with expertise in Generative AI and AI agent development. You will be responsible for building and optimizing advanced AI agents that leverage the latest technologies in Retrieval-Augmented Generation (RAG), vector databases, and large language models (LLMs). You will work on developing state-of-the-art solutions that enhance Generative AI capabilities and enable our platform to handle complex information retrieval, contextual generation, and adaptive interactions.

What You'll Do:
- Be the decision maker for using the right set of tools and technologies to solve specific problems.
- Guide and mentor team members in using generative AI tools and help teams build various agents.
- Provide technical leadership and mentorship to engineering teams while collaborating with architects, product managers, and UX designers to create innovative AI solutions that address complex customer challenges.
- Lead the design, development, and deployment of AI-driven features. Drive end-to-end ownership, from feasibility analysis and design specifications to execution and release, while ensuring quick iterations based on customer feedback in a fast-paced Agile environment.
- Spearhead technical design meetings and produce detailed design documents that outline scalable, secure, and robust AI architectures, ensuring that solutions are aligned with long-term product strategy and technical roadmaps.
- Implement and optimize LLMs for specific use cases, including fine-tuning models, deploying pre-trained models, and evaluating their performance.
- Develop AI agents powered by RAG systems, integrating external knowledge sources to improve the accuracy and relevance of generated content.
- Design, implement, and optimize vector databases (e.g., FAISS, Pinecone, Weaviate) for efficient and scalable vector search, and work on various vector indexing algorithms.
- Create sophisticated prompts and fine-tune them to improve the performance of LLMs in generating precise and contextually relevant responses.
- Utilize evaluation frameworks and metrics (e.g., Evals) to assess and improve the performance of generative models and AI systems.
- Work with data scientists, engineers, and product teams to integrate AI-driven capabilities into customer-facing products and internal tools.
- Stay up to date with the latest research and trends in LLMs, RAG, and generative AI technologies to drive innovation in the company's offerings.
- Continuously monitor and optimize models to improve their performance, scalability, and cost efficiency.

We'd Love for You to Have:
- Strong understanding of large language models and their underlying principles, including transformer architecture and hyper-parameter tuning.
- Proven experience building AI agents with Retrieval-Augmented Generation to enhance model performance using external data sources (documents, databases).
- In-depth knowledge of vector databases and vector indexing algorithms, with experience in technologies like FAISS, Pinecone, Weaviate, or Milvus.
- Ability to craft complex prompts to guide the output of LLMs for specific use cases, enhancing model understanding and contextuality.
- Familiarity with Evals and other performance evaluation tools for measuring model quality, relevance, and efficiency.
- Proficiency in Python and experience with machine learning libraries such as TensorFlow, PyTorch, and Hugging Face Transformers.
- Experience with data preprocessing, vectorization, and handling large-scale datasets.
- Ability to present complex technical ideas and results to both technical and non-technical stakeholders.
- Curiosity to learn new things and stay up to date with market trends in Gen AI technology.

Nice-to-Have:
- Experience building AI agents using graph-based architectures, including knowledge graph embeddings and graph neural networks (GNNs).
- Experience with training small base models using custom data, including data collection, pre-processing, and fine-tuning models for specific domains or tasks.
- Familiarity with deploying AI models on cloud platforms (AWS, GCP, Azure) and containerization technologies (Docker, Kubernetes).
- Publications or contributions to research in AI, LLMs, or related fields.
- Proven record of building enterprise-scale generative AI applications with specific emphasis on accuracy and cost.

Qualifications: Should have a bachelor's degree in engineering (CS/IT) or an equivalent degree from a well-known institute/university.

Additional Information:
Return to Office: PubMatic employees throughout the globe have returned to our offices via a hybrid work schedule (3 days "in office" and 2 days "working remotely") that is intended to maximize collaboration, innovation, and productivity among teams and across functions.
Benefits: Our benefits package includes the best of what leading organizations provide, such as paternity/maternity leave, healthcare insurance, and broadband reimbursement. As well, when we're back in the office, we all benefit from a kitchen loaded with healthy snacks and drinks, catered lunches, and much more!
Diversity and Inclusion: PubMatic is proud to be an equal opportunity employer; we don't just value diversity, we promote and celebrate it. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.

About PubMatic: PubMatic is one of the world's leading scaled digital advertising platforms, offering more transparent advertising solutions to publishers, media buyers, commerce companies and data owners, allowing them to harness the power and potential of the open internet to drive better business outcomes.
Founded in 2006 with the vision that data-driven decisioning would be the future of digital advertising, we enable content creators to run a more profitable advertising business, which in turn allows them to invest back into the multi-screen and multi-format content that consumers demand.
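The "vector indexing algorithms" the posting above refers to all accelerate the same baseline: brute-force (flat) search over every stored vector. A small sketch of that baseline, with random data that is purely illustrative:

```python
import numpy as np

def top_k_neighbors(query, index, k):
    # Brute-force cosine search over a matrix of unit vectors; argpartition
    # finds the k best scores without fully sorting all of them. This is the
    # flat-index baseline that ANN indexes (e.g., in FAISS) are compared to.
    scores = index @ query
    candidates = np.argpartition(-scores, k - 1)[:k]
    return candidates[np.argsort(-scores[candidates])]

rng = np.random.default_rng(0)
index = rng.normal(size=(100, 8))
index /= np.linalg.norm(index, axis=1, keepdims=True)  # unit-normalize rows
query = index[42]  # a stored vector should be its own nearest neighbour
assert top_k_neighbors(query, index, k=3)[0] == 42
```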

Posted 3 weeks ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Summary
Location - Hyderabad
Experience - 4-7 Yrs.

Skills Required:
- Strong programming skills in Python, Java, Spring Boot, or Scala.
- Experience with ML frameworks like TensorFlow, PyTorch, XGBoost, or LightGBM.
- Familiarity with information retrieval techniques (BM25, vector search, learning to rank).
- Knowledge of embedding models, user/item vectorization, or session-based personalization.
- Experience with large-scale distributed systems (e.g., Spark, Kafka, Kubernetes).
- Hands-on experience with real-time ML systems.
- Background in NLP, graph neural networks, or sequence modeling.
- Experience with A/B testing frameworks and metrics like NDCG, MAP, or CTR.
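Of the ranking metrics listed above, NDCG (normalized discounted cumulative gain) is the one with a short closed form: gain discounted by log of rank position, normalized by the ideal ordering. A minimal sketch:

```python
import math

def ndcg(relevances, k=None):
    # Discounted cumulative gain over the given ranking, normalized by the
    # ideal (descending-sorted) ordering of the same relevance labels.
    k = k or len(relevances)
    dcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))
    ideal = sorted(relevances, reverse=True)
    idcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(ideal[:k]))
    return dcg / idcg if idcg > 0 else 0.0

assert ndcg([3, 2, 1]) == 1.0             # already in ideal order
assert ndcg([0, 0, 3]) < ndcg([3, 0, 0])  # burying the relevant item hurts
```

(This uses the classic linear-gain formulation; some implementations use the exponential gain `2**rel - 1` instead.)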

Posted 3 weeks ago

Apply

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Who We Are: Applied Materials is the global leader in materials engineering solutions used to produce virtually every new chip and advanced display in the world. We design, build and service cutting-edge equipment that helps our customers manufacture display and semiconductor chips – the brains of devices we use every day. As the foundation of the global electronics industry, Applied enables the exciting technologies that literally connect our world – like AI and IoT. If you want to work beyond the cutting edge, continuously pushing the boundaries of science and engineering to make possible the next generations of technology, join us to Make Possible® a Better Future.

What We Offer: Location: Bangalore, IND; Chennai, IND. At Applied, we prioritize the well-being of you and your family and encourage you to bring your best self to work. Your happiness, health, and resiliency are at the core of our benefits and wellness programs. Our robust total rewards package makes it easier to take care of your whole self and your whole family. We're committed to providing programs and support that encourage personal and professional growth and care for you at work, at home, or wherever you may go. Learn more about our benefits. You'll also benefit from a supportive work culture that encourages you to learn, develop and grow your career as you take on challenges and drive innovative solutions for our customers. We empower our team to push the boundaries of what is possible, while learning every day in a supportive leading global company. Visit our Careers website to learn more about careers at Applied.

Software Architect

About Applied: Applied Materials is the leader in materials engineering solutions used to produce virtually every new chip and advanced display in the world. Our expertise in modifying materials at atomic levels and on an industrial scale enables customers to transform possibilities into reality. At Applied Materials, our innovations make possible the technology shaping the future.

Our Team: Our team is developing a high-performance computing solution for low-latency, high-throughput image processing and deep-learning workloads that enables our chip manufacturing process control equipment to offer differentiated value to our customers.

Your Opportunity: As an architect, you will get the opportunity to grow in the field of high-performance computing, complex system design, and low-level optimization for better cost of ownership.

Roles and Responsibilities: As a Software Architect, you will be responsible for designing and implementing high-performance computing software solutions for our organization. You will work closely with cross-functional teams, including software engineers, product managers, and business stakeholders, to understand requirements and translate them into architectural and software designs that meet business needs. You will code and develop quick prototypes to validate your design with real code and data, and act as a subject matter expert to unblock software engineers in the HPC domain. You will be expected to profile systems to understand bottlenecks, and to optimize workflows, code, and processes to improve cost of ownership. You will also:
- Conduct technical reviews and provide guidance to software engineers during the development process.
- Identify and mitigate technical risks and issues throughout the software development lifecycle.
- Evaluate and recommend appropriate technologies and frameworks to meet project requirements.
- Lead the design and implementation of complex software components and systems.
- Ensure that software systems are scalable, reliable, and maintainable.
- Mentor and coach junior software architects and engineers.

Your primary focus will be on ensuring that the software systems are scalable, reliable, maintainable, and cost-effective.

Our Ideal Candidate: Someone who has the drive and passion to learn quickly, with the ability to multi-task and switch contexts based on business needs.

Qualifications:
- 7 to 15 years of experience in design and coding in C/C++, preferably in a Linux environment.
- Very good knowledge of data structures, algorithms, and complexity analysis.
- Experience developing distributed high-performance computing software using parallel programming frameworks like MPI, UCX, etc.
- In-depth experience in multi-threading, thread synchronization, inter-process communication, and distributed computing fundamentals.
- Very good knowledge of computer science fundamentals such as operating system internals (Linux preferred), networking, and storage systems.
- Experience in performance profiling at the application and system level (e.g., VTune, OProfile, perf, Nvidia Nsight).
- Experience in low-level code optimization techniques using vectorization and intrinsics, cache-aware programming, lock-free data structures, etc.
- Experience in GPU programming using CUDA, OpenMP, OpenACC, OpenCL, etc.
- Familiarity with microservices architecture, containerization technologies (Docker/Singularity), and low-latency message queues.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration abilities.
- Ability to mentor and coach junior team members.
- Experience in Agile development methodologies.

Additional Qualifications:
- Experience in HPC job-scheduling and cluster-management software (SLURM, Torque, LSF, etc.)
- Good knowledge of low-latency, high-throughput data transfer technologies (RDMA, RoCE, InfiniBand)
- Good knowledge of workflow orchestration software like Apache Airflow, Apache Spark, Apache Storm, or Intel TBB flow graph

Education: Bachelor's degree or higher in Computer Science or related disciplines.
Years of Experience: 7 - 15 years
Additional Information
Time Type: Full time
Employee Type: Assignee / Regular
Travel: Yes, 10% of the Time
Relocation Eligible: Yes
Applied Materials is an Equal Opportunity Employer. Qualified applicants will receive consideration for employment without regard to race, color, national origin, citizenship, ancestry, religion, creed, sex, sexual orientation, gender identity, age, disability, veteran or military status, or any other basis prohibited by law.

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

Remote

Job Title: AI Engineer / Developer
Location: Gurugram
Job Type: Full-Time
Shift: EST shift (7pm IST - 4am IST)

Company Overview:
Serigor Inc is a Maryland-based, CMMI Level 3, Woman-Owned Small Business (WOSB) specializing in IT services, IT staff augmentation, government solutions, and global delivery. Founded in 2009, we are a leading IT services firm that delivers deep expertise, objective insights, a tailored approach, and unparalleled collaboration to help US government agencies and Fortune 500 companies confidently face the future while increasing the efficiency of their current operations. Our professional services focus primarily on an IT services portfolio including, but not limited to, managed IT services, enterprise application development, testing and management consulting, Salesforce, cloud and infrastructure consulting, DevOps consulting, migration consulting, service management, custom implementation, IT operations and maintenance, and remote application and infrastructure monitoring and management practices.

Position Overview:
We are seeking a talented and self-driven AI Engineer / Developer to join our team and contribute to cutting-edge projects involving large language models (LLMs) and document intelligence. This role offers the flexibility to work remotely and can be structured as full-time or part-time, depending on your availability and interest. You will play a critical role in leveraging generative AI to extract structured insights from unstructured content, refine prompt engineering strategies, and build functional prototypes that bridge AI outputs with real-world applications. If you're passionate about NLP, LLMs, and building AI-first solutions, we want to hear from you.

Key Responsibilities:
- Document Intelligence: Leverage large language models (e.g., OpenAI GPT, Anthropic Claude) to analyze and extract meaningful information from various types of documents, including PDFs, contracts, compliance records, and reports.
- Data Structuring: Convert natural-language outputs into structured data formats such as JSON, tables, custom templates, or semantic tags for downstream integration.
- Prompt Engineering: Design, write, and iterate on prompts to ensure high-quality, repeatable, and reliable responses from AI models.
- Tooling & Prototyping: Develop lightweight tools, scripts, and workflows (using Python or similar) to automate, visualize, and test AI interactions.
- Model Evaluation: Run controlled experiments to evaluate the performance of AI-generated outputs, identifying gaps, edge cases, and potential improvements.
- Pipeline Integration: Collaborate with software engineers and product teams to integrate LLM pipelines into broader applications and systems.
- Traceability & Transparency: Ensure each piece of extracted information can be traced back to its original source within the document for auditing and validation purposes.

Required Skills & Qualifications:
- Experience: Minimum of 3 years in AI/ML development, with a strong focus on natural language processing (NLP), document analysis, or conversational AI.
- LLM Expertise: Hands-on experience working with large language models (e.g., GPT-4, Claude, Mistral) and prompt-based interactions.
- Programming Skills: Proficient in Python and experienced with modern AI frameworks such as LangChain, Hugging Face Transformers, or spaCy.
- Document Processing: Knowledge of embeddings, chunking strategies, and vectorization techniques for efficient document indexing and retrieval.
- Vector Databases: Familiarity with FAISS, Chroma, Pinecone, or similar vector DBs for storing and querying embedding data.
- Analytical Mindset: Strong ability to design, run, and interpret structured tests to measure and enhance the accuracy of AI outputs.

Preferred Qualifications:
- RAG Workflows: Experience implementing Retrieval-Augmented Generation (RAG) systems for dynamic document querying and synthesis.
- Domain Exposure: Familiarity with legal, regulatory, or compliance-based documents and the unique challenges they pose.
- LLMOps & Deployment: Exposure to deploying AI models or pipelines, including experience with web APIs, LLMOps tooling, or cloud-native AI environments.
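The chunking, embedding, and retrieval skills this posting names fit together in a simple pattern: split a document into overlapping windows, embed each window, and rank windows by cosine similarity against an embedded query. A toy sketch follows; the hash-seeded `embed` function is a deterministic stand-in for a real embedding model (a production pipeline would call a model and store vectors in FAISS, Chroma, or Pinecone), and all names here are illustrative.

```python
import numpy as np

def chunk(text, size=40, overlap=10):
    # Fixed-size sliding-window chunking with overlap, a common baseline
    # before embedding documents for retrieval.
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def embed(text, dim=64):
    # Stand-in embedding: a deterministic random unit vector seeded by the
    # text. A real pipeline would call an embedding model here.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

def top_k(query, chunks, k=2):
    # Cosine similarity reduces to a dot product on unit vectors.
    q = embed(query)
    scored = sorted(chunks, key=lambda c: float(q @ embed(c)), reverse=True)
    return scored[:k]
```

A RAG system then passes the top-k chunks to the LLM as context, which is also how traceability is achieved: each answer can cite the chunk (and hence the source span) it was generated from.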

Posted 3 weeks ago

Apply

4.0 years

0 Lacs

India

On-site

Job Information
Department Name: Platforms & Compilers
Job Type: Full time
Date Opened: 04/07/2025
Industry: Software Development
Minimum Experience In Years: 4
Maximum Experience In Years: 6
City: Ramapuram
Province: Tamil Nadu
Country: India
Postal Code: 600089

About Us
MulticoreWare is a global software solutions and products company headquartered in San Jose, CA, USA. With worldwide offices, it serves its clients and partners in the North America, EMEA, and APAC regions. Started by a group of researchers, MulticoreWare has grown to serve its clients and partners on HPC and cloud computing, GPUs, multi-core and multi-threaded CPUs, DSPs, FPGAs, and a variety of AI hardware accelerators. MulticoreWare was founded by a team of researchers who wanted a better way to program for heterogeneous architectures. With the advent of GPUs and the increasing prevalence of multi-core, multi-architecture platforms, our clients were struggling with the difficulties of using these platforms efficiently. We started as a bootstrapped services company and have since expanded our portfolio to span products and services related to compilers, machine learning, video codecs, image processing, and augmented/virtual reality. Our hardware expertise has also expanded with our team; we now employ experts on HPC and cloud computing, GPUs, DSPs, FPGAs, and mobile and embedded platforms. We specialize in accelerating software and algorithms, so if your code targets a multi-core, heterogeneous platform, we can help.

Job Description
We are seeking a talented engineer to implement and optimize machine learning, computer vision, and numeric libraries for target hardware architectures, including CPUs, GPUs, DSPs, and other accelerators. Your expertise will be instrumental in enabling efficient, high-performance execution of algorithms on these hardware platforms.
Key Responsibilities:
- Implement and optimize machine learning, computer vision, and numeric libraries for target hardware architectures, including CPUs, GPUs, DSPs, and other accelerators.
- Work closely with software and hardware engineers to ensure optimal performance on target platforms.
- Implement low-level optimizations, including algorithmic modifications, parallelization, vectorization, and memory-access optimizations, to fully leverage the capabilities of the target hardware architectures.
- Work with customers to understand their requirements and implement libraries to meet their needs.
- Develop performance benchmarks and conduct performance analysis to ensure the optimized libraries meet the required performance targets.
- Stay current with the latest advancements in machine learning, computer vision, and high-performance computing.

Qualifications:
- BTech/BE/MTech/ME/MS/PhD degree in CSE/IT/ECE
- More than 4 years of experience in algorithm development, porting, optimization, and testing
- Proficient in programming languages such as C/C++, CUDA, OpenCL, or other languages relevant to hardware optimization.
- Hands-on experience with hardware architectures, including CPUs, GPUs, DSPs, and accelerators, and familiarity with their programming models and optimization techniques.
- Knowledge of parallel computing, SIMD instructions, memory hierarchies, and cache-optimization techniques.
- Experience with performance analysis tools and methodologies for profiling and optimization.
- Knowledge of deep learning frameworks and techniques is a plus.
- Strong problem-solving skills and the ability to work independently or within a team.
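One of the cache-optimization techniques this posting references is loop tiling (cache blocking): processing data in small tiles so that memory recently pulled into cache is reused before being evicted. The role targets C/C++ on real hardware; as an illustrative sketch of the access pattern only, here is a tiled matrix transpose in NumPy (the function name and block size are assumptions for illustration).

```python
import numpy as np

def transpose_blocked(a, block=32):
    # Loop tiling: walk the matrix in block x block tiles so that both the
    # reads from `a` and the writes to `out` touch a compact region of
    # memory, instead of striding across entire rows on every write.
    n, m = a.shape
    out = np.empty((m, n), dtype=a.dtype)
    for i0 in range(0, n, block):
        for j0 in range(0, m, block):
            tile = a[i0:i0 + block, j0:j0 + block]
            out[j0:j0 + block, i0:i0 + block] = tile.T
    return out

a = np.arange(50 * 70).reshape(50, 70)
assert np.array_equal(transpose_blocked(a), a.T)
```

The block size is chosen so a tile of the input and a tile of the output fit in cache together; in an optimized C/C++ library this parameter is tuned per target, which is part of what the porting and optimization work in this role involves.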

Posted 4 weeks ago

Apply