3.0 - 5.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Job Title: RPA Developer (UiPath / Power Automate) – 3–5 Years Experience
Location: Gurugram
Experience: 3–5 Years

Job Summary: We are seeking a skilled and detail-oriented RPA Developer with 3–5 years of experience in automation development, particularly using UiPath and Power Automate. The ideal candidate will have hands-on experience automating Oracle ERP processes and document processing workflows. A strong understanding of OCR technologies, file management automation, and enterprise-grade RPA deployment is essential.

Key Responsibilities:
• Design, develop, and deploy RPA solutions using UiPath Studio, Orchestrator, and other UiPath components.
• Build automation workflows using Power Automate for business process optimization.
• Automate Oracle ERP processes, especially invoice processing and document workflows.
• Implement file management and human task automation solutions.
• Automate Excel-based tasks to ensure data accuracy and operational efficiency.
• Develop advanced web automation using OCR, image search, Textract, keystrokes, and mouse-click automation.
• Configure and maintain UiPath environments across development, testing, and production.
• Extract data from handwritten documents, forms, tables, invoices, receipts, and scanned images using tools like Ephesoft, ABBYY FlexiCapture, and Microsoft Form Recognizer.
• Collaborate with cross-functional teams to identify automation opportunities and deliver scalable solutions.
• Ensure high-quality documentation and adherence to best practices in RPA development.

Required Skills & Qualifications:
• 3–5 years of hands-on experience in RPA development using UiPath and Power Automate.
• Proficiency in Oracle ERP systems, especially in invoice and document processing.
• Strong knowledge of document processing tools and OCR technologies.
• Experience with Ephesoft, ABBYY FlexiCapture, and Microsoft Form Recognizer.
• Expertise in Excel automation and web-based automation techniques.
• Solid understanding of RPA deployment, monitoring, and maintenance.
• Excellent problem-solving skills and attention to detail.
• Strong communication and collaboration abilities.

Preferred Qualifications:
• UiPath RPA Developer Certification.
• Experience in enterprise-level RPA deployments.
• Familiarity with Agile/Scrum methodologies.
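The invoice-processing work described above typically pairs OCR with rule-based field extraction. As a minimal sketch, assuming illustrative field labels and patterns (not any particular ERP's actual document format):

```python
import re

# Hypothetical patterns for pulling an invoice number and a total from raw
# OCR text. The labels ("Invoice No", "Total Due") and formats are assumptions
# for illustration; real documents would need tuned, per-template rules.
INVOICE_NO = re.compile(r"Invoice\s*(?:No\.?|#)\s*[:\-]?\s*([A-Z0-9\-]+)", re.IGNORECASE)
TOTAL = re.compile(r"Total\s*(?:Due)?\s*[:\-]?\s*(?:INR|Rs\.?|₹)?\s*([\d,]+\.\d{2})", re.IGNORECASE)

def extract_invoice_fields(ocr_text: str) -> dict:
    """Return whatever fields the patterns can find; missing fields stay None."""
    number = INVOICE_NO.search(ocr_text)
    total = TOTAL.search(ocr_text)
    return {
        "invoice_number": number.group(1) if number else None,
        "total": float(total.group(1).replace(",", "")) if total else None,
    }

sample = "Invoice No: INV-2024-0042\nVendor: Acme Ltd\nTotal Due: INR 12,500.00"
print(extract_invoice_fields(sample))
```

In practice an RPA workflow would run such a rule set only as a fallback or cross-check alongside a dedicated extraction tool like the ones the posting lists.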
Posted 1 month ago
4.0 years
0 Lacs
Vadodara, Gujarat, India
On-site
Role & Responsibilities
• 4+ years of experience applying AI to practical uses.
• Develop and train computer vision models for tasks like: object detection and tracking (YOLO, Faster R-CNN, etc.); image classification, segmentation, OCR (e.g., PaddleOCR, Tesseract); face recognition/blurring, anomaly detection, etc.
• Optimize models for performance on edge devices (e.g., NVIDIA Jetson, OpenVINO, TensorRT).
• Process and annotate image/video datasets; apply data augmentation techniques.
• Proficiency in Large Language Models.
• Strong understanding of statistical analysis and machine learning algorithms.
• Hands-on experience implementing machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering.
• Understanding of image processing concepts (thresholding, contour detection, transformations, etc.).
• Experience in model optimization, quantization, or deploying to edge devices (Jetson Nano/Xavier, Coral, etc.).
• Strong programming skills in Python (or C++), with expertise in OpenCV, NumPy, PyTorch/TensorFlow, and computer vision models like YOLOv5/v8, Mask R-CNN, DeepSORT.
• Implement and optimize machine learning pipelines and workflows for seamless integration into production systems.
• Hands-on experience with at least one real-time CV application (e.g., surveillance, retail analytics, industrial inspection, AR/VR).
• Engage with multiple teams and contribute to key decisions; provide solutions to problems that apply across multiple teams.
• Lead the implementation of large language models in AI applications.
• Research and apply cutting-edge AI techniques to enhance system performance.
• Contribute to the development and deployment of AI solutions across various domains.

Requirements
• Design, develop, and deploy ML models for: OCR-based text extraction from scanned documents (PDFs, images); table and line-item detection in invoices, receipts, and forms; named entity recognition (NER) and information classification.
• Evaluate and integrate third-party OCR tools (e.g., Tesseract, Google Vision API, AWS Textract, Azure OCR, PaddleOCR, EasyOCR).
• Develop pre-processing and post-processing pipelines for noisy image/text data.
• Familiarity with video analytics platforms (e.g., DeepStream, Streamlit-based dashboards).
• Experience with MLOps tools (MLflow, ONNX, Triton Inference Server).
• Background in academic CV research or published papers.
• Knowledge of GPU acceleration, CUDA, or hardware integration (cameras, sensors).
(ref:hirist.tech)
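The post-processing pipelines for noisy image/text data mentioned above often begin with simple, deterministic cleanup rules. A minimal sketch, assuming the common digit/letter confusions seen in scanned documents (the specific substitutions are illustrative, not exhaustive):

```python
import re

# Deterministic cleanup for noisy OCR output. Real pipelines would tune these
# rules per document class and OCR engine; this is a hedged illustration only.
def clean_ocr_numeric(token: str) -> str:
    """Fix O/0, l/1, I/1, S/5 confusions inside tokens that look mostly numeric."""
    digits = sum(c.isdigit() for c in token)
    if digits >= len(token) / 2:  # treat as a numeric field
        return token.translate(str.maketrans({"O": "0", "o": "0", "l": "1", "I": "1", "S": "5"}))
    return token

def normalize_whitespace(text: str) -> str:
    """Collapse runs of spaces/tabs left behind by layout-aware OCR."""
    return re.sub(r"[ \t]+", " ", text).strip()

print(clean_ocr_numeric("1O0"))
print(normalize_whitespace("Total   Due:\t 42"))
```

Rules like these sit after the OCR engine and before any NER or classification model, so downstream components see consistent input.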
Posted 1 month ago
4.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems.

PwC US - Acceleration Center is seeking a highly skilled and experienced GenAI Data Scientist to join our team at Senior Associate level. As a GenAI Data Scientist, you will play a critical role in developing and implementing machine learning models and algorithms for our GenAI projects. The ideal candidate should have a strong background in data science, with a focus on GenAI technologies, and possess a solid understanding of statistical analysis, machine learning, data visualization, and application programming.

Years of Experience: Candidates with 4+ years of hands-on experience

Responsibilities
• Collaborate with cross-functional teams to understand business requirements and identify opportunities for applying GenAI technologies.
• Develop and implement machine learning models and algorithms for GenAI projects.
• Perform data cleaning, preprocessing, and feature engineering to prepare data for analysis.
• Collaborate with data engineers to ensure efficient data processing and integration into machine learning pipelines.
• Validate and evaluate model performance using appropriate metrics and techniques.
• Develop and deploy production-ready machine learning applications and solutions.
• Utilize object-oriented programming skills to build robust and scalable software components.
• Utilize Kubernetes for container orchestration and deployment.
• Design and build chatbots using GenAI technologies.
• Communicate findings and insights to stakeholders through data visualizations, reports, and presentations.
• Stay up-to-date with the latest advancements in GenAI technologies and recommend innovative solutions to enhance data science processes.

Requirements
• 3-5 years of relevant technical/technology experience, with a focus on GenAI projects.
• Strong programming skills in languages such as Python, R, or Scala.
• Proficiency in machine learning libraries and frameworks such as TensorFlow, PyTorch, or scikit-learn.
• Experience with data preprocessing, feature engineering, and data wrangling techniques.
• Solid understanding of statistical analysis, hypothesis testing, and experimental design.
• Familiarity with cloud computing platforms such as AWS, Azure, or Google Cloud.
• Knowledge of data visualization tools and techniques.
• Strong problem-solving and analytical skills.
• Excellent communication and collaboration abilities.
• Ability to work in a fast-paced and dynamic environment.

Preferred Qualifications
• Experience with object-oriented programming languages such as Java, C++, or C#.
• Experience developing and deploying machine learning applications in production environments.
• Understanding of data privacy and compliance regulations.
• Relevant certifications in data science or GenAI technologies.

Nice To Have Skills
• Experience with Azure AI Search, Azure Doc Intelligence, Azure OpenAI, AWS Textract, AWS OpenSearch, AWS Bedrock.
• Familiarity with LLM-backed agent frameworks such as AutoGen, LangChain, Semantic Kernel, etc.
• Experience in chatbot design and development.

Professional And Educational Background: Any graduate / BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master's Degree / MBA
Posted 1 month ago
3.0 years
0 Lacs
India
On-site
Job Title: Junior-Strong / Mid-Level Python Developer

We need an enthusiastic Python developer who already codes confidently but wants to deepen their cloud and data-engineering skill-set alongside experienced Architects and AI specialists.

The Tech You'll Touch
• Python 3.10+ (typing, dataclasses, pathlib, asyncio where useful)
• AWS serverless – Lambda, Step Functions, S3, DynamoDB, SES, SNS, SQS, Glue jobs, CloudWatch
• Data tooling – pandas, pyarrow, boto3, SQLAlchemy, Pydantic, pytest
• APIs & IaC – AWS API Gateway/Lambda-proxy; Terraform or AWS SAM/CDK (guided by Cloud Engineer)
• Nice to have / you can learn on the job – Amazon Textract/Bedrock, QuickSight dashboards, Docker, basic ML familiarity for future phases.

About You
• 0–3 years' hands-on Python building production services or data pipelines (not just scripts).
• Comfortable translating JSON/CSV/XLSX inputs into validated domain objects and relational/NoSQL schemas.
• Some real AWS exposure – you have deployed or at least prototyped Lambda + S3 solutions and understand IAM basics.
• Solid grasp of Git workflow and testing; you care about readable, maintainable code.
• Analytical mindset, able to work from high-level specs and iterate quickly in an agile team.
• Bonus: experience with pricing, trading, supply-chain or e-commerce data; LLM / NLP curiosity; exposure to event-driven or serverless architectures at scale.

Compensation: ₹80,000 - ₹1,00,000 per month
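The "translating JSON inputs into validated domain objects" skill above can be sketched with nothing but the standard library; the Order schema here is invented purely for illustration:

```python
import json
from dataclasses import dataclass

# Hypothetical domain object: validate on construction so bad records fail
# early, before they reach a relational or NoSQL schema.
@dataclass(frozen=True)
class Order:
    order_id: str
    quantity: int
    unit_price: float

    def __post_init__(self):
        if self.quantity <= 0:
            raise ValueError("quantity must be positive")
        if self.unit_price < 0:
            raise ValueError("unit_price must be non-negative")

def parse_order(raw: str) -> Order:
    """Coerce untyped JSON into a typed, validated Order."""
    data = json.loads(raw)
    return Order(
        order_id=str(data["order_id"]),
        quantity=int(data["quantity"]),
        unit_price=float(data["unit_price"]),
    )

order = parse_order('{"order_id": "A-17", "quantity": 3, "unit_price": 9.5}')
print(order.quantity * order.unit_price)
```

In the stack this posting describes, Pydantic models would usually replace the hand-rolled `__post_init__` checks, but the shape of the work is the same.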
Posted 1 month ago
4.0 years
3 - 5 Lacs
Vadodara
On-site
Role & Responsibilities
• 4+ years of experience applying AI to practical uses.
• Develop and train computer vision models for tasks like: object detection and tracking (YOLO, Faster R-CNN, etc.); image classification, segmentation, OCR (e.g., PaddleOCR, Tesseract); face recognition/blurring, anomaly detection, etc.
• Optimize models for performance on edge devices (e.g., NVIDIA Jetson, OpenVINO, TensorRT).
• Process and annotate image/video datasets; apply data augmentation techniques.
• Proficiency in Large Language Models.
• Strong understanding of statistical analysis and machine learning algorithms.
• Hands-on experience implementing machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering.
• Understanding of image processing concepts (thresholding, contour detection, transformations, etc.).
• Experience in model optimization, quantization, or deploying to edge devices (Jetson Nano/Xavier, Coral, etc.).
• Strong programming skills in Python (or C++), with expertise in OpenCV, NumPy, PyTorch/TensorFlow, and computer vision models like YOLOv5/v8, Mask R-CNN, DeepSORT.
• Implement and optimize machine learning pipelines and workflows for seamless integration into production systems.
• Hands-on experience with at least one real-time CV application (e.g., surveillance, retail analytics, industrial inspection, AR/VR).
• Engage with multiple teams and contribute to key decisions; provide solutions to problems that apply across multiple teams.
• Lead the implementation of large language models in AI applications.
• Research and apply cutting-edge AI techniques to enhance system performance.
• Contribute to the development and deployment of AI solutions across various domains.

Requirements
• Design, develop, and deploy ML models for: OCR-based text extraction from scanned documents (PDFs, images); table and line-item detection in invoices, receipts, and forms; named entity recognition (NER) and information classification.
• Evaluate and integrate third-party OCR tools (e.g., Tesseract, Google Vision API, AWS Textract, Azure OCR, PaddleOCR, EasyOCR).
• Develop pre-processing and post-processing pipelines for noisy image/text data.
• Familiarity with video analytics platforms (e.g., DeepStream, Streamlit-based dashboards).
• Experience with MLOps tools (MLflow, ONNX, Triton Inference Server).
• Background in academic CV research or published papers.
• Knowledge of GPU acceleration, CUDA, or hardware integration (cameras, sensors).
Posted 1 month ago
10.0 years
0 Lacs
India
On-site
Position title: Document Processing & Automation Specialist
Employment Type: Contract
Duration: 3 Months – C2H
Shift Timings: 12PM–9PM

About Client
The client is a comprehensive healthcare organization that specializes in providing on-site medical and dental services to residents in long-term care, skilled nursing, and assisted living facilities. Their diverse network comprises skilled doctors, providers, industry-expert executives, and dedicated support staff, all committed to delivering high-quality care to underserved populations.

Job Description
We are seeking a highly motivated and detail-oriented Document Processing & Automation Specialist with expertise in computer vision, particularly using OpenCV. In this role, you will be instrumental in enhancing the accuracy, reliability, and scalability of our automated document analysis systems. Your contributions will directly impact operational efficiency, data integrity, and the success of our broader digital transformation initiatives. This is a unique opportunity to apply your technical proficiency to real-world challenges and drive meaningful innovation within our organization.

Job Responsibilities
• Design, develop, and optimize computer vision algorithms for document classification, segmentation, and data extraction.
• Work extensively with OpenCV and other image processing libraries to detect regions of interest, remove noise, and improve OCR readiness.
• Conduct performance tuning and error analysis to improve the accuracy and throughput of the document automation system.
• Utilize OpenAI, Azure OpenAI, or similar LLM APIs for intelligent content extraction, reasoning, and validation from OCR text.
• Fine-tune and optimize AI models using prompt engineering, few-shot examples, and structured input/output patterns.
• Assist in deploying models to production environments and monitor their performance.
• Monitor and analyze system performance, applying continuous improvements to ensure high accuracy and throughput.

Total Experience: 10+ years | Min Experience: 8 years | Domain: Healthcare | Relevant Experience: 5 years

Must have skills
• Proficiency with OpenCV for tasks like segmentation, denoising, bounding box detection, and contour analysis.
• Strong understanding of image preprocessing for OCR readiness.
• Python (advanced level) with experience in NumPy, Pandas, and file I/O operations.
• Ability to write clean, modular, and testable code.
• Experience working with at least one OCR engine such as Tesseract, Azure OCR, AWS Textract, or Google Vision.
• Hands-on experience with OpenAI APIs (e.g., GPT-3.5, GPT-4) or Azure OpenAI Service.
• Prompt engineering skills for structured and unstructured data extraction.

Nice to have skills
• Basic NLP techniques (e.g., entity recognition, regex-based extraction, text cleaning).
• Experience with tools like spaCy, HuggingFace, or LangChain.
• Exposure to training or fine-tuning machine learning models (e.g., SVM, CNN, BERT).
• Experience deploying solutions on Azure, AWS, or GCP.
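The "structured input/output patterns" this role mentions usually mean asking the LLM for JSON and validating it before it enters downstream systems. A minimal sketch; the field names and the sample reply below are assumptions for illustration, and no API is actually called:

```python
import json

# Hypothetical required fields for a healthcare document-extraction reply.
REQUIRED = {"patient_name", "date_of_service", "total_amount"}

def validate_extraction(llm_reply: str) -> dict:
    """Parse the model's JSON reply and fail loudly on missing fields."""
    data = json.loads(llm_reply)
    missing = REQUIRED - data.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return data

# A stand-in for what an LLM extraction call might return.
reply = '{"patient_name": "J. Doe", "date_of_service": "2024-03-01", "total_amount": "180.00"}'
record = validate_extraction(reply)
print(record["total_amount"])
```

Failing loudly at this boundary is what makes the "continuous improvements to ensure high accuracy" responsibility measurable: every rejected reply is a logged, countable error case.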
Posted 1 month ago
15.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Overview
We don’t simply hire employees. We invest in them. When you work at Chatham, we empower you — offering professional development opportunities to help you grow in your career, no matter if you've been here for five months or 15 years. Chatham has worked hard to create a distinct work environment that values people, teamwork, integrity, and client service. You will have immediate opportunities to partner with talented subject matter experts, work on complex projects, and contribute to the value Chatham delivers every day.

As a Manager of the Loan Data Extraction team specializing in institutional real estate clients, your primary responsibility will be to manage the team who will review and extract data from various types of real estate source documents, such as loan agreements, promissory notes, and guarantees, as a pivotal process in modeling debt portfolios for our clients. You will use your expertise to ensure data is complete, accurate, and timely. You should have a background in real estate investment or data management. You should also have exceptional attention to detail, with the ability to identify and resolve discrepancies or errors in data, as well as strong analytical skills with the ability to review and extract data from various types of real estate source documents. You will report to the Managing Director – India.

In This Role You Will
• Lead the Loan Data Extraction team, who will review and extract information from various types of real estate source documents, such as loan agreements and promissory notes, to model loan cashflows, extension details, and prepayment optionality.
• Collaborate with internal team members and other stakeholders to ensure that projects and deliverables are completed on time and to the satisfaction of clients.
• Communicate effectively with internal team members and other stakeholders, using strong verbal and written communication skills to convey complex ideas and information associated with the data extraction and quality assurance process.
• Complete internal training modules to gain critical skills and knowledge needed to complete extraction responsibilities efficiently and effectively.
• Create and monitor Quality metrics and ensure employee feedback is objective, based on SMART goals.
• Create and maintain updated documentation: Standard Operating Procedures, Process Maps, Defect Definitions, and Training Materials.
• Focus on process improvement and automation initiatives.

Your Impact
As Manager, you will oversee the Loan Data Extraction process for one or more clients, ensuring that institutional real estate investors receive high-quality, accurate, and timely data solutions. Your leadership will be critical in managing the team’s performance, driving improvements in processes, and ensuring that all deliverables meet the high standards expected by our clients.

Contributors To Your Success
• Post Graduate degree in Commerce, Accounting, Finance, or related fields.
• 10+ years of experience in financial document processing, credit analysis, loan operations, or a similar field.
• Proven experience leading a team and managing extraction or operations projects.
• Strong understanding of loan structures, credit agreements, and key financial covenants.
• Familiarity with AI/ML tools used for data extraction (e.g., AWS Textract, Google Document AI, Kira, Hyperscience) is a strong advantage.
• Leadership – ability to lead and mentor a team while ensuring quality and adherence to processes.
• Attention to Detail – precision is critical when extracting loan terms, interest rates, borrower details, and covenants to avoid costly errors.
• Understanding of Loan Documents – familiarity with credit agreements, promissory notes, and term sheets helps in accurately identifying and interpreting relevant data.
• Data Entry Speed and Accuracy – efficiently inputting data into systems without mistakes ensures smooth downstream processing and compliance.
• Critical Thinking & Pattern Recognition – spotting inconsistencies, missing information, or potential red flags requires an analytical mindset.
• Effective Communication Skills – ability to convey complex ideas and information (verbally or in writing) to internal team members and other stakeholders.
• Real Estate Familiarity – experience working with institutional real estate data or clients is a plus.

About Chatham Financial
Chatham Financial is the largest independent financial risk management advisory and technology firm. A leader in debt and derivative solutions, Chatham provides clients with access to in-depth knowledge, innovative tools, and an incomparable team of over 750 employees to help mitigate risks associated with interest rate, foreign currency, and commodity exposures. Founded in 1991, Chatham serves more than 3,500 companies across a wide range of industries — handling over $1.5 trillion in transaction volume annually and helping businesses maximize their value in the capital markets, every day. To learn more, visit chathamfinancial.com. Chatham Financial is an equal opportunity employer.
Posted 1 month ago
3.0 years
5 - 25 Lacs
India
On-site
Full Stack AI Developer

Position Summary
We are seeking a skilled Full Stack AI Developer with 3+ years of experience to join our dynamic development team. The ideal candidate will have expertise in both frontend and backend development, with specialized knowledge in building intelligent agent-based systems and user interfaces that deliver seamless AI-powered experiences.

Key Responsibilities
• Design and develop end-to-end AI agent-based solutions spanning backend intelligence and frontend user experiences.
• Build scalable backend systems using Python and modern frameworks for AI processing and data management.
• Integrate with external APIs, including Microsoft Graph API and other third-party services.
• Implement robust API security measures using OAuth and other authentication protocols.
• Create and maintain comprehensive API documentation following OpenAPI standards.
• Develop interactive dashboards and user interfaces that showcase AI capabilities.
• Collaborate with cross-functional teams to translate business requirements into full-stack technical solutions.
• Implement data processing pipelines, machine learning models, and vector-based search systems.
• Deploy and maintain applications using cloud services and containerization.
• Participate in code reviews and maintain high-quality coding standards across the entire stack.
• Continuously learn and adopt new technologies to enhance full-stack development capabilities.

Experience
• 3+ years of full-stack software development experience.
• Proven experience in both frontend and backend development with real-world project implementations.

Technical Skills – Backend Development
• Python: strong proficiency with hands-on development experience.
• FastAPI: experience building REST APIs and web services.
• PyMongo: working knowledge of MongoDB integration.
• Vector Stores & Embeddings: experience with vector databases and generating embeddings in MongoDB.
• Pandas: data manipulation and analysis capabilities.
• AWS Services: AWS CLI, Textract, S3, Lambda.

Frontend Development
• Web Technologies: proficiency in HTML, CSS, JavaScript, and ReactJS for frontend development.

API & Integration
• API Integration: experience with external APIs, particularly Microsoft Graph API.
• API Security: knowledge of OAuth and other authentication/authorization technologies for securing APIs.
• API Documentation: experience generating and maintaining API documentation using the OpenAPI standard.

Development Tools
• Version Control: GitHub or GitLab for collaborative development.
• Docker: containerization and deployment experience.

Job Type: Full-time
Pay: ₹500,000.00 - ₹2,500,000.00 per year
Work Location: In person
Posted 1 month ago
4.0 - 5.0 years
6 - 8 Lacs
Gurgaon
On-site
Project description
We are looking for a skilled Document AI / NLP Engineer to develop intelligent systems that extract meaningful data from documents such as PDFs, scanned images, and forms. In this role, you will build document processing pipelines using OCR and NLP technologies, fine-tune ML models for tasks like entity extraction and classification, and integrate those solutions into scalable cloud-based applications. You will collaborate with cross-functional teams to deliver high-performance, production-ready pipelines and stay up to date with advancements in the document understanding and machine learning space.

Responsibilities
• Design, build, and optimize document parsing pipelines using tools like Amazon Textract, Azure Form Recognizer, or Google Document AI.
• Perform data preprocessing, labeling, and annotation for training machine learning and NLP models.
• Fine-tune or train models for tasks such as Named Entity Recognition (NER), text classification, and layout understanding using PyTorch, TensorFlow, or HuggingFace Transformers.
• Integrate document intelligence capabilities into larger workflows and applications using REST APIs, microservices, and cloud components (e.g., AWS Lambda, S3, SageMaker).
• Evaluate model and OCR accuracy, applying post-processing techniques or heuristics to improve precision and recall.
• Collaborate with data engineers, DevOps, and product teams to ensure solutions are robust, scalable, and meet business KPIs.
• Monitor, debug, and continuously enhance deployed document AI solutions.
• Maintain up-to-date knowledge of industry trends in OCR, Document AI, NLP, and machine learning.

Skills – Must have
• 4-5 years of hands-on experience in machine learning, document AI, or NLP-focused roles.
• Strong expertise in OCR tools and frameworks, especially Amazon Textract, Azure Form Recognizer, Google Document AI, or open-source tools like Tesseract, LayoutLM, or PaddleOCR.
• Solid programming skills in Python and familiarity with ML/NLP libraries: scikit-learn, spaCy, transformers, PyTorch, TensorFlow, etc.
• Experience working with structured and unstructured data formats, including PDF, images, JSON, and XML.
• Hands-on experience with REST APIs, microservices, and integrating ML models into production pipelines.
• Working knowledge of cloud platforms, especially AWS (S3, Lambda, SageMaker) or their equivalents.
• Understanding of NLP techniques such as NER, text classification, and language modeling.
• Strong debugging, problem-solving, and analytical skills.
• Clear verbal and written communication skills for technical and cross-functional collaboration.

Nice to have: N/A
Other Languages: English B2 Upper Intermediate
Seniority: Senior
Location: Gurugram, India
Req. VR-116250 | AI/ML | BCM Industry | 29/07/2025
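The "post-processing techniques or heuristics to improve precision and recall" mentioned above are often as simple as normalizing extracted values before evaluation. A small sketch for dates, assuming an illustrative (not exhaustive) list of formats:

```python
from datetime import datetime
from typing import Optional

# Formats commonly seen on invoices and forms; an assumption for illustration.
FORMATS = ("%d/%m/%Y", "%d-%m-%Y", "%B %d, %Y", "%d %b %Y")

def normalize_date(raw: str) -> Optional[str]:
    """Map a date string in any accepted format to ISO 8601, else None."""
    for fmt in FORMATS:
        try:
            return datetime.strptime(raw.strip(), fmt).date().isoformat()
        except ValueError:
            continue
    return None  # leave unparseable values for manual review

print(normalize_date("29/07/2025"))
print(normalize_date("July 29, 2025"))
```

Normalizing both predictions and ground truth this way prevents format mismatches from being counted as extraction errors, which is exactly how such heuristics lift measured precision and recall.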
Posted 1 month ago
3.0 years
10 - 25 Lacs
India
On-site
Full Stack AI Developer

Summary
We are seeking a skilled Full Stack AI Developer with 3+ years of experience to join our dynamic development team. The ideal candidate will have expertise in both frontend and backend development, with specialized knowledge in building intelligent agent-based systems and user interfaces that deliver seamless AI-powered experiences.

Responsibilities
• Design and develop end-to-end AI agent-based solutions spanning backend intelligence and frontend user experiences.
• Build scalable backend systems using Python and modern frameworks for AI processing and data management.
• Integrate with external APIs, including Microsoft Graph API and other third-party services.
• Implement robust API security measures using OAuth and other authentication protocols.
• Create and maintain comprehensive API documentation following OpenAPI standards.
• Develop interactive dashboards and user interfaces that showcase AI capabilities.
• Collaborate with cross-functional teams to translate business requirements into full-stack technical solutions.
• Implement data processing pipelines, machine learning models, and vector-based search systems.
• Deploy and maintain applications using cloud services and containerization.
• Participate in code reviews and maintain high-quality coding standards across the entire stack.
• Continuously learn and adopt new technologies to enhance full-stack development capabilities.

Experience
• 3+ years of full-stack software development experience.
• Proven experience in both frontend and backend development with real-world project implementations.

Technical Skills – Backend Development
• Python: strong proficiency with hands-on development experience.
• FastAPI: experience building REST APIs and web services.
• PyMongo: working knowledge of MongoDB integration.
• Vector Stores & Embeddings: experience with vector databases and generating embeddings in MongoDB.
• Pandas: data manipulation and analysis capabilities.
• AWS Services: AWS CLI, Textract, S3, Lambda.

Frontend Development
• Web Technologies: proficiency in HTML, CSS, JavaScript, and ReactJS for frontend development.

API & Integration
• API Integration: experience with external APIs, particularly Microsoft Graph API.
• API Security: knowledge of OAuth and other authentication/authorization technologies for securing APIs.
• API Documentation: experience generating and maintaining API documentation using the OpenAPI standard.

Development Tools
• Version Control: GitHub or GitLab for collaborative development.
• Docker: containerization and deployment experience.

Job Type: Full-time
Pay: ₹1,000,000.00 - ₹2,500,000.00 per year
Work Location: In person
Posted 1 month ago
12.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Qualification BTech degree in computer science, engineering or related field of study or 12+ years of related work experience 7+ years design & implementation experience with large scale data centric distributed applications Professional experience architecting, operating cloud-based solutions with good understanding of core disciplines like compute, networking, storage, security, databases etc. Good understanding of data engineering concepts like storage, governance, cataloging, data quality, data modeling etc. Good understanding about various architecture patterns like data lake, data lake house, data mesh etc. Good understanding of Data Warehousing concepts, hands-on experience working with tools like Hive, Redshift, Snowflake, Teradata etc. Experience migrating or transforming legacy customer solutions to the cloud. Experience working with services like AWS EMR, Glue, DMS, Kinesis, RDS, Redshift, Dynamo DB, Document DB, SNS, SQS, Lambda, EKS, Data Zone etc. Thorough understanding of Big Data ecosystem technologies like Hadoop, Spark, Hive, HBase etc. and other competent tools and technologies Understanding in designing analytical solutions leveraging AWS cognitive services like Textract, Comprehend, Rekognition etc. in combination with Sagemaker is good to have. Experience working with modern development workflows, such as git, continuous integration/continuous deployment pipelines, static code analysis tooling, infrastructure-as-code, and more. Experience with a programming or scripting language – Python/Java/Scala AWS Professional/Specialty certification or relevant cloud expertise Role Drive innovation within Data Engineering domain by designing reusable and reliable accelerators, blueprints, and libraries. Capable of leading a technology team, inculcating innovative mindset and enable fast paced deliveries. Able to adapt to new technologies, learn quickly, and manage high ambiguity. 
Ability to work with business stakeholders and attend/drive various architectural, design, and status calls with multiple stakeholders. Exhibit good presentation skills, with a high degree of comfort speaking with executives, IT management, and developers. Drive technology/software sales or pre-sales consulting discussions. Ensure end-to-end ownership of all assigned tasks. Ensure high-quality software development with complete documentation and traceability. Fulfil organizational responsibilities (sharing knowledge and experience with other teams/groups). Conduct technical trainings/sessions and write whitepapers, case studies, blogs, etc. Experience 10 to 18 years Job Reference Number 12895
Posted 1 month ago
20.0 years
0 Lacs
Gurugram, Haryana, India
Remote
Position: AI Engineer (Python, LLMs, Vector Databases, OCR, Image Generation) Location: Remote (Delhi or Bengaluru) About our Company We are an advanced Learning Management System (LMS) and authoring tool provider, built to meet the diverse training and organizational needs of businesses across industries. Operating under LM Solutions India Pvt. Ltd., we are part of SVI, a leading learning solutions provider headquartered in Texas, USA. With over 20 years of experience, we have established strong partnerships with clients in consulting, healthcare, retail, banking, aviation, and more. Our small but growing team of skilled professionals works closely with global clients to create engaging learning experiences and implement effective talent development strategies that drive real business outcomes. We are also at the forefront of integrating AI into learning solutions. Our technology stack is built on Python, leveraging LlamaIndex to power our retrieval-augmented generation (RAG) pipelines with OpenAI and other large language models (LLMs). We work extensively with vector databases, optical character recognition (OCR) tools, and AI-driven image generation to transform document-based content into structured, interactive micro-learning experiences. While we don’t host or train our own AI models, we focus on seamlessly integrating best-in-class AI services into our workflows. As we continue to scale, we are seeking passionate individuals to help us enhance and expand our AI-powered capabilities. Role & Responsibilities · Enhance and optimize our RAG pipeline, integrating various LLMs via OpenAI and other providers. · Work with vector databases (e.g., Pinecone, FAISS, Weaviate, Milvus) to store and retrieve embeddings efficiently. · Implement OCR solutions to extract text from images (e.g., Tesseract, Azure/AWS OCR, Google Vision). · Integrate image generation models (e.g., Stable Diffusion, OpenAI DALL·E) into our content pipeline. 
· Improve document parsing and content structuring using LlamaIndex. · Work within a cloud-based AI infrastructure (no on-prem hosting). · Collaborate with our product team to create AI-enhanced interactive learning experiences. · Ensure performance, scalability, and security of AI-driven workflows. Requirements · Proficiency in Python and experience with AI/ML libraries (e.g., llama-index, langchain, transformers). · Hands-on experience with vector databases for similarity search. · Strong background in OCR technologies (e.g., Tesseract, AWS Textract, Google Vision, OpenCV). · Familiarity with image generation models (e.g., OpenAI DALL·E, Stability AI). · Experience integrating LLMs via APIs (OpenAI, Anthropic, etc.). · Comfortable working with cloud-based AI services (AWS, Azure, GCP). · Strong problem-solving skills and the ability to work independently. Nice-to-Have · Experience with document processing pipelines (e.g., PDFs, Word documents). · Background in AI-powered content recommendation or learning platforms. · Familiarity with serverless architectures (e.g., AWS Lambda, Azure Functions).
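The retrieval step at the heart of the RAG pipeline described above reduces to a similarity search over stored embeddings, which is what a vector database (Pinecone, FAISS, Weaviate) performs at scale. A minimal sketch of that core, assuming toy 3-dimensional vectors standing in for real embedding-model output:

```python
import math

# Cosine-similarity top-k retrieval over an in-memory "index" of embeddings.
# Document ids and vectors below are illustrative toy data.

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def top_k(query, index, k=2):
    """Return the k document ids whose embeddings are most similar to the query."""
    scored = sorted(index.items(), key=lambda kv: cosine(query, kv[1]), reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

index = {
    "course-intro": [0.9, 0.1, 0.0],
    "safety-module": [0.0, 1.0, 0.2],
    "faq": [0.8, 0.3, 0.1],
}
hits = top_k([1.0, 0.2, 0.0], index, k=2)
```

A production system replaces the dictionary with an approximate-nearest-neighbour index and feeds the retrieved documents into the LLM prompt; the ranking logic is the same.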
Posted 1 month ago
7.0 years
4 - 9 Lacs
Noida
On-site
Role Description Position: Process Analyst / Discovery Analyst Experience Required: Minimum 7 years Key Responsibilities Support discovery initiatives for automation projects by gathering documentation, data, and analysis, under the guidance of the Discovery Lead. Collaborate with stakeholders to validate the accuracy and completeness of gathered information, and create artifacts such as current-state process maps and experience maps. Partner with business, technology, and analytics teams to identify, prioritize, and track automation opportunities from idea through execution. Facilitate cross-functional collaboration between business and technology stakeholders, ensuring clear communication and coordination. Assist in solution design and implementation, contributing to process improvements and automation initiatives. Drive automation initiatives end-to-end, from opportunity identification and qualification to preparing detailed cost-benefit analyses (CBAs) and ensuring successful delivery. Contribute actively to the growth of the Intelligent Automation (IA) program, including participation in Proofs of Concept (POCs) and Proofs of Value (POVs) for new IA technologies. Skills & Qualifications Certified in Lean Six Sigma Green Belt, Value Stream Mapping, and/or Process Mapping methodologies. Minimum of 2 years in Automation & Analytics, including working knowledge of RPA tools (such as Blue Prism, UiPath, Automation Anywhere, or PEGA). Hands-on experience with OCR/ICR tools like ABBYY, Kofax, AWS Textract, or Vidado. Familiarity with NLP and chatbot analytics, including tools like Amazon Comprehend. Exposure to machine learning platforms (e.g. Amazon Lex, BigML, IBM Watson Studio) and data visualization tools (e.g. Tableau). Experience with process assessment using IA toolkits. Proven ability to collaborate within large, cross-functional teams aligned with strategic objectives. 
Strong analytical mindset with attention to detail, coupled with excellent verbal and written communication. Skills Mandatory Skills: Estimation, Architecture Principle Design, RPA - Automation Anywhere, RPA - Blue Prism, RPA - UiPath, Business Architecture, Business Process Design, Mapping, Capacity Planning, EA Frameworks, IDP - ABBYY FlexiCapture, IDP - Automation Anywhere Document Automation, IDP - Azure AI Document Intelligence, IDP - Blue Prism Decipher, IDP - UiPath Document Understanding, PM - Automation Anywhere, PM - Celonis, PM - UiPath, RPA - Microsoft Power Automate, RPA - SAP IRPA, LC - Xceptor, Chatbot - Kore.AI, Automation, IDP - ABBYY Vantage, IDP - Google Doc AI, IDP - Kofax, IDP - LTIMindtree Aspect, PM - Microsoft, PM - Stereologic, RPA - Workfusion About LTIMindtree LTIMindtree is a global technology consulting and digital solutions company that enables enterprises across industries to reimagine business models, accelerate innovation, and maximize growth by harnessing digital technologies. As a digital transformation partner to more than 700 clients, LTIMindtree brings extensive domain and technology expertise to help drive superior competitive differentiation, customer experiences, and business outcomes in a converging world. Powered by 86,000+ talented and entrepreneurial professionals across more than 40 countries, LTIMindtree — a Larsen & Toubro Group company — solves the most complex business challenges and delivers transformation at scale. For more information, please visit https://www.ltimindtree.com/. Please also note that neither LTIMindtree nor any of its authorized recruitment agencies/partners charge any candidate registration fee or any other fees from talent (candidates) towards appearing for an interview or securing employment/internship. Candidates shall be solely responsible for verifying the credentials of any agency/consultant that claims to be working with LTIMindtree for recruitment. 
Please note that anyone who relies on the representations made by fraudulent employment agencies does so at their own risk, and LTIMindtree disclaims any liability in case of loss or damage suffered as a consequence of the same. Recruitment Fraud Alert - https://www.ltimindtree.com/recruitment-fraud-alert/
Posted 1 month ago
4.0 years
0 Lacs
India
Remote
Key Responsibilities Develop and deploy Generative AI solutions using AWS Bedrock (Knowledge Bases, Agents) and LangChain. Design RAG systems with Amazon Aurora PostgreSQL (Vector DB) and optimize LLM prompts (e.g., Claude 3.5 Sonnet). Implement document processing pipelines using Amazon Textract (OCR), S3 event triggers, and Lambda. Orchestrate multi-agent workflows with AWS Step Functions and EventBridge. Integrate RabbitMQ for asynchronous data handling and ensure data validation with PostgreSQL/MySQL. Collaborate with cross-functional teams to align AI solutions with business needs. Technical Requirements Must-Have: 4+ years in Python (Boto3, Pandas, NumPy) and LangChain/similar frameworks. Strong AWS expertise: Lambda, S3, Textract, Bedrock, Aurora PostgreSQL, Step Functions. Experience with LLM prompt engineering (Claude 3.5, GPT-4) and RAG pipelines. Knowledge of vector databases, OCR, and document processing (financial/legal domains). Location: Remote/Bengaluru/Hyderabad/Pune (India)
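The S3-trigger/Lambda/Textract pipeline described above can be sketched as a Lambda handler that parses the standard S3 event-notification shape and hands the located object to Textract. To keep the sketch self-contained, the Textract call is shown only as a comment; the bucket and key names are hypothetical.

```python
import urllib.parse

# Sketch of an S3-triggered document-processing Lambda. The event shape
# follows the standard S3 notification format; S3 url-encodes object keys
# (spaces become '+'), hence the unquote_plus.

def parse_s3_event(event):
    """Yield (bucket, key) pairs from an S3 event notification."""
    for record in event.get("Records", []):
        s3 = record["s3"]
        key = urllib.parse.unquote_plus(s3["object"]["key"])
        yield s3["bucket"]["name"], key

def handler(event, context=None):
    results = []
    for bucket, key in parse_s3_event(event):
        # In the real pipeline this is where Textract would be invoked, e.g.:
        # textract = boto3.client("textract")
        # textract.start_document_text_detection(
        #     DocumentLocation={"S3Object": {"Bucket": bucket, "Name": key}})
        results.append({"bucket": bucket, "key": key})
    return results

event = {"Records": [{"s3": {"bucket": {"name": "invoices"},
                             "object": {"key": "2024/inv+001.pdf"}}}]}
out = handler(event)
```

Downstream, Step Functions or EventBridge would poll the Textract job and route extracted text into the RAG/validation stages mentioned above.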
Posted 1 month ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About Us Yubi stands for ubiquitous. But Yubi will also stand for transparency, collaboration, and the power of possibility. From being a disruptor in India’s debt market to marching towards global corporate markets, and from one product to a holistic product suite of seven products, Yubi is the place to unleash potential. Freedom, not fear. Avenues, not roadblocks. Opportunity, not obstacles. About Yubi Yubi, formerly known as CredAvenue, is re-defining global debt markets by freeing the flow of finance between borrowers, lenders, and investors. We are the world's possibility platform for the discovery, investment, fulfillment, and collection of any debt solution. At Yubi, opportunities are plenty, and we equip you with tools to seize them. In March 2022, we became India's fastest fintech and most impactful startup to join the unicorn club with a Series B fundraising round of $137 million. In 2020, we began our journey with a vision of transforming and deepening the global institutional debt market through technology. Our two-sided debt marketplace helps institutional and HNI investors find the widest network of corporate borrowers and debt products on one side and helps corporates discover investors and access debt capital efficiently on the other side. Switching between platforms is easy, which means investors can lend, invest and trade bonds - all in one place. Our platforms shake up the traditional debt ecosystem and offer new ways of digital finance. Yubi Credit Marketplace - With the largest selection of lenders on one platform, our credit marketplace helps enterprises partner with lenders of their choice for any capital requirements. 
Yubi Invest - Fixed-income securities platform for wealth managers & financial advisors to channel client investments in fixed income. Financial Services Platform - Designed for financial institutions to manage co-lending partnerships & asset-based securitization. Spocto - Debt recovery & risk mitigation platform. Accumn - Dedicated SaaS solutions platform powered by decision-grade data, analytics, pattern identifications, early warning signals, and predictions for lenders, investors, and business enterprises. So far, we have onboarded more than 17,000 enterprises, 6,200 investors and lenders, and facilitated debt volumes of over INR 1,40,000 crore. Backed by marquee investors like Insight Partners, B Capital Group, Dragoneer, Sequoia Capital, LightSpeed and Lightrock, we are the only-of-its-kind debt platform globally, revolutionizing the segment. At Yubi, people are at the core of the business and our most valuable assets. Yubi is constantly growing, with 1000+ like-minded individuals today who are changing the way people perceive debt. We are a fun bunch who are highly motivated and driven to create a purposeful impact. Come join the club to be a part of our epic growth story. Requirements Key Responsibilities: Lead and mentor a dynamic Data Science team in developing scalable, reusable tools and capabilities to advance machine learning models, specializing in computer vision, natural language processing, API development, and product building. Drive innovative solutions for complex CV-NLP challenges, including tasks like image classification, data extraction, text classification, and summarization, leveraging a diverse set of data inputs such as images, documents, and text. Collaborate with cross-functional teams, including DevOps and Data Engineering, to design and implement efficient ML pipelines that facilitate seamless model integration and deployment in production environments. 
Spearhead the optimization of the model development lifecycle, focusing on scalability for training and production scoring to manage significant data volumes and user traffic. Implement cutting-edge technologies and techniques to enhance model training throughput and response times. Required Experience & Expertise: 3+ years of experience in developing computer vision models and applications. Extensive knowledge and experience in Data Science and Machine Learning techniques, with a proven track record in leading and executing complex projects. Deep understanding of the entire ML model development lifecycle, including design, development, training, testing/evaluation, and deployment, with the ability to guide best practices. Expertise in writing high-quality, reusable code for various stages of model development, including training, testing, and deployment. Advanced proficiency in Python programming, with extensive experience in ML frameworks such as Scikit-learn, TensorFlow, and Keras, and API development frameworks such as Django and FastAPI. Demonstrated success in overcoming OCR challenges using advanced methodologies and libraries like Tesseract, Keras-OCR, EasyOCR, etc. Proven experience in architecting reusable APIs to integrate OCR capabilities across diverse applications and use cases. Proficiency with public cloud OCR services like AWS Textract, GCP Vision, and Document AI. History of integrating OCR solutions into production systems for efficient text extraction from various media, including images and PDFs. Comprehensive understanding of convolutional neural networks (CNNs) and hands-on experience with deep learning models, such as YOLO. Strong capability to prototype, evaluate, and implement state-of-the-art ML advancements, particularly in OCR and CV-NLP. Extensive experience in NLP tasks, such as Named Entity Recognition (NER), text classification, and fine-tuning of Large Language Models (LLMs). 
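One of the "OCR challenges" hinted at above is post-processing: engines like Tesseract or EasyOCR commonly confuse visually similar characters (O/0, l/1, S/5) in fields that are known to be numeric, such as invoice totals. A minimal sketch of that cleanup step; the confusion map is an illustrative assumption, not an exhaustive one.

```python
# Normalize common OCR character confusions in a field known to be numeric,
# then parse it. Returns None when the field still cannot be parsed.

CONFUSIONS = str.maketrans({"O": "0", "o": "0", "l": "1", "I": "1",
                            "S": "5", "B": "8"})

def normalize_numeric(raw):
    """Clean an OCR-read numeric field and parse it, or return None."""
    cleaned = raw.translate(CONFUSIONS).replace(",", "").strip()
    try:
        return float(cleaned)
    except ValueError:
        return None

value = normalize_numeric("1,2O5.5O")   # OCR misread of "1,205.50"
```

Real pipelines apply such rules only to fields whose expected type is known (amount, date, id), since the same substitutions would corrupt free text.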
This senior role is tailored for visionary professionals eager to push the boundaries of CV-NLP and drive impactful data-driven innovations using both well-established methods and the latest technological advancements. Benefits We are committed to creating a diverse environment and are proud to be an equal-opportunity employer. All qualified applicants receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, or age.
Posted 2 months ago
8.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems. Join PwC US - Acceleration Center as a Manager of GenAI Data Science to lead innovative projects and drive significant advancements in GenAI solutions. We offer a competitive compensation package, a collaborative work environment, and ample opportunities for professional growth and impact. Years of Experience: Candidates with 8+ years of hands on experience Responsibilities Lead and mentor a team of data scientists in understanding business requirements and applying GenAI technologies to solve complex problems. Oversee the development, implementation, and optimization of machine learning models and algorithms for various GenAI projects. Direct the data preparation process, including data cleaning, preprocessing, and feature engineering, to ensure data quality and readiness for analysis. Collaborate with data engineers and software developers to streamline data processing and integration into machine learning pipelines. Evaluate model performance rigorously using advanced metrics and testing methodologies to ensure robustness and effectiveness. Spearhead the deployment of production-ready machine learning applications, ensuring scalability and reliability. Apply expert programming skills in Python, R, or Scala to develop high-quality software components for data analysis and machine learning. Utilize Kubernetes for efficient container orchestration and deployment of machine learning applications. 
Design and implement innovative data-driven solutions such as chatbots using the latest GenAI technologies. Communicate complex data insights and recommendations to senior stakeholders through compelling visualizations, reports, and presentations. Lead the adoption of cutting-edge GenAI technologies and methodologies to continuously improve data science practices. Champion knowledge sharing and skill development within the team to foster an environment of continuous learning and innovation. Requirements 8-10 years of relevant experience in data science, with significant expertise in GenAI projects. Advanced programming skills in Python, R, or Scala, and proficiency in machine learning libraries like TensorFlow, PyTorch, or scikit-learn. Extensive experience in data preprocessing, feature engineering, and statistical analysis. Strong knowledge of cloud computing platforms such as AWS, Azure, or Google Cloud, and data visualization techniques. Demonstrated leadership in managing data science teams and projects. Exceptional problem-solving, analytical, and project management skills. Excellent communication and interpersonal skills, with the ability to lead and collaborate effectively in a dynamic environment. Preferred Qualifications Experience with object-oriented programming languages such as Java, C++, or C#. Proven track record of developing and deploying machine learning applications in production environments. Understanding of data privacy and compliance regulations in a corporate setting. Relevant advanced certifications in data science or GenAI technologies. Nice To Have Skills Experience with specific tools such as Azure AI Search, Azure Document Intelligence, Azure OpenAI, AWS Textract, AWS OpenSearch, and AWS Bedrock. Familiarity with LLM-backed agent frameworks like AutoGen, LangChain, and Semantic Kernel, and experience in chatbot development. Professional And Educational Background Any graduate / BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master’s Degree / MBA
Posted 2 months ago
0 years
0 Lacs
India
On-site
Summary As a Senior Software Engineer for FINEOS data and digital products, you will be designing and implementing innovative products in AI, ML, and data platforms. You will collaborate with other Engineers and Architects in FINEOS to deliver data engineering capabilities that integrate AI/ML data products into the core AdminSuite platform. Python, microservices, and data engineering principles in a native AWS stack are the primary technical skills required to be successful in this position. Responsibilities (Other duties may be assigned.) Product engineering delivery – Translate high-level designs into smaller components for end-to-end solution delivery. Ability to code and review code of peers to enforce good coding practices, sound data structure choices, and efficient methods. Product deployment – Well versed in AWS DevOps automation to drive CI/CD pipelines, unit tests, automated integration tests, version management, and promotion strategy across different environments. Product maintenance – Manage the current portfolio of AI/ML data products, keeping underlying AWS components up to date so products remain on a current stack and marketable. Education and/or Experience Senior Python engineer with over seven years of experience successfully developing and deploying Python cloud-based applications and services. Demonstrated proficiency in delivering scalable applications, optimizing application performance, and ensuring robust security measures. Knowledge, Skills and Abilities Building microservices and event-based applications in serverless architecture. Storing and managing large volumes of data in objects and databases. Continuous Integration/Continuous Deployment (CI/CD) pipelines for automated testing and deployment. Monitoring and logging tools for application performance and error tracking. Knowledge of best practices for securing AWS resources and data. Proficient in agile development practices. 
Experience working in large, complex Enterprise solutions with cross-geography, cross-time-zone teams. Proficient in MS Office applications, such as Word, Excel, PowerPoint, etc. Familiar with operating systems and enterprise applications, such as Windows, SuccessFactors, etc. Technical Skills Experience in frameworks and Python libraries such as Flask, Django, Pandas, and NumPy. Working with NoSQL databases for high-speed, flexible data storage. Containerization for consistent deployment. Experience in operationalizing ML models in production or building GenAI applications using Textract, SageMaker, and Bedrock. Language Skills Ability to speak the English language proficiently, both verbally and in writing, to collaborate with global teams. Travel Requirements This position does not require travel. Work Environment The work environment characteristics described here are representative of those an employee encounters while performing the essential functions of this job. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions. Employee works primarily in a home office environment. The home office must be a well-defined work area, separate from normal domestic activity and complete with all essential technology including, but not limited to, separate phone, scanner, printer, computer, etc. as required in order to effectively perform their duties. Work Requirements Compliance with all relevant FINEOS Global policies and procedures related to Quality, Security, Safety, Business Continuity, and Environmental systems. Travel and fieldwork, including international travel, may be required. Therefore, employee must possess, or be able to acquire, a valid passport. Must be legally eligible to work in the country in which you are hired. FINEOS is an Equal Opportunity Employer. 
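The "event-based applications in serverless architecture" skill above usually implies at-least-once message delivery (e.g. from an SQS-like queue), so consumers must be idempotent. A minimal sketch of a deduplicating consumer; the message shape and in-memory dedupe store are illustrative assumptions (a real system would persist seen ids in DynamoDB or similar).

```python
# Idempotent event consumer sketch: process each message exactly once
# even when the queue redelivers it. Message fields are hypothetical.

processed_ids = set()   # in practice: a durable store, not process memory
results = []

def consume(message):
    """Process a message once; skip redeliveries based on message id."""
    if message["id"] in processed_ids:
        return "skipped"
    processed_ids.add(message["id"])
    results.append(message["body"].upper())   # stand-in for real work
    return "processed"

statuses = [consume(m) for m in (
    {"id": "m-1", "body": "claim opened"},
    {"id": "m-1", "body": "claim opened"},   # redelivery of the same event
    {"id": "m-2", "body": "claim paid"},
)]
```

The design choice worth noting: deduplication happens in the consumer, because serverless queues generally guarantee at-least-once, not exactly-once, delivery.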
FINEOS does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
Posted 2 months ago
20.0 years
0 Lacs
Sholinganallur, Tamil Nadu, India
On-site
About Us For over 20 years, Smart Data Solutions has been partnering with leading payer organizations to provide automation and technology solutions enabling data standardization and workflow automation. The company brings a comprehensive set of turn-key services to handle all claims and claims-related information regardless of format (paper, fax, electronic), digitizing and normalizing it for seamless use by payer clients. Solutions include intelligent data capture, conversion and digitization, mailroom management, comprehensive clearinghouse services, and proprietary workflow offerings. SDS’ headquarters are just outside of St. Paul, MN, and the company leverages dedicated onshore and offshore resources as part of its service delivery model. The company counts over 420 healthcare organizations as clients, including multiple Blue Cross Blue Shield state plans, large regional health plans, and leading independent TPAs, handling over 500 million transactions of varying types annually with a 98%+ customer retention rate. SDS has also invested meaningfully in automation and machine learning capabilities across its tech-enabled processes to drive scalability and greater internal operating efficiency while also improving client results. SDS recently partnered with a leading growth-oriented investment firm, Parthenon Capital, to further accelerate expansion and product innovation. Location: 6th Floor, Block 4A, Millenia Business Park, Phase II, MGR Salai, Kandanchavadi, Perungudi, Chennai 600096, India. Smart Data Solutions is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, sex, sexual orientation, gender identity, religion, national origin, disability, veteran status, age, marital status, pregnancy, genetic information, or other legally protected status. To perform this job successfully, an individual must be able to perform each essential duty satisfactorily. 
The requirements listed above are representative of the knowledge, skill, and/or ability required. Reasonable accommodation may be made to enable individuals with disabilities to perform essential job functions. Due to access to Protected Healthcare Information, employees in this role must be free of felony convictions on a background check report. Responsibilities Duties and Responsibilities include but are not limited to: Design and build ML pipelines for OCR extraction, document image processing, and text classification tasks. Fine-tune or prompt large language models (LLMs) (e.g., Qwen, GPT, LLaMA, Mistral) for domain-specific use cases. Develop systems to extract structured data from scanned or unstructured documents (PDFs, images, TIFs). Integrate OCR engines (Tesseract, EasyOCR, AWS Textract, etc.) and improve their accuracy via pre-/post-processing. Handle natural language processing (NLP) tasks such as named entity recognition (NER), summarization, classification, and semantic similarity. Collaborate with product managers, data engineers, and backend teams to productionize ML models. Evaluate models using metrics like precision, recall, F1-score, and confusion matrix, and improve model robustness and generalizability. Maintain proper versioning, reproducibility, and monitoring of ML models in production. The duties set forth above are essential job functions for the role. Reasonable accommodations may be made to enable individuals with disabilities to perform essential job functions. Skills And Qualifications 4–5 years of experience in machine learning, NLP, or AI roles. Proficiency with Python and ML libraries such as PyTorch, TensorFlow, scikit-learn, and Hugging Face Transformers. Experience with LLMs (open-source or proprietary), including fine-tuning or prompt engineering. Solid experience in OCR tools (Tesseract, PaddleOCR, etc.) and document parsing. 
Strong background in text classification, tokenization, and vectorization techniques (TF-IDF, embeddings, etc.). Knowledge of handling unstructured data (text, scanned images, forms). Familiarity with MLOps tools: MLflow, Docker, Git, and model serving frameworks. Ability to write clean, modular, and production-ready code. Experience working with medical, legal, or financial document processing. Exposure to vector databases (e.g., FAISS, Pinecone, Weaviate) and semantic search. Understanding of document layout analysis (e.g., LayoutLM, Donut, DocTR). Familiarity with cloud platforms (AWS, GCP, Azure) and deploying models at scale.
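The evaluation metrics named in this posting (precision, recall, F1-score, confusion matrix) can be computed from scratch for a binary classifier; the labels below are toy values used purely for illustration.

```python
# Precision, recall, and F1 derived from confusion-matrix counts
# (true positives, false positives, false negatives) for one positive class.

def prf1(y_true, y_pred, positive=1):
    """Return (precision, recall, F1) for the given positive label."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

p, r, f = prf1([1, 1, 0, 1, 0], [1, 0, 0, 1, 1])
```

In practice a library such as scikit-learn supplies these, but the hand computation makes the trade-off explicit: precision penalizes false positives, recall penalizes false negatives, and F1 is their harmonic mean.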
Posted 2 months ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Clarivate is on the lookout for a Sr. Software Engineer ML (machine learning) to join our Patent Service team in Noida. The successful candidate will focus on supporting machine learning (ML) projects: deploying, scaling, and maintaining ML models in production environments, and working closely with data scientists, ML engineers, and software developers to architect robust infrastructure, implement automation pipelines, and ensure the reliability and scalability of our ML systems. The ideal candidate should be eager to learn, equipped with strong hands-on technical and analytical thinking skills, have a passion for teamwork, and stay updated with the latest technological trends. About You – Experience, Education, Skills, And Accomplishments A Bachelor's or Master's degree in Engineering (BE, ME, B.Tech, M.Tech, MCA, MS) with strong communication and reasoning abilities is required. Proven experience as a Machine Learning Engineer or in a similar position. Deep knowledge of math, probability, statistics, and algorithms. Outstanding analytical and problem-solving skills. Understanding of data structures, data modeling, and software architecture. Good understanding of ML concepts and frameworks (e.g., TensorFlow, Keras, PyTorch). Proficiency with Python and basic libraries for machine learning such as scikit-learn and pandas. Expertise in prompt engineering. Expertise in visualizing and manipulating big datasets. Working experience managing ML workloads in production. Implementing and/or practicing MLOps or LLMOps concepts. Additionally, It Would Be Advantageous If You Have Experience in Terraform or similar, and IaC in general. Familiarity with AWS Bedrock. Experience with OCR engines and solutions, e.g. AWS Textract, Google Cloud Vision. Interest in exploring and adopting Data Science methodologies and AI/ML technologies to optimize project outcomes. 
Experience working in a CI/CD setup with multiple environments, and with an ability to manage code and deployments towards incrementally faster releases. Experience with RDBMS and NoSQL databases, particularly MySQL or PostgreSQL. What will you be doing in this role? Overall, you will play a pivotal role in driving the success of development projects and achieving business objectives through innovative and efficient agile software development practices. Designing and developing machine learning systems. Implementing appropriate ML algorithms, analyzing ML algorithms that could be used to solve a given problem, and ranking them by their success probability. Running machine learning tests and experiments, performing statistical analysis and fine-tuning using test results, and training and retraining systems when necessary. Implement monitoring and alerting systems to track the performance and health of ML models in production. Ensure security best practices are followed in the deployment and management of ML systems. Optimize infrastructure for performance, scalability, and cost efficiency. Develop and maintain CI/CD pipelines for automated model training, testing, and deployment. Troubleshoot issues related to infrastructure, deployments, and performance of ML models. Stay up to date with the latest advancements in ML technologies and evaluate their potential impact on our workflows. About The Team Our team comprises driven professionals who are deeply committed to leveraging technology to make a tangible impact in the patent services area. Joining us, you'll thrive in a multi-region, cross-cultural environment, collaborating on cutting-edge technologies with a strong emphasis on a user-centric approach. At Clarivate, we are committed to providing equal employment opportunities for all qualified persons with respect to hiring, compensation, promotion, training, and other terms, conditions, and privileges of employment.
We comply with applicable laws and regulations governing non-discrimination in all locations.
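The responsibility above of analyzing candidate ML algorithms and ranking them can be sketched with cross-validation in scikit-learn (the dataset and the two candidate models are placeholders for illustration):

```python
# Rank candidate classifiers by mean cross-validated accuracy (sketch).
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(random_state=0),
}

# Score each candidate with 5-fold cross-validation, then sort best-first.
ranking = sorted(
    ((name, cross_val_score(model, X, y, cv=5).mean())
     for name, model in candidates.items()),
    key=lambda pair: pair[1],
    reverse=True,
)

for name, score in ranking:
    print(f"{name}: {score:.3f}")
```

The top-ranked candidate would then move on to fine-tuning and retraining as described above; in practice the scoring metric and fold count depend on the problem.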
Posted 2 months ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems. PwC US - Acceleration Center is seeking a highly skilled and experienced GenAI Data Scientist to join our team at Senior Associate level. As a GenAI Data Scientist, you will play a critical role in developing and implementing machine learning models and algorithms for our GenAI projects. The ideal candidate should have a strong background in data science, with a focus on GenAI technologies, and possess a solid understanding of statistical analysis, machine learning, data visualization, and application programming. Years of Experience: Candidates with 4+ years of hands-on experience Responsibilities Collaborate with cross-functional teams to understand business requirements and identify opportunities for applying GenAI technologies. Develop and implement machine learning models and algorithms for GenAI projects. Perform data cleaning, preprocessing, and feature engineering to prepare data for analysis. Collaborate with data engineers to ensure efficient data processing and integration into machine learning pipelines. Validate and evaluate model performance using appropriate metrics and techniques. Develop and deploy production-ready machine learning applications and solutions. Utilize object-oriented programming skills to build robust and scalable software components. Utilize Kubernetes for container orchestration and deployment. Design and build chatbots using GenAI technologies.
Communicate findings and insights to stakeholders through data visualizations, reports, and presentations. Stay up-to-date with the latest advancements in GenAI technologies and recommend innovative solutions to enhance data science processes. Requirements 3-5 years of relevant technical/technology experience, with a focus on GenAI projects. Strong programming skills in languages such as Python, R, or Scala. Proficiency in machine learning libraries and frameworks such as TensorFlow, PyTorch, or scikit-learn. Experience with data preprocessing, feature engineering, and data wrangling techniques. Solid understanding of statistical analysis, hypothesis testing, and experimental design. Familiarity with cloud computing platforms such as AWS, Azure, or Google Cloud. Knowledge of data visualization tools and techniques. Strong problem-solving and analytical skills. Excellent communication and collaboration abilities. Ability to work in a fast-paced and dynamic environment. Preferred Qualifications Experience with object-oriented programming languages such as Java, C++, or C#. Experience with developing and deploying machine learning applications in production environments. Understanding of data privacy and compliance regulations. Relevant certifications in data science or GenAI technologies. Nice To Have Skills Experience with Azure AI Search, Azure Doc Intelligence, Azure OpenAI, AWS Textract, AWS Open Search, AWS Bedrock. Familiarity with LLM-backed agent frameworks such as Autogen, Langchain, Semantic Kernel, etc. Experience in chatbot design and development. Professional And Educational Background Any graduate / BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master’s Degree / MBA
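The "validate and evaluate model performance using appropriate metrics" responsibility above can be sketched with scikit-learn on a held-out test split (the data here is synthetic, generated purely for illustration):

```python
# Evaluate a trained classifier with standard metrics (sketch; data is synthetic).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(random_state=42).fit(X_train, y_train)
pred = model.predict(X_test)

# Report metrics on data the model never saw during training.
print("accuracy:", accuracy_score(y_test, pred))
print("f1:", f1_score(y_test, pred))
```

Which metrics are "appropriate" depends on the problem: F1 or precision/recall for imbalanced classes, calibration or ranking metrics for probabilistic outputs.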
Posted 2 months ago
4.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems. PwC US - Acceleration Center is seeking a highly skilled and experienced GenAI Data Scientist to join our team at Senior Associate level. As a GenAI Data Scientist, you will play a critical role in developing and implementing machine learning models and algorithms for our GenAI projects. The ideal candidate should have a strong background in data science, with a focus on GenAI technologies, and possess a solid understanding of statistical analysis, machine learning, data visualization, and application programming. Years of Experience: Candidates with 4+ years of hands-on experience Responsibilities Collaborate with cross-functional teams to understand business requirements and identify opportunities for applying GenAI technologies. Develop and implement machine learning models and algorithms for GenAI projects. Perform data cleaning, preprocessing, and feature engineering to prepare data for analysis. Collaborate with data engineers to ensure efficient data processing and integration into machine learning pipelines. Validate and evaluate model performance using appropriate metrics and techniques. Develop and deploy production-ready machine learning applications and solutions. Utilize object-oriented programming skills to build robust and scalable software components. Utilize Kubernetes for container orchestration and deployment. Design and build chatbots using GenAI technologies.
Communicate findings and insights to stakeholders through data visualizations, reports, and presentations. Stay up-to-date with the latest advancements in GenAI technologies and recommend innovative solutions to enhance data science processes. Requirements 3-5 years of relevant technical/technology experience, with a focus on GenAI projects. Strong programming skills in languages such as Python, R, or Scala. Proficiency in machine learning libraries and frameworks such as TensorFlow, PyTorch, or scikit-learn. Experience with data preprocessing, feature engineering, and data wrangling techniques. Solid understanding of statistical analysis, hypothesis testing, and experimental design. Familiarity with cloud computing platforms such as AWS, Azure, or Google Cloud. Knowledge of data visualization tools and techniques. Strong problem-solving and analytical skills. Excellent communication and collaboration abilities. Ability to work in a fast-paced and dynamic environment. Preferred Qualifications Experience with object-oriented programming languages such as Java, C++, or C#. Experience with developing and deploying machine learning applications in production environments. Understanding of data privacy and compliance regulations. Relevant certifications in data science or GenAI technologies. Nice To Have Skills Experience with Azure AI Search, Azure Doc Intelligence, Azure OpenAI, AWS Textract, AWS Open Search, AWS Bedrock. Familiarity with LLM-backed agent frameworks such as Autogen, Langchain, Semantic Kernel, etc. Experience in chatbot design and development. Professional And Educational Background Any graduate / BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master’s Degree / MBA
Posted 2 months ago
4.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems. Years of Experience: Candidates with 4+ years of hands-on experience PwC US - Acceleration Center is seeking a highly skilled and experienced GenAI Data Scientist to join our team at Senior Associate level. As a GenAI Data Scientist, you will play a critical role in developing and implementing machine learning models and algorithms for our GenAI projects. The ideal candidate should have a strong background in data science, with a focus on GenAI technologies, and possess a solid understanding of statistical analysis, machine learning, data visualization, and application programming. Responsibilities Collaborate with cross-functional teams to understand business requirements and identify opportunities for applying GenAI technologies. Develop and implement machine learning models and algorithms for GenAI projects. Perform data cleaning, preprocessing, and feature engineering to prepare data for analysis. Collaborate with data engineers to ensure efficient data processing and integration into machine learning pipelines. Validate and evaluate model performance using appropriate metrics and techniques. Develop and deploy production-ready machine learning applications and solutions. Utilize object-oriented programming skills to build robust and scalable software components. Utilize Kubernetes for container orchestration and deployment. Design and build chatbots using GenAI technologies.
Communicate findings and insights to stakeholders through data visualizations, reports, and presentations. Stay up-to-date with the latest advancements in GenAI technologies and recommend innovative solutions to enhance data science processes. Requirements Bachelor's or Master's degree in Data Science, Computer Science, Statistics, or a related field. 3-5 years of relevant technical/technology experience, with a focus on GenAI projects. Strong programming skills in languages such as Python, R, or Scala. Proficiency in machine learning libraries and frameworks such as TensorFlow, PyTorch, or scikit-learn. Experience with data preprocessing, feature engineering, and data wrangling techniques. Solid understanding of statistical analysis, hypothesis testing, and experimental design. Familiarity with cloud computing platforms such as AWS, Azure, or Google Cloud. Knowledge of data visualization tools and techniques. Strong problem-solving and analytical skills. Excellent communication and collaboration abilities. Ability to work in a fast-paced and dynamic environment. Preferred Qualifications Experience with object-oriented programming languages such as Java, C++, or C#. Experience with developing and deploying machine learning applications in production environments. Understanding of data privacy and compliance regulations. Relevant certifications in data science or GenAI technologies. Nice To Have Skills Experience with Azure AI Search, Azure Doc Intelligence, Azure OpenAI, AWS Textract, AWS Open Search, AWS Bedrock. Familiarity with LLM-backed agent frameworks such as Autogen, Langchain, Semantic Kernel, etc. Experience in chatbot design and development. Professional And Educational Background BE / B.Tech / MCA / M.Sc / M.E / M.Tech / MBA
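The data cleaning, preprocessing, and feature-engineering step listed in the responsibilities above can be sketched with pandas (the columns and values are invented for illustration):

```python
# Minimal cleaning + feature-engineering sketch with pandas.
import pandas as pd

df = pd.DataFrame({
    "amount": [100.0, None, 250.0, 80.0],
    "category": ["invoice", "receipt", "invoice", None],
})

# Impute missing numeric values with the median, missing categories with a sentinel.
df["amount"] = df["amount"].fillna(df["amount"].median())
df["category"] = df["category"].fillna("unknown")

# One-hot encode the categorical column into model-ready features.
features = pd.get_dummies(df, columns=["category"])
print(features.columns.tolist())
```

The resulting frame has no missing values and only numeric/boolean columns, so it can feed directly into a scikit-learn or TensorFlow pipeline.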
Posted 2 months ago
5.0 - 8.0 years
14 - 22 Lacs
Bengaluru, Mumbai (All Areas)
Work from Office
Hiring for Top IT Company - Designation: Python Developer. Skills: Python + PySpark. Location: Bang/Mumbai. Exp: 5-8 yrs. Best CTC. 9783460933 9549198246 9982845569 7665831761 6377522517 7240017049 Team Converse
Posted 2 months ago