
121 Textract Jobs - Page 2

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

0.0 - 31.0 years

3 - 9 Lacs

Work from home

Remote

Required Skill Sets & Qualifications

1. Technical Skills (The Builder's Toolkit)
- Core Programming: Expert proficiency in Python (essential for AI/ML libraries).
- AI/ML & NLP: Strong hands-on experience with:
  - Large Language Models (LLMs): Practical experience working with the APIs of OpenAI (GPT-4), Google Gemini, Anthropic Claude, or open-source models (LLaMA 2, Mistral). Prompt engineering is a key skill.
  - Frameworks: LangChain and LlamaIndex for building sophisticated agentic workflows.
  - Natural Language Processing (NLP): Libraries such as spaCy, NLTK, and Hugging Face Transformers.
  - Optical Character Recognition (OCR): Experience with tools such as Adobe Extract API, Google Document AI, Amazon Textract, or open-source options (Tesseract) for Indian documents.
- API Integration: Mastery in connecting systems via RESTful APIs and webhooks (e.g., connecting a chatbot to a CRM and a document database).
- Low-Code/No-Code Platforms: Experience leveraging platforms like Zapier, Make.com, n8n, or Microsoft Power Automate to quickly prototype and connect SaaS tools is a huge plus.
- Cloud & DevOps: Experience with cloud platforms (AWS, Google Cloud, Azure) and with deploying and maintaining AI models (e.g., using AWS SageMaker or Google Vertex AI).
- Data Security: Understanding of encryption, secure API protocols, and data anonymization techniques, crucial for handling sensitive financial data.

3. Soft Skills & Mindset (The Architect)
- Systems Thinking: Ability to see the entire customer and operational journey and build interconnected agents, not isolated bots.
- Problem-Scoping & Solutioning: Can break down a complex business problem (e.g., "analyze documents") into a technical workflow (e.g., trigger -> OCR -> data extraction -> validation -> CRM update); a sketch of such a workflow follows this listing.
- Agility & Learning: The AI field moves fast; a constant desire to learn and experiment with new tools and models is critical.
- Communication: Must be able to explain complex AI concepts to non-technical stakeholders (management, loan consultants).
- Project Management: Ability to manage this large-scale integration project, prioritize tasks, and deliver functional modules.

How to Apply
Interested candidates should submit their resume along with a cover letter or portfolio link that includes:
- Examples of previous AI automation projects you have built.
- A brief paragraph on how you would approach integrating any two of the systems mentioned above (e.g., connecting a document analysis system to a CRM).
- Any experience specific to the Indian financial sector.
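Below is a minimal sketch of the trigger -> OCR -> extraction -> validation -> CRM update workflow this listing describes, assuming Tesseract via pytesseract for OCR; the CRM endpoint, field names, and regexes are hypothetical placeholders, not part of the posting.

```python
# Minimal sketch: trigger -> OCR -> data extraction -> validation -> CRM update.
# All endpoint URLs and field names here are hypothetical placeholders.
import re
import requests
import pytesseract
from PIL import Image

CRM_API = "https://crm.example.com/api/leads"  # hypothetical CRM endpoint

def ocr_document(path: str) -> str:
    """Run open-source OCR (Tesseract) over a scanned document."""
    return pytesseract.image_to_string(Image.open(path))

def extract_fields(text: str) -> dict:
    """Pull a PAN number out of the raw OCR text with a regex."""
    pan = re.search(r"\b[A-Z]{5}[0-9]{4}[A-Z]\b", text)  # Indian PAN format
    return {"pan": pan.group(0) if pan else None, "raw_text": text}

def validate(fields: dict) -> bool:
    """Reject records with no PAN; real pipelines add many more checks."""
    return fields["pan"] is not None

def update_crm(fields: dict) -> None:
    """Push validated fields to the CRM over a REST API."""
    requests.post(CRM_API, json=fields, timeout=10).raise_for_status()

def handle_new_document(path: str) -> None:
    """The trigger entry point, e.g. called from a webhook or file watcher."""
    fields = extract_fields(ocr_document(path))
    if validate(fields):
        update_crm(fields)
```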

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems.

PwC US - Acceleration Center is seeking a highly skilled and experienced GenAI Data Scientist to join our team at Senior Associate level. As a GenAI Data Scientist, you will play a critical role in developing and implementing machine learning models and algorithms for our GenAI projects. The ideal candidate should have a strong background in data science, with a focus on GenAI technologies, and possess a solid understanding of statistical analysis, machine learning, data visualization, and application programming.

Years of Experience: Candidates with 4+ years of hands-on experience

Responsibilities
- Collaborate with cross-functional teams to understand business requirements and identify opportunities for applying GenAI technologies.
- Develop and implement machine learning models and algorithms for GenAI projects.
- Perform data cleaning, preprocessing, and feature engineering to prepare data for analysis.
- Collaborate with data engineers to ensure efficient data processing and integration into machine learning pipelines.
- Validate and evaluate model performance using appropriate metrics and techniques (a scikit-learn evaluation sketch follows this listing).
- Develop and deploy production-ready machine learning applications and solutions.
- Utilize object-oriented programming skills to build robust and scalable software components.
- Utilize Kubernetes for container orchestration and deployment.
- Design and build chatbots using GenAI technologies.
- Communicate findings and insights to stakeholders through data visualizations, reports, and presentations.
- Stay up-to-date with the latest advancements in GenAI technologies and recommend innovative solutions to enhance data science processes.

Requirements
- 3-5 years of relevant technical/technology experience, with a focus on GenAI projects.
- Strong programming skills in languages such as Python, R, or Scala.
- Proficiency in machine learning libraries and frameworks such as TensorFlow, PyTorch, or scikit-learn.
- Experience with data preprocessing, feature engineering, and data wrangling techniques.
- Solid understanding of statistical analysis, hypothesis testing, and experimental design.
- Familiarity with cloud computing platforms such as AWS, Azure, or Google Cloud.
- Knowledge of data visualization tools and techniques.
- Strong problem-solving and analytical skills.
- Excellent communication and collaboration abilities.
- Ability to work in a fast-paced and dynamic environment.

Preferred Qualifications
- Experience with object-oriented programming languages such as Java, C++, or C#.
- Experience with developing and deploying machine learning applications in production environments.
- Understanding of data privacy and compliance regulations.
- Relevant certifications in data science or GenAI technologies.

Nice To Have Skills
- Experience with Azure AI Search, Azure Doc Intelligence, Azure OpenAI, AWS Textract, AWS OpenSearch, AWS Bedrock.
- Familiarity with LLM-backed agent frameworks such as AutoGen, LangChain, Semantic Kernel, etc.
- Experience in chatbot design and development.

Professional And Educational Background
Any graduate / BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master’s Degree / MBA
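As a companion to the model-validation responsibility above, here is a minimal sketch of evaluating a classifier with scikit-learn metrics; the synthetic dataset and random-forest model are stand-ins, not anything prescribed by the role.

```python
# Minimal sketch of model validation with scikit-learn metrics.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(random_state=42).fit(X_train, y_train)

# Evaluate with metrics appropriate to the task: per-class precision/recall
# plus ranking quality (ROC AUC) on held-out data.
print(classification_report(y_test, model.predict(X_test)))
print("ROC AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```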

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

India

On-site

Responsibilities:
- Design, train, and deploy custom OCR models using Azure Document Intelligence and AWS Textract.
- Automate document classification and data extraction from structured and unstructured formats (PDFs, scanned images, invoices, etc.).
- Integrate OCR workflows with Azure Translator for multilingual document processing and AWS Comprehend for NLP-based insights.
- Utilize AWS Bedrock and Azure OpenAI to build GenAI-powered solutions for semantic understanding and summarization of documents.
- Apply AWS Rekognition for image-based document analysis and identity verification.
- Develop and maintain scalable pipelines using Python, Java, or .NET for OCR output post-processing (JSON/CSV formats); a post-processing sketch follows this listing.
- Collaborate with cross-functional teams to gather requirements and deliver tailored solutions.
- Ensure compliance with data privacy, security standards, and cloud governance policies.

Required Skills:
- At least 8 years of experience in OCR technologies and intelligent document processing.
- Proficiency in Azure AI Services, the AWS AI/ML stack, and GenAI frameworks.
- Strong programming skills in Python, Java, or .NET.
- Experience with Azure Form Recognizer, AWS Textract, Rekognition, and Bedrock.
- Familiarity with EC2, ECS, S3, Lambda, and Step Functions.
- Knowledge of document layout analysis (tables, forms, key-value pairs).
- Experience with NLP tools like AWS Comprehend and translation APIs.
- Understanding of GenAI model training, fine-tuning, and deployment strategies.
- Exposure to Consumer, Retail, and Logistics domain document workflows is an added advantage.
- Familiarity with Terraform, DevOps pipelines, and CI/CD practices is an additional benefit.
- Experience with CI/CD pipelines and Git for code management.
- Ability to quickly learn and develop expertise in existing, highly complex applications and architectures.
- Comfortable working in Agile projects.
- Clear and precise communication skills (mandatory, since this is a client-facing role).
- Strong problem-solving skills and attention to detail.
- Bachelor's degree in computer science, information technology, or a similar field (preferred).
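For the OCR post-processing pipeline mentioned above, here is a minimal sketch that flattens AWS Textract AnalyzeDocument FORMS output into key/value rows in a CSV; the bucket, file name, and region are placeholders, and a production pipeline would add confidence filtering and error handling.

```python
# Minimal sketch: Textract FORMS output -> key/value pairs -> CSV.
import csv
import boto3

textract = boto3.client("textract", region_name="us-east-1")

resp = textract.analyze_document(
    Document={"S3Object": {"Bucket": "my-docs-bucket", "Name": "invoice.png"}},
    FeatureTypes=["FORMS"],
)

blocks = {b["Id"]: b for b in resp["Blocks"]}

def text_of(block):
    """Concatenate the WORD children of a KEY or VALUE block."""
    words = []
    for rel in block.get("Relationships", []):
        if rel["Type"] == "CHILD":
            words += [blocks[i]["Text"] for i in rel["Ids"]
                      if blocks[i]["BlockType"] == "WORD"]
    return " ".join(words)

# Walk KEY blocks, follow their VALUE relationship, emit key/value pairs.
pairs = []
for b in resp["Blocks"]:
    if b["BlockType"] == "KEY_VALUE_SET" and "KEY" in b.get("EntityTypes", []):
        value_ids = [i for rel in b.get("Relationships", [])
                     if rel["Type"] == "VALUE" for i in rel["Ids"]]
        value = " ".join(text_of(blocks[i]) for i in value_ids)
        pairs.append((text_of(b), value))

with open("fields.csv", "w", newline="") as f:
    csv.writer(f).writerows([("key", "value"), *pairs])
```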

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

India

On-site

Responsibilities:
- Design, train, and deploy custom OCR models using Azure Document Intelligence and AWS Textract.
- Automate document classification and data extraction from structured and unstructured formats (PDFs, scanned images, invoices, etc.).
- Integrate OCR workflows with Azure Translator for multilingual document processing and AWS Comprehend for NLP-based insights.
- Utilize AWS Bedrock and Azure OpenAI to build GenAI-powered solutions for semantic understanding and summarization of documents.
- Apply AWS Rekognition for image-based document analysis and identity verification.
- Develop and maintain scalable pipelines using Python, Java, or .NET for OCR output post-processing (JSON/CSV formats).
- Collaborate with cross-functional teams to gather requirements and deliver tailored solutions.
- Ensure compliance with data privacy, security standards, and cloud governance policies.

Required Skills:
- At least 8 years of experience in OCR technologies and intelligent document processing.
- Proficiency in Azure AI Services, the AWS AI/ML stack, and GenAI frameworks.
- Strong programming skills in Python, Java, or .NET.
- Experience with Azure Form Recognizer, AWS Textract, Rekognition, and Bedrock.
- Familiarity with EC2, ECS, S3, Lambda, and Step Functions.
- Knowledge of document layout analysis (tables, forms, key-value pairs).
- Experience with NLP tools like AWS Comprehend and translation APIs.
- Understanding of GenAI model training, fine-tuning, and deployment strategies.
- Exposure to Consumer, Retail, and Logistics domain document workflows is an added advantage.
- Familiarity with Terraform, DevOps pipelines, and CI/CD practices is an additional benefit.
- Experience with CI/CD pipelines and Git for code management.
- Ability to quickly learn and develop expertise in existing, highly complex applications and architectures.
- Comfortable working in Agile projects.
- Clear and precise communication skills (mandatory, since this is a client-facing role).
- Strong problem-solving skills and attention to detail.
- Bachelor's degree in computer science, information technology, or a similar field (preferred).

Posted 2 weeks ago

Apply

2.0 years

8 - 18 Lacs

Delhi

On-site

Job description: We’re looking for a hands-on Data Engineer to manage and scale our data scraping pipelines across 60+ websites. The job involves handling OCR-processed PDFs, ensuring data quality, and building robust, self-healing workflows that fuel AI-driven insights.

You’ll Work On:
- Managing and optimizing Airflow scraping DAGs (a retry/alerting sketch follows this listing)
- Implementing validation checks, retry logic, and error alerts
- Cleaning and normalizing OCR text (Tesseract / AWS Textract)
- Handling deduplication, formatting, and missing data
- Maintaining MySQL/PostgreSQL data integrity
- Collaborating with ML engineers on downstream pipelines

What You Bring:
- 2-5 years of hands-on experience in Python data engineering
- Experience with Airflow, Pandas, and OCR tools
- Solid SQL skills and schema design (MySQL/PostgreSQL)
- Comfort with CSVs and building ETL pipelines

Required:
1. Scrapy or Selenium experience
2. CAPTCHA handling
3. Experience with PyMuPDF and regex
4. AWS S3
5. LangChain, LLMs, FastAPI
6. Streamlit
7. Matplotlib

Job Type: Full-time, day shift
Pay: ₹70,000.00 - ₹150,000.00 per month
Work Location: In person

Application Question(s):
1. Total years of experience in web scraping / data extraction
2. Have you worked with large-scale data pipelines?
3. Are you proficient in writing complex regex patterns for data extraction and cleaning?
4. Have you implemented or managed data pipelines using tools like Apache Airflow?
5. Years of experience with PDF parsing and OCR tools (e.g., Tesseract, Google Document AI, AWS Textract, etc.)
6. Years of experience handling complex PDF tables with merged rows, rotated layouts, or inconsistent formatting
7. Are you willing to relocate to Delhi if selected?
8. Current CTC
9. Expected CTC
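As a sketch of the Airflow duties above (retry logic and error alerts), here is a minimal DAG with retries and a failure callback; the task bodies and the alert hook are hypothetical placeholders.

```python
# Minimal Airflow DAG sketch with retries and an error-alert callback.
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.python import PythonOperator

def alert_on_failure(context):
    """Failure callback: in production this might page Slack or email."""
    print(f"Task {context['task_instance'].task_id} failed")

def scrape_site():
    ...  # fetch pages; raise on HTTP errors so Airflow retries

def validate_rows():
    ...  # schema/row-count checks; raise to block downstream loads

with DAG(
    dag_id="scraping_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={
        "retries": 3,                          # self-healing: retry transient failures
        "retry_delay": timedelta(minutes=5),
        "on_failure_callback": alert_on_failure,
    },
) as dag:
    scrape = PythonOperator(task_id="scrape", python_callable=scrape_site)
    validate = PythonOperator(task_id="validate", python_callable=validate_rows)
    scrape >> validate
```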

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

India

On-site

Responsibilities
- Design, train, and deploy custom OCR models leveraging Azure Document Intelligence and AWS Textract.
- Automate document classification and data extraction from structured/unstructured inputs (PDFs, scanned images, invoices, etc.).
- Build multilingual OCR workflows by integrating Azure Translator, and apply NLP insights using AWS Comprehend.
- Develop GenAI-powered solutions with AWS Bedrock and Azure OpenAI for semantic understanding, summarization, and contextual search of documents (a summarization sketch follows this listing).
- Utilize AWS Rekognition for image-based document analysis and identity verification.
- Implement and optimize scalable pipelines for OCR output post-processing (JSON/CSV) using Python, Java, or .NET.
- Collaborate with cross-functional teams to gather requirements and deliver tailored, production-grade solutions.
- Ensure compliance with data privacy, security, and governance standards across cloud platforms.

Required Skills
- 8+ years of experience in OCR technologies, intelligent document processing, and enterprise-scale AI/ML solutions.
- Expertise in Azure AI Services (Form Recognizer/Document Intelligence, Translator, OpenAI) and the AWS AI/ML stack (Textract, Comprehend, Rekognition, Bedrock).
- Strong coding skills in Python, Java, or .NET, with hands-on experience in post-processing OCR outputs.
- Familiarity with AWS services (EC2, ECS, S3, Lambda, Step Functions) and Azure cloud environments.
- Deep knowledge of document layout analysis (tables, forms, key-value pairs).
- Experience with NLP tools, translation APIs, and GenAI model training, fine-tuning, and deployment.
- Exposure to Consumer, Retail, and Logistics domain workflows (preferred).
- Understanding of Terraform, CI/CD pipelines, DevOps practices, and Git (advantage).
- Comfortable working in Agile projects, with excellent problem-solving and communication skills (client-facing role).
- Bachelor’s degree in Computer Science, IT, or equivalent (preferred).
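For the Bedrock summarization work described above, a minimal sketch using the Bedrock Runtime Converse API might look like the following; the model ID is just an example, and the guardrails, chunking, and retries a real pipeline needs are omitted.

```python
# Minimal sketch of document summarization via Amazon Bedrock's Converse API.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def summarize(document_text: str) -> str:
    resp = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
        messages=[{
            "role": "user",
            "content": [{"text": f"Summarize this document in 3 bullet points:\n\n{document_text}"}],
        }],
        inferenceConfig={"maxTokens": 512, "temperature": 0.2},
    )
    return resp["output"]["message"]["content"][0]["text"]

print(summarize("Invoice #123 from Acme Corp for 10 units of widgets ..."))
```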

Posted 2 weeks ago

Apply

0.0 - 5.0 years

0 - 1 Lacs

Delhi, Delhi

On-site

Job description: We’re looking for a hands-on Data Engineer to manage and scale our data scraping pipelines across 60+ websites. The job involves handling OCR-processed PDFs, ensuring data quality, and building robust, self-healing workflows that fuel AI-driven insights.

You’ll Work On:
- Managing and optimizing Airflow scraping DAGs
- Implementing validation checks, retry logic, and error alerts
- Cleaning and normalizing OCR text (Tesseract / AWS Textract)
- Handling deduplication, formatting, and missing data
- Maintaining MySQL/PostgreSQL data integrity
- Collaborating with ML engineers on downstream pipelines

What You Bring:
- 2-5 years of hands-on experience in Python data engineering
- Experience with Airflow, Pandas, and OCR tools
- Solid SQL skills and schema design (MySQL/PostgreSQL)
- Comfort with CSVs and building ETL pipelines

Required:
1. Scrapy or Selenium experience
2. CAPTCHA handling
3. Experience with PyMuPDF and regex
4. AWS S3
5. LangChain, LLMs, FastAPI
6. Streamlit
7. Matplotlib

Job Type: Full-time, day shift
Pay: ₹70,000.00 - ₹150,000.00 per month
Work Location: In person

Application Question(s):
1. Total years of experience in web scraping / data extraction
2. Have you worked with large-scale data pipelines?
3. Are you proficient in writing complex regex patterns for data extraction and cleaning?
4. Have you implemented or managed data pipelines using tools like Apache Airflow?
5. Years of experience with PDF parsing and OCR tools (e.g., Tesseract, Google Document AI, AWS Textract, etc.)
6. Years of experience handling complex PDF tables with merged rows, rotated layouts, or inconsistent formatting
7. Are you willing to relocate to Delhi if selected?
8. Current CTC
9. Expected CTC

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

India

On-site

Responsibilities
- Design, train, and deploy custom OCR models leveraging Azure Document Intelligence and AWS Textract.
- Automate document classification and data extraction from structured/unstructured inputs (PDFs, scanned images, invoices, etc.).
- Build multilingual OCR workflows by integrating Azure Translator, and apply NLP insights using AWS Comprehend.
- Develop GenAI-powered solutions with AWS Bedrock and Azure OpenAI for semantic understanding, summarization, and contextual search of documents.
- Utilize AWS Rekognition for image-based document analysis and identity verification.
- Implement and optimize scalable pipelines for OCR output post-processing (JSON/CSV) using Python, Java, or .NET.
- Collaborate with cross-functional teams to gather requirements and deliver tailored, production-grade solutions.
- Ensure compliance with data privacy, security, and governance standards across cloud platforms.

Required Skills
- 8+ years of experience in OCR technologies, intelligent document processing, and enterprise-scale AI/ML solutions.
- Expertise in Azure AI Services (Form Recognizer/Document Intelligence, Translator, OpenAI) and the AWS AI/ML stack (Textract, Comprehend, Rekognition, Bedrock).
- Strong coding skills in Python, Java, or .NET, with hands-on experience in post-processing OCR outputs.
- Familiarity with AWS services (EC2, ECS, S3, Lambda, Step Functions) and Azure cloud environments.
- Deep knowledge of document layout analysis (tables, forms, key-value pairs).
- Experience with NLP tools, translation APIs, and GenAI model training, fine-tuning, and deployment.
- Exposure to Consumer, Retail, and Logistics domain workflows (preferred).
- Understanding of Terraform, CI/CD pipelines, DevOps practices, and Git (advantage).
- Comfortable working in Agile projects, with excellent problem-solving and communication skills (client-facing role).
- Bachelor’s degree in Computer Science, IT, or equivalent (preferred).

Posted 2 weeks ago

Apply

3.0 - 6.0 years

4 - 9 Lacs

Gurgaon

On-site

Job Description:
Proficient in the following:
- Hands-on experience in various AI/ML techniques, such as deep learning, natural language processing (NLP), large language models (LLMs), etc.
- Develop and implement NLP solutions for text classification, entity extraction, and semantic search using libraries such as NLTK and spaCy (an entity-extraction sketch follows this listing).
- Experience in LangChain with a GenAI platform such as AWS Bedrock, Azure AI, or Google Vertex AI.
- Extensive experience in at least one cloud platform (e.g., AWS, GCP, Azure) and associated machine learning services, e.g., AWS Comprehend, Textract, SageMaker, etc.
- Experience working with cross-functional and distributed teams in a global and diverse environment.
- Proven excellent communication, presentation, and interpersonal skills, with the ability to explain complex AI/ML concepts and results to technical and non-technical audiences.
- Proven solid analytical, problem-solving, and decision-making skills, with the ability to balance innovation and pragmatism.
- Knowledge of RPA tools (UiPath).

Additional Advantage:
- Knowledge of HTML/CSS/jQuery/AngularJS/SP Service/REST API, PowerShell scripting
- Creating and consuming web services, web APIs, or WCF
- Application architecture and design patterns
- MySQL
- Writing stored procedures, triggers, and functions; designing DB schemas
- Experience with C# coding
- Experience with Visual Studio 2013/2015

Knowledge:
- 3-6 years of experience in analyzing and understanding application storyboards and/or use cases and developing functional application modules
- Come up with approaches for a given problem statement
- Define, architect, configure, and implement SharePoint solutions, including workflows/automation, applications, and dashboards
- Design, build, and maintain efficient and reusable C#.NET code
- Working knowledge of configuration and custom development of SharePoint components, including web parts, event receivers, timer jobs, templates, features, application pages, site pages, custom list types, site columns, content types, custom workflows, and site definitions
- Fix identified defects or observations that are potential impacts or risks for the functionality
- Ensure the best possible performance and quality of the application using project and standard best practices
- Design and develop web user interfaces with SharePoint forms and workflows
- Develop unit test cases and perform unit testing
- Work on creating database tables, stored procedures, functions, etc.
- Coordinate with the Agile team
- Strong verbal and written communication skills

Location: This position can be based in any of the following locations: Chennai, Gurgaon

Current Guardian Colleagues: Please apply through the internal Jobs Hub in Workday
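As an illustration of the NLP bullet above, here is a minimal spaCy entity-extraction sketch; it assumes the small English model is installed (python -m spacy download en_core_web_sm) and the sample sentence is invented.

```python
# Minimal sketch of entity extraction with spaCy.
import spacy

nlp = spacy.load("en_core_web_sm")

doc = nlp("An invoice from Acme Corp in Gurgaon dated 12 May 2024 totals $4,500.")

# Each entity carries a text span and a label (ORG, GPE, DATE, MONEY, ...).
for ent in doc.ents:
    print(ent.text, "->", ent.label_)
```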

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Role: Quality Assurance Engineer II

About The Role
GHX’s Intelligent Process Automation (IPA) platform eliminates manual, repetitive work and drives operational excellence. This role will focus on developing automation solutions that digitize business documents received via fax or PDF. You will work with UiPath’s intelligent document understanding, while ensuring the design and implementation are adaptable to future automation platforms. Automation solutions are deployed in AWS Cloud, so cloud familiarity is key. You’ll collaborate closely with senior engineers, business analysts, and product teams to deliver automation that improves accuracy, reduces errors, and boosts productivity.

Key Responsibilities

Testing & Validation
- Execute test cases for RPA workflows, document processing pipelines, and API integrations.
- Validate extracted data from OCR/IDP tools against business requirements (a validation-test sketch follows this listing).
- Assist in functional, regression, and UAT testing cycles.
- Document test results, log defects, and track them to closure.

Collaboration
- Work under the guidance of the QA Lead to ensure adequate test coverage.
- Collaborate with IPA engineers and business analysts to understand workflows and identify edge cases.
- Support business analysts in reviewing requirements for testability.

Quality Support
- Maintain test data, environment readiness, and test documentation.
- Participate in Agile ceremonies and contribute to sprint planning with test estimates.
- Learn and apply QA best practices, tools, and methodologies.

Required Qualifications
- Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent practical experience).
- 2-5 years of software QA/testing experience; exposure to RPA or intelligent automation is desirable.
- Basic understanding of RPA tools (UiPath, Automation Anywhere, or similar).
- Familiarity with test management tools (e.g., Jira, TestRail).
- Basic knowledge of API testing (Postman, REST) and databases (SQL).

Preferred Qualifications
- Exposure to OCR/IDP tools (UiPath Document Understanding, ABBYY, AWS Textract).
- Familiarity with AWS services (S3, Lambda, EC2, DynamoDB).
- Familiarity with OpenSearch and the ability to create dashboards to track business metrics.
- Basic scripting skills in Python, JavaScript, or VB.NET for test automation.
- ISTQB Foundation Level certification or equivalent.

Soft Skills
- Eagerness to learn and work with new technologies.
- Strong attention to detail and commitment to quality.
- Good communication and collaboration skills.
- Proactive attitude with the ability to follow through on tasks.

GHX: It's the way you do business in healthcare
Global Healthcare Exchange (GHX) enables better patient care and billions in savings for the healthcare community by maximizing automation, efficiency, and accuracy of business processes. GHX is a healthcare business and data automation company, empowering healthcare organizations to enable better patient care and maximize industry savings using our world-class cloud-based supply chain technology exchange platform, solutions, analytics, and services. We bring together healthcare providers, manufacturers, and distributors in North America and Europe who rely on smart, secure, healthcare-focused technology and comprehensive data to automate their business processes and make more informed decisions. It is our passion and vision for a more operationally efficient healthcare supply chain, helping organizations reduce, not shift, the cost of doing business, paving the way to delivering patient care more effectively. Together we take more than a billion dollars out of the cost of delivering healthcare every year. GHX is privately owned, operates in the United States, Canada, and Europe, and employs more than 1,000 people worldwide. Our corporate headquarters is in Colorado, with additional offices in Europe.

Disclaimer
Global Healthcare Exchange, LLC and its North American subsidiaries (collectively, “GHX”) provide equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, national origin, sex, sexual orientation, gender identity, religion, age, genetic information, disability, veteran status or any other status protected by applicable law. All qualified applicants will receive consideration for employment without regard to any status protected by applicable law. This EEO policy applies to all terms, conditions, and privileges of employment, including hiring, training and development, promotion, transfer, compensation, benefits, educational assistance, termination, layoffs, social and recreational programs, and retirement. GHX believes that employees should be provided with a working environment which enables each employee to be productive and to work to the best of his or her ability. We do not condone or tolerate an atmosphere of intimidation or harassment based on race, color, national origin, sex, sexual orientation, gender identity, religion, age, genetic information, disability, veteran status or any other status protected by applicable law. GHX expects and requires the cooperation of all employees in maintaining a discrimination and harassment-free atmosphere. Improper interference with the ability of GHX’s employees to perform their expected job duties is absolutely not tolerated. Read our GHX Privacy Policy.
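To make the "validate extracted data against business requirements" duty above concrete, here is a minimal pytest sketch; the extract_fields stub and the field formats it checks are hypothetical stand-ins for a real IDP pipeline's output.

```python
# Minimal sketch of validating OCR/IDP output against business rules with pytest.
import re
import pytest

def extract_fields(document_id: str) -> dict:
    """Stand-in for the real pipeline call (UiPath DU, Textract, ...)."""
    return {"invoice_number": "INV-00042", "total": "1,250.00", "date": "2024-05-12"}

@pytest.fixture
def fields():
    return extract_fields("sample-invoice")

def test_invoice_number_format(fields):
    assert re.fullmatch(r"INV-\d{5}", fields["invoice_number"])

def test_total_is_positive_amount(fields):
    assert float(fields["total"].replace(",", "")) > 0

def test_date_is_iso_formatted(fields):
    assert re.fullmatch(r"\d{4}-\d{2}-\d{2}", fields["date"])
```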

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Vadodara, Gujarat, India

On-site

About Role
We are looking for a skilled Computer Vision Engineer to design and implement cutting-edge CV algorithms and models for real-world applications such as object detection, image classification, video analytics, OCR, and more. You will work on projects involving YOLO, OpenCV, deep learning, and real-time image/video processing. You should be able to apply GenAI models as part of the solution; work could also include, but is not limited to, deep learning, neural networks, chatbots, and image processing.

Experience: 4+ years
Job Location: Vadodara
Office Hours: 9:30 to 7:00

Roles and Responsibilities:
- 4+ years of experience applying AI to practical uses.
- Develop and train computer vision models for tasks such as:
  - Object detection and tracking (YOLO, Faster R-CNN, etc.)
  - Image classification, segmentation, OCR (e.g., PaddleOCR, Tesseract)
  - Face recognition/blurring, anomaly detection, etc.
- Optimize models for performance on edge devices (e.g., NVIDIA Jetson, OpenVINO, TensorRT).
- Process and annotate image/video datasets; apply data augmentation techniques.
- Proficiency in large language models.
- Strong understanding of statistical analysis and machine learning algorithms.
- Hands-on experience implementing machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering.
- Understanding of image processing concepts (thresholding, contour detection, transformations, etc.); a preprocessing sketch follows this listing.
- Experience in model optimization, quantization, or deploying to edge devices (Jetson Nano/Xavier, Coral, etc.).
- Strong programming skills in Python (or C++), with expertise in:
  - OpenCV, NumPy, PyTorch/TensorFlow
  - Computer vision models like YOLOv5/v8, Mask R-CNN, DeepSORT
- Implement and optimize machine learning pipelines and workflows for seamless integration into production systems.
- Hands-on experience with at least one real-time CV application (e.g., surveillance, retail analytics, industrial inspection, AR/VR).
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems that apply across multiple teams.
- Lead the implementation of large language models in AI applications.
- Research and apply cutting-edge AI techniques to enhance system performance.
- Contribute to the development and deployment of AI solutions across various domains.

Education/Qualification (if any certification): Bachelor’s degree in Computer Science, Information Technology, or a related field.

Requirements:
- Design, develop, and deploy ML models for:
  - OCR-based text extraction from scanned documents (PDFs, images)
  - Table and line-item detection in invoices, receipts, and forms
  - Named entity recognition (NER) and information classification
- Evaluate and integrate third-party OCR tools (e.g., Tesseract, Google Vision API, AWS Textract, Azure OCR, PaddleOCR, EasyOCR).
- Develop pre-processing and post-processing pipelines for noisy image/text data.
- Familiarity with video analytics platforms (e.g., DeepStream, Streamlit-based dashboards).
- Experience with MLOps tools (MLflow, ONNX, Triton Inference Server).
- Background in academic CV research or published papers.
- Knowledge of GPU acceleration, CUDA, or hardware integration (cameras, sensors).
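For the image preprocessing concepts listed above (thresholding, noise removal), here is a minimal OpenCV-plus-Tesseract sketch; the file name is a placeholder, and real pipelines usually add deskewing and DPI normalization.

```python
# Minimal sketch: OCR preprocessing with OpenCV, then Tesseract.
# Assumes opencv-python and pytesseract are installed.
import cv2
import pytesseract

img = cv2.imread("scanned_page.png")

gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # drop color channels
gray = cv2.medianBlur(gray, 3)                 # remove salt-and-pepper noise
binary = cv2.adaptiveThreshold(                # handle uneven lighting
    gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
    cv2.THRESH_BINARY, 31, 10,
)

text = pytesseract.image_to_string(binary)
print(text)
```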

Posted 3 weeks ago

Apply

7.0 years

20 - 25 Lacs

India

On-site

Company Overview: ZignaAI is focused on delivering innovative solutions that transform healthcare payment operational processes. We empower payers, providers, and patients with AI-powered software solutions that drive transparency in healthcare payment services. Built-in, intelligence-enabled machine learning algorithms deliver pre-billing payment accuracy solutions and avoid provider abrasion. We differ from traditional payment services solutions by resolving issues at the root: ensuring accurate payments and automating processes, with nudges delivered to billing coders. Our innovative and scalable solutions cover Medicaid, Medicare, and Commercial policies and deliver results in weeks.

Opportunity Overview: Full-time position, in-office. This position requires the candidate to engage critical thinking skills, be intellectually curious, and demonstrate great attention to detail. It requires excellent communication skills, both written and oral, and the ability to work with a diverse group of talent, both in person and globally. This is a hands-on manager role that requires the manager to engage in the work of the team being managed.

Job Summary: We are seeking a highly skilled and technically sound Data Science Team Manager to lead and drive our data science initiatives. This is a hands-on leadership role, ideal for someone with strong experience in designing and architecting data solutions, streamlining workflows, and directly contributing to team deliverables. This is not a traditional people management role; instead, we’re looking for a technologically proficient leader who thrives on building solutions, mentoring others, and aligning data strategies with business goals.

Technical Skills:
- Hands-on experience building and architecting end-to-end NLP, data science, and machine learning solutions using tools such as Python, SQL, and AWS services including EC2, Lambda, Glue, SNS, SQS, SageMaker, EMR, and Athena. Adept at leveraging these technologies to design scalable, production-ready pipelines and intelligent systems.
- Ability to architect the design and deployment of scalable, production-ready machine learning pipelines and intelligent systems.

Required Qualifications:
- 7+ years of professional experience in data science, machine learning, data engineering, or a related technical discipline.
- Proven experience in managing or leading technical teams, with a strong emphasis on mentoring and delivery.
- Lead by example as a hands-on manager, actively engaging in coding, solution design, and problem-solving alongside the team.
- Demonstrated experience in designing and implementing complex data architectures and pipelines.
- Partner with senior leadership to define the organization’s data strategy and ensure alignment with business objectives.
- Develop and enforce data governance frameworks, policies, and procedures to ensure data quality, compliance, and risk mitigation.
- Streamline workflows and optimize processes to improve the efficiency, quality, and output of the data science team.
- Actively mentor and upskill team members, providing knowledge transfer and support for continuous learning and development.
- Foster cross-functional collaboration by working closely with engineering, product, analytics, and business teams to ensure data solutions are aligned with organizational goals and integrated seamlessly across platforms.
- Drive a culture of responsibility, ownership, and accountability within the team.
- Balance multiple concurrent projects while ensuring high-quality outcomes and timely delivery.
- Maintain a deep understanding of current technologies and industry best practices to ensure the team remains innovative and competitive.
- Exceptional analytical, problem-solving, and critical thinking skills.
- Solid understanding of Object-Oriented Programming (OOP) principles, with experience writing clean, modular, and reusable code for scalable data science applications.
- Proficient in Git for version control, including branching strategies, pull requests, and collaborative code reviews.
- Hands-on experience using JIRA (or similar tools) for sprint planning, task tracking, and agile workflow management.
- Strong verbal and written communication skills; capable of interacting effectively with both technical and non-technical stakeholders.
- Self-motivated, proactive, and driven, with a strong sense of ownership and accountability.

Good to Have:
- Knowledge of applying state-of-the-art NLP models such as BERT, GPT-x, sciSpacy, bidirectional LSTM-CNNs, RNNs, and AWS Comprehend Medical for clinical named entity recognition (NER).
- Strong leadership skills.
- Deployment of custom-trained and prebuilt NER models using AWS SageMaker.
- Knowledge of setting up an AWS Textract pipeline for large-scale text processing using AWS SNS, AWS SQS, Lambda, and EC2 (a sketch of the asynchronous Textract flow follows this listing).
- Intellectual curiosity to learn new things.
- ISMS responsibilities should be followed as per company policy.

Job Types: Full-time, Permanent
Pay: ₹2,000,000.00 - ₹2,500,000.00 per year
Benefits: Food provided, health insurance, Provident Fund
Experience:
- Data science, machine learning, data engineering: 7 years (Required)
- Managing or leading technical teams: 5 years (Required)
Work Location: In person
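As a sketch of the "good to have" Textract pipeline above, here is a minimal asynchronous flow: start a job with an SNS completion topic, then fetch the results. Polling keeps the sketch self-contained; production would consume the SQS queue subscribed to the topic instead. All ARNs and bucket names are placeholders.

```python
# Minimal sketch of asynchronous AWS Textract text detection with an SNS
# completion topic.
import time
import boto3

textract = boto3.client("textract", region_name="us-east-1")

job = textract.start_document_text_detection(
    DocumentLocation={"S3Object": {"Bucket": "claims-bucket", "Name": "claim-001.pdf"}},
    NotificationChannel={  # Textract publishes a completion message here
        "SNSTopicArn": "arn:aws:sns:us-east-1:123456789012:textract-done",
        "RoleArn": "arn:aws:iam::123456789012:role/TextractSNSRole",
    },
)

# In production an SQS consumer reacts to the SNS message; polling is used
# here only to keep the example self-contained.
while True:
    resp = textract.get_document_text_detection(JobId=job["JobId"])
    if resp["JobStatus"] in ("SUCCEEDED", "FAILED"):
        break
    time.sleep(5)

lines = [b["Text"] for b in resp["Blocks"] if b["BlockType"] == "LINE"]
print("\n".join(lines))
```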

Posted 3 weeks ago

Apply

10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Greetings from TCS! TCS is hiring for a Solutions Architect role.

Role: Solution Architect - AI/GenAI/ML (Azure/AWS/Google)

Required Technical Skill Set
- Expertise in designing GenAI architectures, including LLM selection, RAG pipelines, vector databases, and integration patterns.
- GenAI frameworks and tools: LangChain, LlamaIndex, Haystack, Hugging Face Transformers, etc.
- Familiarity with prompt engineering, fine-tuning, RLHF, and MLOps workflows.

Desired Experience Range
- Bachelor’s or Master’s in Computer Science, AI/ML, Engineering, or a related field.
- 10-16 years of experience in solution architecture or AI/ML roles.

Location: PAN India

Desired Competencies

Must-Have
- Experience architecting AI solutions on at least one cloud platform:
  - Azure: Azure OpenAI, Azure ML, Cognitive Services, Synapse
  - AWS: Bedrock, SageMaker, Comprehend, Textract, Kendra
  - GCP: Vertex AI, PaLM API, LangChain + BigQuery + Looker
- Hands-on with GenAI frameworks and tools: LangChain, LlamaIndex, Haystack, Hugging Face Transformers, etc.
- Familiarity with prompt engineering, fine-tuning, RLHF, and MLOps workflows.
- Knowledge of cloud-native architecture, REST APIs, containers (Docker, Kubernetes), and CI/CD.
- Experience with data privacy, model safety, bias mitigation, and AI governance principles.

Good-to-Have
- Cloud certifications.
- Experience integrating GenAI into enterprise applications.
- Understanding of vector DBs (e.g., Pinecone, Weaviate, Chroma, Qdrant) and embedding models.
- Familiarity with guardrails for LLMs, model monitoring, and LLMOps platforms.

Responsibility of / Expectations from the Role
- Architect end-to-end Generative AI solutions for enterprise use cases such as agentic solutions, chatbots and copilots, knowledge assistants (e.g., RAG), document summarization/generation/translation, and vision, speech, or code generation models.
- Lead the design and integration of LLM pipelines with cloud-native services (e.g., serverless, containers, APIs).
- Select and fine-tune foundation models (OpenAI, Claude, Mistral, LLaMA, PaLM, etc.) as needed.
- Implement retrieval-augmented generation (RAG) using vector databases and hybrid search (e.g., FAISS, Pinecone, Weaviate); a minimal retrieval sketch follows this listing.
- Design architectures that ensure scalability, security, and governance for GenAI applications.
- Build reference implementations, proofs of concept (PoCs), and reusable solution templates.
- Collaborate with data engineers, MLOps engineers, UI/UX designers, and product teams.
- Stay current with emerging GenAI trends, tools, models, and patterns.
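To make the RAG responsibility above concrete, here is a minimal retrieval sketch with FAISS and sentence-transformers embeddings; the toy corpus stands in for a real document store, and the retrieved passages would feed the LLM prompt in the generation step.

```python
# Minimal sketch of the retrieval side of a RAG pipeline with FAISS.
import faiss
from sentence_transformers import SentenceTransformer

corpus = [
    "Textract extracts text and form data from scanned documents.",
    "Bedrock provides managed access to foundation models on AWS.",
    "FAISS performs fast nearest-neighbour search over dense vectors.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = encoder.encode(corpus, normalize_embeddings=True)

index = faiss.IndexFlatIP(embeddings.shape[1])  # inner product == cosine on unit vectors
index.add(embeddings)

query = encoder.encode(["How do I search document embeddings quickly?"],
                       normalize_embeddings=True)
scores, ids = index.search(query, 2)

# The top passages become the grounding context for the generation step.
for score, i in zip(scores[0], ids[0]):
    print(f"{score:.3f}  {corpus[i]}")
```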

Posted 3 weeks ago

Apply

0 years

0 Lacs

India

On-site

Summary
As a Senior Software Engineer for FINEOS data and digital products, you will design and implement innovative products in AI, ML, and the data platform. You will collaborate with other engineers and architects in FINEOS to deliver data engineering capabilities that integrate AI and ML data products into the core AdminSuite platform. Python, microservices, and data engineering principles in a native AWS stack are the primary technical skills required to be successful in this position.

Responsibilities (other duties may be assigned)
- Product engineering delivery: Translate high-level designs into smaller components for end-to-end solution delivery. Ability to code, and to review peers’ code to enforce good coding practices, sound data structure choices, and efficient methods.
- Product deployment: Well versed in AWS DevOps automation to drive CI/CD pipelines, unit tests, automated integration tests, version management, and promotion strategy across different environments.
- Product maintenance: Manage the current portfolio of AI/ML data products, ensuring timely updates of the underlying AWS components so that products stay on a current stack and remain marketable.

Education and/or Experience
- Senior Python engineer with over seven years of experience successfully developing and deploying Python cloud-based applications and services.
- Demonstrated proficiency in delivering scalable applications, optimizing application performance, and ensuring robust security measures.

Knowledge, Skills and Abilities
- Building microservices and event-based applications in a serverless architecture (a minimal event-driven sketch follows this listing).
- Storing and managing large volumes of data in objects and databases.
- Continuous Integration/Continuous Deployment (CI/CD) pipelines for automated testing and deployment.
- Monitoring and logging tools for application performance and error tracking.
- Knowledge of best practices for securing AWS resources and data.
- Proficient in agile development practices.
- Experience working in large, complex enterprise solutions with cross-geography, cross-time-zone teams.
- Proficient in MS Office applications, such as Word, Excel, PowerPoint, etc.
- Familiar with operating systems and platforms, such as Windows, SuccessFactors, etc.

Technical Skills
- Experience in frameworks and Python libraries such as Flask, Django, Pandas, and NumPy.
- Working with NoSQL databases for high-speed, flexible data storage.
- Containerization for consistent deployment.
- Experience in operationalizing ML models in production or building GenAI applications using Textract, SageMaker, and Bedrock.

Language Skills
Ability to speak the English language proficiently, both verbally and in writing, to collaborate with global teams.

Travel Requirements
This position does not require travel.

Work Environment
The work environment characteristics described here are representative of those an employee encounters while performing the essential functions of this job. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions. The employee works primarily in a home office environment. The home office must be a well-defined work area, separate from normal domestic activity and complete with all essential technology, including, but not limited to, a separate phone, scanner, printer, computer, etc., as required to effectively perform their duties.

Work Requirements
Compliance with all relevant FINEOS Global policies and procedures related to Quality, Security, Safety, Business Continuity, and Environmental systems. Travel and fieldwork, including international travel, may be required; therefore, the employee must possess, or be able to acquire, a valid passport. Must be legally eligible to work in the country in which you are hired.

FINEOS is an Equal Opportunity Employer. FINEOS does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
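As a sketch of the event-based serverless pattern above, here is a minimal AWS Lambda handler that reacts to S3 uploads and starts Textract text detection; the event shape is the standard S3 notification, and the bucket contents are placeholders.

```python
# Minimal sketch of an event-based serverless microservice: a Lambda
# handler triggered by S3 ObjectCreated notifications that starts
# asynchronous Textract text detection for each uploaded document.
import json
import boto3

textract = boto3.client("textract")

def handler(event, context):
    """Entry point invoked by the S3 event notification."""
    results = []
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        job = textract.start_document_text_detection(
            DocumentLocation={"S3Object": {"Bucket": bucket, "Name": key}}
        )
        results.append({"key": key, "jobId": job["JobId"]})
    return {"statusCode": 200, "body": json.dumps(results)}
```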

Posted 3 weeks ago

Apply

1.0 years

0 Lacs

New Delhi, Delhi, India

On-site

About Us
HexaJar is a new-age technology company working on powerful AI products for businesses. We’re looking for smart, curious, and driven freshers who want to learn, build, and grow in the world of AI, automation, and software development.

Tech Stack

Backend
- Node.js or Python (FastAPI / Django REST)
- PostgreSQL (structured data), MongoDB (semi-structured)
- AWS S3 or GCP Cloud Storage for files
- Docker, EC2 (Kubernetes later)

AI & OCR
- Google Vision API, AWS Textract, PaddleOCR
- Hugging Face for document understanding
- OpenAI GPT-4 / Claude for NL queries and reporting
- Fuzzy matching (Levenshtein), anomaly detection (a fuzzy-matching sketch follows this listing)

Frontend
- React.js / Next.js
- Tailwind CSS, ShadCN UI
- Recharts, Tabulator, PDFKit

Integrations
- WhatsApp API (Twilio / Gupshup)
- Email via SendGrid / AWS SES
- CSV imports (GST Portal GSTR-2B, bank statements)

Infra & Security
- AWS / GCP, basic VPC + IAM
- Auth: JWT + role-based permissions
- AES-256 data encryption, HTTPS/TLS

You’re a Good Fit If You...
- Are a fresher (0-1 yr experience) or a strong college project contributor
- Know the basics of Python or Node.js + React
- Are passionate about AI, automation, and financial systems
- Are comfortable learning new tools and building from scratch
- Are a strong team player with good skills

Company: HexaJar
Location: On-site
Experience: 0-1 Years
Apply at: info@hexajar.com
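For the fuzzy matching line above, here is a minimal pure-stdlib Levenshtein sketch, e.g. for reconciling vendor names between invoices and bank statements; the sample names are invented.

```python
# Minimal sketch of fuzzy matching via Levenshtein edit distance (DP).
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution
            ))
        prev = curr
    return prev[-1]

def best_match(name: str, candidates: list[str]) -> str:
    """Pick the candidate with the smallest edit distance to name."""
    return min(candidates, key=lambda c: levenshtein(name.lower(), c.lower()))

print(best_match("Acme Pvt Ltd", ["ACME PRIVATE LTD", "Apex Ltd", "Acme Pvt. Ltd."]))
```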

Posted 3 weeks ago

Apply

3.0 - 5.0 years

0 Lacs

Gurgaon

On-site

Job Title: RPA Developer (UiPath / Power Automate)
Location: Gurugram
Experience: 3-5 Years

Job Summary: We are seeking a skilled and detail-oriented RPA Developer with 3-5 years of experience in automation development, particularly using UiPath and Power Automate. The ideal candidate will have hands-on experience in automating Oracle ERP processes and document processing workflows. A strong understanding of OCR technologies, file management automation, and enterprise-grade RPA deployment is essential.

Key Responsibilities:
- Design, develop, and deploy RPA solutions using UiPath Studio, Orchestrator, and other UiPath components.
- Build automation workflows using Power Automate for business process optimization.
- Automate Oracle ERP processes, especially invoice processing and document workflows.
- Implement file management and human task automation solutions.
- Automate Excel-based tasks to ensure data accuracy and operational efficiency.
- Develop advanced web automation using OCR, image search, Textract, keystrokes, and mouse-click automation.
- Configure and maintain UiPath environments across development, testing, and production.
- Extract data from handwritten documents, forms, tables, invoices, receipts, and scanned images using tools like Ephesoft, ABBYY FlexiCapture, and Microsoft Form Recognizer.
- Collaborate with cross-functional teams to identify automation opportunities and deliver scalable solutions.
- Ensure high-quality documentation and adherence to best practices in RPA development.

Required Skills & Qualifications:
- 3-5 years of hands-on experience in RPA development using UiPath and Power Automate.
- Proficiency in Oracle ERP systems, especially in invoice and document processing.
- Strong knowledge of document processing tools and OCR technologies.
- Experience with Ephesoft, ABBYY FlexiCapture, and Microsoft Form Recognizer.
- Expertise in Excel automation and web-based automation techniques.
- Solid understanding of RPA deployment, monitoring, and maintenance.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration abilities.

Preferred Qualifications:
- UiPath RPA Developer Certification.
- Experience in enterprise-level RPA deployments.
- Familiarity with Agile/Scrum methodologies.

Globally, our policy is to recruit individuals from wide and diverse backgrounds. However, certain positions require access to controlled goods and technologies subject to the International Traffic in Arms Regulations (ITAR) or Export Administration Regulations (EAR). Applicants for these positions may need to be “U.S. persons.” “U.S. persons” are generally defined as U.S. citizens, noncitizen nationals, lawful permanent residents (or green card holders), individuals granted asylum, and individuals admitted as refugees.

MKS Inc. and its affiliates and subsidiaries (“MKS”) is an affirmative action and equal opportunity employer: diverse candidates are encouraged to apply. We win as a team and are committed to recruiting and hiring qualified applicants regardless of race, color, national origin, sex (including pregnancy and pregnancy-related conditions), religion, age, ancestry, physical or mental disability or handicap, marital status, membership in the uniformed services, veteran status, sexual orientation, gender identity or expression, genetic information, or any other category protected by applicable law. Hiring decisions are based on merit, qualifications and business needs.
We conduct background checks and drug screens, in accordance with applicable law and company policies. MKS is generally only hiring candidates who reside in states where we are registered to do business. MKS is committed to working with and providing reasonable accommodations to qualified individuals with disabilities. If you need a reasonable accommodation during the application or interview process due to a disability, please contact us at: accommodationsatMKS@mksinst.com . If applying for a specific job, please include the requisition number (ex: RXXXX), the title and location of the role

Posted 4 weeks ago

Apply

17.0 years

0 Lacs

Ambattur, Tamil Nadu, India

On-site

About Us
Cognet HRO is a leading Business Process Outsourcing services company providing a full range of HR and F&A services to US-based clients. With over 17 years of rich experience in Payroll, Tax, Benefits & HR Administration, Finance & Accounting, and Sales Support, CogNet has been serving the PEO, ASO, HRO, and HR Technology spaces since our inception. We help organizations extend their capabilities through simplified implementation, productivity performance measured to the minute, easy collaboration, and transparent pricing built around real-time utilization. Our extensive expertise, data library, and workflow development tools accelerate the client implementation process. We have developed deep expertise in process and technology across our services, which allows us to rapidly deliver value to our clients. Cognet has been delivering outsourced solutions to clients around the globe.

Job Description: Software Engineer - AI Developer (Amazon Bedrock / Agentic AI)

Overview
We are looking for a talented Amazon Bedrock developer to design, build, and deploy AI-powered applications using AWS Bedrock, Textract, Guardrails, and Agents. The ideal candidate will have hands-on experience integrating foundation models with document processing, compliance frameworks, and autonomous agents to deliver secure and intelligent enterprise solutions. This role is perfect for someone passionate about building responsible AI applications in the cloud while ensuring scalability, performance, and regulatory compliance.

Preferred Qualifications
- Experience with orchestration frameworks such as LangChain or Haystack.
- Knowledge of vector databases (OpenSearch, Pinecone, Weaviate) for retrieval-augmented generation (RAG).
- Previous experience building AI copilots, assistants, or chatbots.
- AWS certifications (e.g., Machine Learning Specialty, Solutions Architect Associate/Professional).
- Background in document intelligence or NLP applications.

Requirements
- AI Application Development: Build and optimize generative AI applications leveraging Amazon Bedrock foundation models for text, conversation, and semantic search.
- Document Intelligence: Integrate Amazon Textract with Bedrock to extract structured data from unstructured documents for downstream processing.
- Responsible AI: Implement Amazon Bedrock Guardrails to enforce content moderation, compliance, and safety policies across generative AI outputs (a guardrailed-inference sketch follows this listing).
- Autonomous Agents: Design and deploy Bedrock Agents to handle multi-step reasoning, orchestration, and integration with enterprise systems.
- API & Workflow Integration: Connect Bedrock applications with internal APIs, third-party services, and data pipelines for real-time insights.
- Scalability & Optimization: Monitor performance, optimize costs, and ensure high availability of Bedrock-powered services.
- Security & Compliance: Apply AWS best practices for IAM, encryption, and monitoring to safeguard sensitive data and AI-driven workflows.
- Cross-functional Collaboration: Work with data scientists, ML engineers, and DevOps teams to deliver end-to-end solutions.
- Continuous Innovation: Stay current with new AWS AI/ML services and emerging best practices in generative AI.
- Proven experience with AWS Bedrock (foundation models, Guardrails, Agents).
- Hands-on expertise with Amazon Textract for document extraction and processing.
- Proficiency in C#, Python, or Node.js for backend and AI workflow development.
- Strong knowledge of AWS services (Lambda, API Gateway, S3, DynamoDB, CloudWatch).
- Experience in prompt engineering and model orchestration.
- Familiarity with REST APIs and event-driven architectures.
- Understanding of AI security and compliance practices.

Benefits
- Work with cutting-edge AWS generative AI technologies (Bedrock, Guardrails, Agents, Textract).
- Drive real-world AI adoption in enterprise applications.
- Competitive salary, flexible working arrangements, and career growth in AI/ML.
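For the Guardrails requirement above, a minimal sketch of guardrailed inference through the Bedrock Runtime Converse API might look like this; the guardrail ID/version and model ID are placeholders, and the guardrail itself is configured separately in Bedrock.

```python
# Minimal sketch of guardrailed inference via Amazon Bedrock's Converse API.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

resp = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    messages=[{"role": "user",
               "content": [{"text": "Summarize this employee benefits query."}]}],
    guardrailConfig={
        "guardrailIdentifier": "gr-example123",  # placeholder guardrail ID
        "guardrailVersion": "1",
        "trace": "enabled",  # include trace info on filtered content
    },
)

# If the guardrail intervened, stopReason reports it instead of a normal stop.
print(resp["stopReason"])
print(resp["output"]["message"]["content"][0]["text"])
```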

Posted 4 weeks ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems.

PwC US - Acceleration Center is seeking a highly skilled and experienced GenAI Data Scientist to join our team at Senior Associate level. As a GenAI Data Scientist, you will play a critical role in developing and implementing machine learning models and algorithms for our GenAI projects. The ideal candidate should have a strong background in data science, with a focus on GenAI technologies, and possess a solid understanding of statistical analysis, machine learning, data visualization, and application programming.

Years of Experience: Candidates with 4+ years of hands-on experience

Responsibilities
- Collaborate with cross-functional teams to understand business requirements and identify opportunities for applying GenAI technologies.
- Develop and implement machine learning models and algorithms for GenAI projects.
- Perform data cleaning, preprocessing, and feature engineering to prepare data for analysis.
- Collaborate with data engineers to ensure efficient data processing and integration into machine learning pipelines.
- Validate and evaluate model performance using appropriate metrics and techniques.
- Develop and deploy production-ready machine learning applications and solutions.
- Utilize object-oriented programming skills to build robust and scalable software components.
- Utilize Kubernetes for container orchestration and deployment.
- Design and build chatbots using GenAI technologies.
- Communicate findings and insights to stakeholders through data visualizations, reports, and presentations.
- Stay up-to-date with the latest advancements in GenAI technologies and recommend innovative solutions to enhance data science processes.

Requirements
- 3-5 years of relevant technical/technology experience, with a focus on GenAI projects.
- Strong programming skills in languages such as Python, R, or Scala.
- Proficiency in machine learning libraries and frameworks such as TensorFlow, PyTorch, or scikit-learn.
- Experience with data preprocessing, feature engineering, and data wrangling techniques.
- Solid understanding of statistical analysis, hypothesis testing, and experimental design.
- Familiarity with cloud computing platforms such as AWS, Azure, or Google Cloud.
- Knowledge of data visualization tools and techniques.
- Strong problem-solving and analytical skills.
- Excellent communication and collaboration abilities.
- Ability to work in a fast-paced and dynamic environment.

Preferred Qualifications
- Experience with object-oriented programming languages such as Java, C++, or C#.
- Experience with developing and deploying machine learning applications in production environments.
- Understanding of data privacy and compliance regulations.
- Relevant certifications in data science or GenAI technologies.

Nice To Have Skills
- Experience with Azure AI Search, Azure Doc Intelligence, Azure OpenAI, AWS Textract, AWS OpenSearch, AWS Bedrock.
- Familiarity with LLM-backed agent frameworks such as AutoGen, LangChain, Semantic Kernel, etc.
- Experience in chatbot design and development.

Professional And Educational Background
Any graduate / BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master’s Degree / MBA

Posted 1 month ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. Those in artificial intelligence and machine learning at PwC will focus on developing and implementing advanced AI and ML solutions to drive innovation and enhance business processes. Your work will involve designing and optimising algorithms, models, and systems to enable intelligent decision-making and automation. Focused on relationships, you are building meaningful client connections, and learning how to manage and inspire others. Navigating increasingly complex situations, you are growing your personal brand, deepening technical expertise and awareness of your strengths. You are expected to anticipate the needs of your teams and clients, and to deliver quality. Embracing increased ambiguity, you are comfortable when the path forward isn’t clear, you ask questions, and you use these moments as opportunities to grow. Skills Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to: Respond effectively to the diverse perspectives, needs, and feelings of others. Use a broad range of tools, methodologies and techniques to generate new ideas and solve problems. Use critical thinking to break down complex concepts. Understand the broader objectives of your project or role and how your work fits into the overall strategy. Develop a deeper understanding of the business context and how it is changing. Use reflection to develop self awareness, enhance strengths and address development areas. Interpret data to inform insights and recommendations. Uphold and reinforce professional and technical standards (e.g. refer to specific PwC tax and audit guidance), the Firm's code of conduct, and independence requirements. Job Description: GenAI Data Scientist - Senior Associate (PwC US AC) PwC US - Acceleration Center is seeking a highly skilled and experienced GenAI Data Scientist to join our team at Senior Associate level. As a GenAI Data Scientist, you will play a critical role in developing and implementing machine learning models and algorithms for our GenAI projects. The ideal candidate should have a strong background in data science, with a focus on GenAI technologies, and possess a solid understanding of statistical analysis, machine learning, data visualization, and application programming. Responsibilities Collaborate with cross-functional teams to understand business requirements and identify opportunities for applying GenAI technologies. Develop and implement machine learning models and algorithms for GenAI projects. Perform data cleaning, preprocessing, and feature engineering to prepare data for analysis. Collaborate with data engineers to ensure efficient data processing and integration into machine learning pipelines. Validate and evaluate model performance using appropriate metrics and techniques. Develop and deploy production-ready machine learning applications and solutions. Utilize object-oriented programming skills to build robust and scalable software components. Utilize Kubernetes for container orchestration and deployment. Design and build chatbots using GenAI technologies. Communicate findings and insights to stakeholders through data visualizations, reports, and presentations. 
Stay up-to-date with the latest advancements in GenAI technologies and recommend innovative solutions to enhance data science processes.

Requirements
Bachelor's or Master's degree in Data Science, Computer Science, Statistics, or a related field.
3-5 years of relevant technical/technology experience, with a focus on GenAI projects.
Strong programming skills in languages such as Python, R, or Scala.
Proficiency in machine learning libraries and frameworks such as TensorFlow, PyTorch, or scikit-learn.
Experience with data preprocessing, feature engineering, and data wrangling techniques.
Solid understanding of statistical analysis, hypothesis testing, and experimental design.
Familiarity with cloud computing platforms such as AWS, Azure, or Google Cloud.
Knowledge of data visualization tools and techniques.
Strong problem-solving and analytical skills.
Excellent communication and collaboration abilities.
Ability to work in a fast-paced and dynamic environment.

Preferred Qualifications
Experience with object-oriented programming languages such as Java, C++, or C#.
Experience with developing and deploying machine learning applications in production environments.
Understanding of data privacy and compliance regulations.
Relevant certifications in data science or GenAI technologies.

Nice To Have Skills
Experience with Azure AI Search, Azure Document Intelligence, Azure OpenAI, AWS Textract, AWS OpenSearch, AWS Bedrock.
Familiarity with LLM-backed agent frameworks such as AutoGen, LangChain, Semantic Kernel, etc.
Experience in chatbot design and development.

If you are passionate about GenAI technologies and have a proven track record in data science, join PwC US - Acceleration Center and be part of a dynamic team that is shaping the future of GenAI solutions. We offer a collaborative and innovative work environment where you can make a significant impact.
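Postings like this one lean heavily on AWS Textract for document work. For orientation, here is a minimal sketch of synchronous text detection with boto3, assuming configured AWS credentials; the bucket, key, and region names are hypothetical:

```python
# Minimal sketch: synchronous OCR on a single-page document stored in S3.
# Assumes configured AWS credentials; bucket, key, and region are hypothetical.
import boto3

textract = boto3.client("textract", region_name="ap-south-1")

response = textract.detect_document_text(
    Document={"S3Object": {"Bucket": "example-docs", "Name": "kyc-form-001.png"}}
)

# Each LINE block is one detected line of text with a confidence score.
for block in response["Blocks"]:
    if block["BlockType"] == "LINE":
        print(f"{block['Confidence']:.1f}%  {block['Text']}")
```

The synchronous API suits single-page images; multi-page PDFs typically go through the asynchronous start/get job APIs instead.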

Posted 1 month ago

Apply

4.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems.

PwC US - Acceleration Center is seeking a highly skilled and experienced GenAI Data Scientist to join our team at the Senior Associate level. As a GenAI Data Scientist, you will play a critical role in developing and implementing machine learning models and algorithms for our GenAI projects. The ideal candidate should have a strong background in data science, with a focus on GenAI technologies, and possess a solid understanding of statistical analysis, machine learning, data visualization, and application programming.

Years of Experience: Candidates with 4+ years of hands-on experience

Responsibilities
Collaborate with cross-functional teams to understand business requirements and identify opportunities for applying GenAI technologies.
Develop and implement machine learning models and algorithms for GenAI projects.
Perform data cleaning, preprocessing, and feature engineering to prepare data for analysis.
Collaborate with data engineers to ensure efficient data processing and integration into machine learning pipelines.
Validate and evaluate model performance using appropriate metrics and techniques.
Develop and deploy production-ready machine learning applications and solutions.
Utilize object-oriented programming skills to build robust and scalable software components.
Utilize Kubernetes for container orchestration and deployment.
Design and build chatbots using GenAI technologies.
Communicate findings and insights to stakeholders through data visualizations, reports, and presentations.
Stay up-to-date with the latest advancements in GenAI technologies and recommend innovative solutions to enhance data science processes.

Requirements
3-5 years of relevant technical/technology experience, with a focus on GenAI projects.
Strong programming skills in languages such as Python, R, or Scala.
Proficiency in machine learning libraries and frameworks such as TensorFlow, PyTorch, or scikit-learn.
Experience with data preprocessing, feature engineering, and data wrangling techniques.
Solid understanding of statistical analysis, hypothesis testing, and experimental design.
Familiarity with cloud computing platforms such as AWS, Azure, or Google Cloud.
Knowledge of data visualization tools and techniques.
Strong problem-solving and analytical skills.
Excellent communication and collaboration abilities.
Ability to work in a fast-paced and dynamic environment.

Preferred Qualifications
Experience with object-oriented programming languages such as Java, C++, or C#.
Experience with developing and deploying machine learning applications in production environments.
Understanding of data privacy and compliance regulations.
Relevant certifications in data science or GenAI technologies.

Nice To Have Skills
Experience with Azure AI Search, Azure Document Intelligence, Azure OpenAI, AWS Textract, AWS OpenSearch, AWS Bedrock.
Familiarity with LLM-backed agent frameworks such as AutoGen, LangChain, Semantic Kernel, etc.
Experience in chatbot design and development.
Professional And Educational Background
Any graduate / BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master’s Degree / MBA
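One responsibility above is validating and evaluating model performance with appropriate metrics. A minimal, generic sketch of that step in scikit-learn, using synthetic data in place of any real feature matrix:

```python
# Minimal sketch: train/test split plus standard classification metrics.
# Synthetic data stands in for a real feature matrix.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)

# Report per-class precision/recall/F1, plus a threshold-free AUC.
print(classification_report(y_test, model.predict(X_test)))
print("ROC AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```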

Posted 1 month ago

Apply

4.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems.

Years of Experience: Candidates with 4+ years of hands-on experience

PwC US - Acceleration Center is seeking a highly skilled and experienced GenAI Data Scientist to join our team at the Senior Associate level. As a GenAI Data Scientist, you will play a critical role in developing and implementing machine learning models and algorithms for our GenAI projects. The ideal candidate should have a strong background in data science, with a focus on GenAI technologies, and possess a solid understanding of statistical analysis, machine learning, data visualization, and application programming.

Responsibilities
Collaborate with cross-functional teams to understand business requirements and identify opportunities for applying GenAI technologies.
Develop and implement machine learning models and algorithms for GenAI projects.
Perform data cleaning, preprocessing, and feature engineering to prepare data for analysis.
Collaborate with data engineers to ensure efficient data processing and integration into machine learning pipelines.
Validate and evaluate model performance using appropriate metrics and techniques.
Develop and deploy production-ready machine learning applications and solutions.
Utilize object-oriented programming skills to build robust and scalable software components.
Utilize Kubernetes for container orchestration and deployment.
Design and build chatbots using GenAI technologies.
Communicate findings and insights to stakeholders through data visualizations, reports, and presentations.
Stay up-to-date with the latest advancements in GenAI technologies and recommend innovative solutions to enhance data science processes.

Requirements
Bachelor's or Master's degree in Data Science, Computer Science, Statistics, or a related field.
3-5 years of relevant technical/technology experience, with a focus on GenAI projects.
Strong programming skills in languages such as Python, R, or Scala.
Proficiency in machine learning libraries and frameworks such as TensorFlow, PyTorch, or scikit-learn.
Experience with data preprocessing, feature engineering, and data wrangling techniques.
Solid understanding of statistical analysis, hypothesis testing, and experimental design.
Familiarity with cloud computing platforms such as AWS, Azure, or Google Cloud.
Knowledge of data visualization tools and techniques.
Strong problem-solving and analytical skills.
Excellent communication and collaboration abilities.
Ability to work in a fast-paced and dynamic environment.

Preferred Qualifications
Experience with object-oriented programming languages such as Java, C++, or C#.
Experience with developing and deploying machine learning applications in production environments.
Understanding of data privacy and compliance regulations.
Relevant certifications in data science or GenAI technologies.

Nice To Have Skills
Experience with Azure AI Search, Azure Document Intelligence, Azure OpenAI, AWS Textract, AWS OpenSearch, AWS Bedrock.
Familiarity with LLM-backed agent frameworks such as AutoGen, LangChain, Semantic Kernel, etc.
Experience in chatbot design and development.

Professional And Educational Background
BE / B.Tech / MCA / M.Sc / M.E / M.Tech / MBA
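This listing pairs Azure OpenAI with chatbot design. A hedged sketch of a single chatbot turn against an Azure OpenAI deployment, assuming the openai v1 Python SDK; the endpoint, deployment name, and API version below are placeholders, not values from the posting:

```python
# Minimal sketch of one chatbot turn against an Azure OpenAI deployment.
# Endpoint, deployment name, and API version are placeholders; requires openai>=1.x.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://example-resource.openai.azure.com",  # hypothetical
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="gpt-4o-deployment",  # the *deployment* name, not the model family
    messages=[
        {"role": "system", "content": "You answer questions about loan documents."},
        {"role": "user", "content": "Summarize the key fields on a KYC form."},
    ],
)

print(response.choices[0].message.content)
```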

Posted 1 month ago

Apply

3.0 years

0 Lacs

India

Remote

AWS Data Engineer
Location: Remote (India)
Experience: 3+ Years
Employment Type: Full-Time

About the Role:
We are seeking a talented AWS Data Engineer with at least 3 years of hands-on experience in building and managing data pipelines using AWS services. This role involves working with large-scale data, integrating multiple data sources (including sensor/IoT data), and enabling efficient, secure, and analytics-ready solutions. Experience in the energy industry or working with time-series/sensor data is a strong plus.

Key Responsibilities:
• Build and maintain scalable ETL/ELT data pipelines using AWS Glue, Redshift, Lambda, EMR, S3, and Athena
• Process and integrate structured and unstructured data, including sensor/IoT and real-time streams
• Optimize pipeline performance and ensure reliability and fault tolerance
• Collaborate with cross-functional teams including data scientists and analysts
• Perform data transformations using Python, Pandas, and SQL
• Maintain data integrity, quality, and security across the platform
• Use Terraform and CI/CD tools (e.g., Azure DevOps) for infrastructure and deployment automation
• Support and monitor pipeline workflows, troubleshoot issues, and implement fixes
• Contribute to the adoption of emerging tools like AWS Bedrock, Textract, Rekognition, and GenAI solutions

Required Skills and Qualifications:
• Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field
• 3+ years of experience in data engineering using AWS
• Strong skills in:
  o AWS Glue, Redshift, S3, Lambda, EMR, Athena
  o Python, Pandas, SQL
  o RDS, Postgres, SAP HANA
• Solid understanding of data modeling, warehousing, and pipeline orchestration
• Experience with version control (Git) and infrastructure as code (Terraform)

Preferred Skills:
• Experience working with energy sector data or IoT/sensor-based data
• Exposure to machine learning tools and frameworks (e.g., SageMaker, TensorFlow, scikit-learn)
• Familiarity with big data technologies like Apache Spark and Kafka
• Experience with data visualization tools (Tableau, Power BI, AWS QuickSight)
• Awareness of data governance and catalog tools such as AWS Glue Data Quality, Collibra, and AWS Glue DataBrew
• AWS certifications (Data Analytics, Solutions Architect)
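The core loop of the role above is moving data between S3 and analytics-ready formats with Pandas. A minimal sketch of one such ETL hop, assuming boto3, pandas, and pyarrow are installed; the bucket names, keys, and columns are hypothetical:

```python
# Minimal sketch of one ETL hop: CSV in S3 -> cleaned DataFrame -> Parquet in S3.
# Bucket, keys, and column names are hypothetical; requires boto3, pandas, pyarrow.
import io

import boto3
import pandas as pd

s3 = boto3.client("s3")

# Extract: read raw sensor readings from the landing bucket.
raw = s3.get_object(Bucket="example-raw", Key="sensors/2024-06-01.csv")
df = pd.read_csv(raw["Body"], parse_dates=["timestamp"])

# Transform: drop incomplete rows and add a smoothed derived column.
df = df.dropna(subset=["reading"])
df["reading_smoothed"] = df["reading"].rolling(window=5, min_periods=1).mean()

# Load: write columnar, partition-style output for Athena/Glue to pick up.
buf = io.BytesIO()
df.to_parquet(buf, index=False)
s3.put_object(
    Bucket="example-curated",
    Key="sensors/dt=2024-06-01/part-0.parquet",
    Body=buf.getvalue(),
)
```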

Posted 1 month ago

Apply

10.0 years

0 Lacs

Delhi, India

On-site

Greetings from TCS!!! Come and join us for an exciting career with TCS!!!

Role: Solution Architect - AI/GenAI/ML
Desired Experience Range: 10 - 16 years
Location: PAN INDIA

Must-Have Experience and Skills:
Experience architecting AI solutions on at least one cloud platform:
Azure: Azure OpenAI, Azure ML, Cognitive Services, Synapse
AWS: Bedrock, SageMaker, Comprehend, Textract, Kendra
GCP: Vertex AI, PaLM API, LangChain + BigQuery + Looker
Hands-on with GenAI frameworks and tools: LangChain, LlamaIndex, Haystack, Hugging Face Transformers, etc.
Familiarity with prompt engineering, fine-tuning, RLHF, and MLOps workflows.
Knowledge of cloud-native architecture, REST APIs, containers (Docker, Kubernetes), and CI/CD.
Experience with data privacy, model safety, bias mitigation, and AI governance principles.
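The must-have list names Hugging Face Transformers among the GenAI toolkits. A minimal sketch of local text generation with the transformers pipeline API; the model choice is illustrative only:

```python
# Minimal sketch: local text generation with the transformers pipeline API.
# Model choice is illustrative; any compatible causal LM on the Hub works.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")

prompt = "A solution architect designing a document-extraction pipeline should first"
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)

print(outputs[0]["generated_text"])
```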

Posted 1 month ago

Apply

10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Greetings from TCS!!! Come and join us for an exciting career with TCS!!!

Role: Solution Architect - AI/GenAI/ML
Desired Experience Range: 10 - 16 years
Location: PAN INDIA

Must-Have Experience and Skills:
Experience architecting AI solutions on at least one cloud platform:
Azure: Azure OpenAI, Azure ML, Cognitive Services, Synapse
AWS: Bedrock, SageMaker, Comprehend, Textract, Kendra
GCP: Vertex AI, PaLM API, LangChain + BigQuery + Looker
Hands-on with GenAI frameworks and tools: LangChain, LlamaIndex, Haystack, Hugging Face Transformers, etc.
Familiarity with prompt engineering, fine-tuning, RLHF, and MLOps workflows.
Knowledge of cloud-native architecture, REST APIs, containers (Docker, Kubernetes), and CI/CD.
Experience with data privacy, model safety, bias mitigation, and AI governance principles.
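The same must-have list also covers prompt engineering with LangChain. A hedged sketch of a reusable prompt template, assuming langchain-core's ChatPromptTemplate; the LangChain API evolves quickly, so treat this as indicative rather than definitive:

```python
# Minimal sketch: a reusable chat prompt template with langchain-core.
# Requires the langchain-core package; uses its (role, template) message tuples.
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a solution architect. Answer in three bullet points."),
        ("human", "How would you design {workload} on {cloud}?"),
    ]
)

# format_messages fills the placeholders and returns ready-to-send chat messages.
messages = prompt.format_messages(workload="a document-extraction pipeline", cloud="AWS")
for message in messages:
    print(f"{message.type}: {message.content}")
```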

Posted 1 month ago

Apply

3.0 - 5.0 years

0 Lacs

Gurgaon

On-site

Job Title: RPA Developer (UiPath / Power Automate) – 3-5 Years Experience
Location: Gurugram
Experience: 3–5 Years

Job Summary:
We are seeking a skilled and detail-oriented RPA Developer with 3–5 years of experience in automation development, particularly using UiPath and Power Automate. The ideal candidate will have hands-on experience in automating Oracle ERP processes and document processing workflows. A strong understanding of OCR technologies, file management automation, and enterprise-grade RPA deployment is essential.

Key Responsibilities:
Design, develop, and deploy RPA solutions using UiPath Studio, Orchestrator, and other UiPath components.
Build automation workflows using Power Automate for business process optimization.
Automate Oracle ERP processes, especially invoice processing and document workflows.
Implement file management and human task automation solutions.
Automate Excel-based tasks to ensure data accuracy and operational efficiency.
Develop advanced web automation using OCR, image search, Textract, keystrokes, and mouse click automation (see the sketch after this posting).
Configure and maintain UiPath environments across development, testing, and production.
Extract data from handwritten documents, forms, tables, invoices, receipts, and scanned images using tools like Ephesoft, ABBYY FlexiCapture, and Microsoft Form Recognizer.
Collaborate with cross-functional teams to identify automation opportunities and deliver scalable solutions.
Ensure high-quality documentation and adherence to best practices in RPA development.

Required Skills & Qualifications:
3–5 years of hands-on experience in RPA development using UiPath and Power Automate.
Proficiency in Oracle ERP systems, especially in invoice and document processing.
Strong knowledge of document processing tools and OCR technologies.
Experience with Ephesoft, ABBYY FlexiCapture, and Microsoft Form Recognizer.
Expertise in Excel automation and web-based automation techniques.
Solid understanding of RPA deployment, monitoring, and maintenance.
Excellent problem-solving skills and attention to detail.
Strong communication and collaboration abilities.

Preferred Qualifications:
UiPath RPA Developer Certification.
Experience in enterprise-level RPA deployments.
Familiarity with Agile/Scrum methodologies.

Globally, our policy is to recruit individuals from wide and diverse backgrounds. However, certain positions require access to controlled goods and technologies subject to the International Traffic in Arms Regulations (ITAR) or Export Administration Regulations (EAR). Applicants for these positions may need to be “U.S. persons.” “U.S. persons” are generally defined as U.S. citizens, noncitizen nationals, lawful permanent residents (or, green card holders), individuals granted asylum, and individuals admitted as refugees. MKS Instruments, Inc. and its affiliates and subsidiaries (“MKS”) is an affirmative action and equal opportunity employer; diverse candidates are encouraged to apply. We win as a team and are committed to recruiting and hiring qualified applicants regardless of race, color, national origin, sex (including pregnancy and pregnancy-related conditions), religion, age, ancestry, physical or mental disability or handicap, marital status, membership in the uniformed services, veteran status, sexual orientation, gender identity or expression, genetic information, or any other category protected by applicable law. Hiring decisions are based on merit, qualifications and business needs.
We conduct background checks and drug screens, in accordance with applicable law and company policies. MKS is generally only hiring candidates who reside in states where we are registered to do business. MKS is committed to working with and providing reasonable accommodations to qualified individuals with disabilities. If you need a reasonable accommodation during the application or interview process due to a disability, please contact us at: accommodationsatMKS@mksinst.com. If applying for a specific job, please include the requisition number (ex: RXXXX), the title, and the location of the role.
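Returning to the invoice-processing and Textract duties in this posting: a hedged sketch of expense-field extraction with Amazon Textract's AnalyzeExpense API, assuming boto3 and configured credentials; the bucket and file names are hypothetical:

```python
# Minimal sketch: pull summary fields (vendor, total, dates) from a scanned
# invoice with Amazon Textract's AnalyzeExpense API. Bucket/key are hypothetical.
import boto3

textract = boto3.client("textract")

response = textract.analyze_expense(
    Document={"S3Object": {"Bucket": "example-invoices", "Name": "scan-0042.png"}}
)

# Each summary field pairs a detected type (e.g. TOTAL, INVOICE_RECEIPT_DATE)
# with the value Textract read for it.
for doc in response["ExpenseDocuments"]:
    for field in doc["SummaryFields"]:
        ftype = field.get("Type", {}).get("Text", "UNKNOWN")
        value = field.get("ValueDetection", {}).get("Text", "")
        print(f"{ftype}: {value}")
```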

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.
