0.0 years
0 - 0 Lacs
delhi, delhi
On-site
Hello,

We are looking for AI/ML Engineers / Interns who can handle the end-to-end data-to-AI pipeline.

Responsibilities:
- Convert scanned legal PDFs into clean, structured text using OCR tools (Tesseract, PaddleOCR, DocTR).
- Perform data cleaning, preprocessing, and chunking (300–600 tokens with overlap, section-wise).
- Generate embeddings and build vector DB indexes (FAISS / Qdrant / Weaviate).
- Implement RAG pipelines (LangChain / LlamaIndex) to connect legal data with GPT models (OpenAI GPT-4o Mini / GPT-4.1 Mini).
- Ensure source-grounded answers with Act/Section citations and prevent hallucinations.
- Evaluate system performance (Recall@k, latency, faithfulness, hallucination rate).
- Work with legal researchers to align AI output with real law practices.

Required Skills:
- Strong Python (Pandas, regex, JSON handling).
- Experience with OCR tools (Tesseract, PaddleOCR, pdfplumber, PyMuPDF).
- Knowledge of NLP basics (tokenization, embeddings, transformers).
- Hands-on with vector DBs (FAISS, Qdrant, Weaviate).
- Familiarity with LangChain / LlamaIndex for RAG.
- OpenAI API integration (prompting, structured outputs).
- Basic knowledge of Git and Docker for deployment.

Location: NCR/Delhi.
Job Type: Full-time
Pay: ₹50,000.00 - ₹90,000.00 per month
Work Location: In person
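The chunking step named in the listing above (300–600 tokens with overlap) can be sketched roughly as follows; the exact size and overlap values below are illustrative assumptions, not figures from the posting:

```python
def chunk_tokens(tokens, size=500, overlap=50):
    """Split a token list into fixed-size chunks that share `overlap` tokens."""
    step = size - overlap
    # Stop once the remaining tail is already covered by the previous chunk.
    return [tokens[i:i + size] for i in range(0, max(len(tokens) - overlap, 1), step)]
```

In a real pipeline the tokens would come from the embedding model's own tokenizer, and chunk boundaries would additionally be snapped to section breaks ("section-wise") rather than cut mid-sentence.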
Posted 1 week ago
3.0 years
0 Lacs
india
On-site
Note: Please do not apply if your salary expectations are higher than the stated salary range or if you have less than 3 years of experience. If you have worked in the travel industry on hotel, car rental, or ferry booking before, the package is negotiable.

Company Description:
Our company has been promoting Greece for the last 25 years through travel sites visited from all around the world, with 10 million visitors per year, such as www.greeka.com and www.ferriesingreece.com. Through these websites, we provide a range of travel services for a seamless holiday experience, such as online car rental reservations, ferry tickets, transfers, and tours.

Role Description:
We are seeking a highly skilled Artificial Intelligence / Machine Learning Engineer to join our dynamic team. You will work closely with our development team and QAs to deliver cutting-edge solutions that improve our candidate screening and employee onboarding processes.

Major Responsibilities & Job Requirements:
• Minimum of 3-4 years of experience as an AI/ML Developer or similar role, with demonstrable expertise in computer vision techniques.
• Develop and implement NLP/LLM models.
• Develop and implement AI models using Python, TensorFlow, and PyTorch.
• Proven experience in computer vision, including fine-tuning OCR models (e.g., Tesseract, LayoutLMv3, EasyOCR, PaddleOCR, or custom-trained models).
• Strong understanding and hands-on experience with Retrieval-Augmented Generation (RAG) architectures and pipelines for building intelligent Q&A, document summarization, and search systems.
• Experience working with LangChain, LLM agents, and chaining tools to build modular and dynamic LLM workflows.
• Familiarity with agent-based frameworks and orchestration of multi-step reasoning with tools, APIs, and external data sources.
• Familiarity with cloud AI solutions such as IBM, Azure, Google & AWS.
• Work on natural language processing (NLP) tasks and create language models for various applications.
• Design and maintain SQL databases for storing and retrieving data efficiently.
• Utilize machine learning and deep learning techniques to build predictive models.
• Collaborate with cross-functional teams to integrate AI solutions into existing systems.
• Stay updated with the latest advancements in AI technologies, including ChatGPT, Gemini, Claude, and big data solutions.
• Write clean, maintainable, and efficient code when required.
• Handle large datasets and perform big data analysis to extract valuable insights.
• Fine-tune pre-trained LLMs on domain-specific data and ensure optimal performance.
• Proficiency in cloud services from Amazon AWS.
• Extract and parse text from CVs, application forms, and job descriptions using NLP techniques such as Word2Vec, BERT, and GPT-NER.
• Develop similarity functions and matching algorithms to align candidate skills with job requirements.
• Experience with microservices, Flask, FastAPI, and Node.js.
• Expertise in Spark and PySpark for big data processing.
• Knowledge of techniques such as SVD/PCA, LSTM, and NeuralProphet.
• Apply debiasing techniques to ensure fairness and accuracy in the ML pipeline.
• Experience coordinating with clients to understand their needs and deliver AI solutions that meet their requirements.

Qualifications:
• Bachelor's or Master's degree in Computer Science, Data Science, Artificial Intelligence, or a related field.
• In-depth knowledge of NLP techniques and libraries, including Word2Vec, BERT, GPT, and others.
• Experience with database technologies and vector representations of data.
• Familiarity with similarity functions and distance metrics used in matching algorithms.
• Ability to design and implement custom ontologies and classification models.
• Excellent problem-solving skills and attention to detail.
• Strong communication and collaboration skills.
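The skill-matching requirement above would normally sit on top of embeddings such as Word2Vec or BERT; as a minimal stand-in, the same idea can be sketched with a bag-of-words cosine similarity (a deliberate simplification, not the production approach):

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two term-frequency Counters."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def match_score(candidate_skills, job_requirements):
    """Score overlap between a candidate's skills and a job's requirements."""
    return cosine(Counter(candidate_skills), Counter(job_requirements))
```

Swapping the `Counter` vectors for dense embedding vectors turns this into the semantic matching the listing describes, without changing the similarity function.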
Posted 1 week ago
2.0 years
8 - 18 Lacs
delhi
On-site
Job description:
We’re looking for a hands-on Data Engineer to manage and scale our data scraping pipelines across 60+ websites. The job involves handling OCR-processed PDFs, ensuring data quality, and building robust, self-healing workflows that fuel AI-driven insights.

You’ll Work On:
- Managing and optimizing Airflow scraping DAGs
- Implementing validation checks, retry logic & error alerts
- Cleaning and normalizing OCR text (Tesseract / AWS Textract)
- Handling deduplication, formatting, and missing data
- Maintaining MySQL/PostgreSQL data integrity
- Collaborating with ML engineers on downstream pipelines

What You Bring:
- 2–5 years of hands-on experience in Python data engineering
- Experience with Airflow, Pandas, and OCR tools
- Solid SQL skills and schema design (MySQL/PostgreSQL)
- Comfort with CSVs and building ETL pipelines

Required:
1. Scrapy or Selenium experience
2. CAPTCHA handling
3. Experience with PyMuPDF and regex
4. AWS S3
5. LangChain, LLMs, FastAPI
6. Streamlit
7. Matplotlib

Job Type: Full-time, day shift
Pay: ₹70,000.00 - ₹150,000.00 per month
Work Location: In person

Application Question(s):
1. Total years of experience in web scraping / data extraction
2. Have you worked with large-scale data pipelines?
3. Are you proficient in writing complex regex patterns for data extraction and cleaning?
4. Have you implemented or managed data pipelines using tools like Apache Airflow?
5. Years of experience with PDF parsing and OCR tools (e.g., Tesseract, Google Document AI, AWS Textract)
6. Years of experience handling complex PDF tables with merged rows, rotated layouts, or inconsistent formatting
7. Are you willing to relocate to Delhi if selected?
8. Current CTC
9. Expected CTC
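The OCR cleaning and deduplication work described above can be sketched with stdlib regex alone; the specific artifacts handled here (form feeds, hyphenated line breaks, repeated lines) are illustrative assumptions, not this team's actual rules:

```python
import re

def clean_ocr_text(text):
    """Normalize common OCR artifacts and drop duplicate lines."""
    text = text.replace("\x0c", "\n")        # form feeds left by page breaks
    text = re.sub(r"-\n(\w)", r"\1", text)   # rejoin words hyphenated across lines
    text = re.sub(r"[ \t]+", " ", text)      # collapse runs of spaces/tabs
    lines = (line.strip() for line in text.splitlines())
    # dict.fromkeys keeps first occurrence and preserves order (cheap dedup)
    return "\n".join(dict.fromkeys(line for line in lines if line))
```

Real pipelines would apply these steps per page inside an Airflow task, so a single malformed document fails in isolation and can be retried.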
Posted 1 week ago
0 years
2 - 9 Lacs
hyderābād
On-site
Job description
Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.

HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Senior Consultant Specialist.

In this role, you will:
- Develop and implement ML algorithms.
- Create Python libraries and/or modify existing ones.
- Understand business objectives and develop models that help achieve them, along with metrics to track their progress.
- Analyse the algorithms that could be used to solve a given problem.
- Analyse script errors and design strategies to overcome them.
- Manage version control and the code repository.

Requirements
To be successful in this role, you should meet the following requirements:
- Hands-on experience with Python, Pandas, NumPy, SciPy, Keras, spaCy, NLTK
- Experience with open-source OCR libraries: Tesseract, OpenCV, pytesseract
- Knowledge of pre-trained OCR models and optimization of pre-trained models
- Hands-on experience with tools such as Visual Studio Code, PyCharm, Jupyter, or Anaconda
- Strong written & verbal communication skills
- Good judgement, sense of urgency and accountability; time management, organizational and problem-solving skills
- Ability to develop OOP solutions in Python and to create and modify HSBC's internal Python libraries

Desired Skills:
- Good knowledge of databases and SQL
- Good knowledge of DevOps, CI/CD, Docker and Kubernetes
- Cloud skills (GCP or AWS) are an added advantage

You’ll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued and respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by – HSBC Software Development India
Posted 1 week ago
3.0 - 8.0 years
5 - 9 Lacs
hyderabad
Work from Office
Job Title: Automation Engineer - AI/ML & Automation
Experience: 3+ Years
Notice Period: Immediate to 20 days, On-Site

Role Description:
We are looking for a highly skilled and communicative Automation Engineer based in Hyderabad with a strong background in AI/ML, OCR projects, and Python-based automation. The ideal candidate will have experience in building intelligent data solutions using machine learning techniques and automation tools such as Power Automate and Selenium (Python-based). Strong problem-solving skills and the ability to communicate effectively with technical and non-technical stakeholders are essential for this role.

Key Responsibilities
- Design and implement AI/ML models for document processing, classification, and prediction tasks using Python
- Develop and deploy OCR-based solutions using tools like Tesseract, Google Vision API, or similar
- Build and maintain automated workflows using Power Automate to streamline business processes
- Automate data collection and testing processes using Selenium with Python
- Collaborate with data scientists, business analysts, and product teams to define use cases and deliver ML-powered solutions
- Write clean, efficient, and reusable code and ensure best practices in automation and model deployment
- Communicate technical concepts clearly with internal teams and stakeholders

Required Skills & Qualifications
- 3+ years of experience in data engineering, machine learning, or automation roles
- Strong programming skills in Python, with experience in ML libraries such as scikit-learn, TensorFlow, or PyTorch
- Hands-on experience with OCR technologies (Tesseract, Azure OCR, Google Vision, or AWS Textract)
- Experience with Power Automate for workflow automation
- Proficiency in Selenium with Python for web automation and testing
- Strong verbal and written communication skills
- Bachelor's or Master's degree in Computer Science, Engineering, or related field

Good to Have
- Experience working with cloud platforms like Azure or AWS
- Exposure to RPA frameworks or low-code platforms
- Familiarity with APIs and integration of third-party AI tools
Posted 1 week ago
3.0 years
0 Lacs
mumbai metropolitan region
On-site
Job Description
Job Title: Automation Engineer - AI/ML & Automation
Location: Hyderabad, India
Experience: 3+ Years
Notice Period: Immediate to 20 days, On-Site

We are looking for a highly skilled and communicative Automation Engineer based in Hyderabad with a strong background in AI/ML, OCR projects, and Python-based automation. The ideal candidate will have experience in building intelligent data solutions using machine learning techniques and automation tools such as Power Automate and Selenium (Python-based). Strong problem-solving skills and the ability to communicate effectively with technical and non-technical stakeholders are essential for this role.

Key Responsibilities
- Design and implement AI/ML models for document processing, classification, and prediction tasks using Python.
- Develop and deploy OCR-based solutions using tools like Tesseract, Google Vision API, or similar.
- Build and maintain automated workflows using Power Automate to streamline business processes.
- Automate data collection and testing processes using Selenium with Python.
- Collaborate with data scientists, business analysts, and product teams to define use cases and deliver ML-powered solutions.
- Write clean, efficient, and reusable code and ensure best practices in automation and model deployment.
- Communicate technical concepts clearly with internal teams and stakeholders.

Required Skills & Qualifications
- 3+ years of experience in data engineering, machine learning, or automation roles.
- Strong programming skills in Python, with experience in ML libraries such as scikit-learn, TensorFlow, or PyTorch.
- Hands-on experience with OCR technologies (Tesseract, Azure OCR, Google Vision, or AWS Textract).
- Experience with Power Automate for workflow automation.
- Proficiency in Selenium with Python for web automation and testing.
- Strong verbal and written communication skills.
- Bachelor's or Master's degree in Computer Science, Engineering, or related field.

Good to Have
- Experience working with cloud platforms like Azure or AWS.
- Exposure to RPA frameworks or low-code platforms.
- Familiarity with APIs and integration of third-party AI tools.

Other Information
Role: Automation AI/ML Engineer
Industry Type: IT Services & Consulting
Functional Area:
Required Education: BE, B Tech, BCA
Employment Type: Full Time, Permanent
Key Skills: AI/ML, Python, Automation
Job Code: GO/JC/966/2025
Recruiter Name: Christopher
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
The Applications Development Senior Programmer Analyst position is at an intermediate level and involves participating in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. Your primary objective will be to contribute to applications systems analysis and programming activities. Your responsibilities will include conducting tasks related to feasibility studies, time and cost estimates, IT planning, risk technology, applications development, model development, and establishing and implementing new or revised applications systems and programs to meet specific business needs or user areas. You will be responsible for monitoring and controlling all phases of the development process, including analysis, design, construction, testing, and implementation. Additionally, you will provide user and operational support on applications to business users. You are expected to utilize your in-depth specialty knowledge of applications development to analyze complex problems/issues, evaluate business and system processes, and industry standards, and make evaluative judgments. It will be your responsibility to recommend and develop security measures in post-implementation analysis of business usage to ensure successful system design and functionality. You will also consult with users/clients and other technology groups on issues, recommend advanced programming solutions, and install and assist customer exposure systems. As an Applications Development Senior Programmer Analyst, you will ensure that essential procedures are followed, help define operating standards and processes, and serve as an advisor or coach to new or lower-level analysts. You will have the ability to operate with a limited level of direct supervision, exercise independence of judgment and autonomy, and act as a subject matter expert to senior stakeholders and/or other team members. 
In making business decisions, you will appropriately assess risk and demonstrate particular consideration for the firm's reputation while safeguarding Citigroup, its clients, and assets. This includes driving compliance with applicable laws, rules, and regulations, adhering to policies, applying sound ethical judgment regarding personal behavior, conduct, and business practices, and escalating, managing, and reporting control issues with transparency.

Qualifications for this role include 5-8 years of relevant experience; expertise in systems analysis and programming of software applications written in C#, ASP.NET, Angular, and SQL Server; experience with multiple OCR technologies such as Tesseract, Python, Kofax, and spaCy; and experience in managing and implementing successful projects. You should also have a working knowledge of consulting/project management techniques and methods and the ability to work under pressure, manage deadlines, and handle unexpected changes in expectations or requirements.

Education required for this position is a Bachelor's degree/University degree or equivalent experience.

Please note that this job description provides a high-level review of the types of work performed; other job-related duties may be assigned as required.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
chennai, tamil nadu
On-site
We are seeking a Document Extraction and Inference Engineer with proficiency in traditional machine learning algorithms and rule-based NLP techniques. The ideal candidate has a solid background in document processing, structured data extraction, and inference modeling through classical ML methods. Your primary responsibility will be to design, implement, and enhance document extraction pipelines for diverse applications, ensuring both accuracy and efficiency.

Key responsibilities:
- Develop and execute document parsing and structured data extraction techniques
- Leverage OCR and pattern-based NLP for text extraction
- Refine rule-based and statistical models for document classification and entity recognition
- Create feature engineering strategies to enhance inference accuracy
- Handle structured and semi-structured data such as PDFs, scanned documents, XML, and JSON
- Implement knowledge-based inference models for decision-making
- Collaborate with data engineers to construct scalable document processing pipelines
- Perform error analysis and enhance extraction accuracy through iterative refinement
- Stay abreast of the latest advancements in traditional NLP and document processing techniques

Qualifications:
- Bachelor's or Master's degree in Computer Science, AI, Machine Learning, or a related field
- Minimum of 3 years of experience in document extraction and inference modeling
- Proficiency in Python and libraries such as Scikit-learn, NLTK, OpenCV, and Tesseract
- Expertise in OCR technologies, regular expressions, and rule-based NLP
- Experience with SQL and database management for handling extracted data
- Knowledge of probabilistic models, optimization techniques, and statistical inference
- Familiarity with cloud-based document processing tools such as AWS Textract and Azure Form Recognizer
- Strong analytical and problem-solving skills

Preferred qualifications:
- Experience in graph-based document analysis and knowledge graphs
- Knowledge of time series analysis for document-based forecasting
- Exposure to reinforcement learning for adaptive document processing
- Understanding of the credit/loan processing domain

This position is based in Chennai, India.
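The pattern-based extraction this listing describes usually starts from a table of compiled regexes applied per field. The patterns below (date, amount, PAN) are hypothetical examples for illustration, not the team's actual rules:

```python
import re

# Hypothetical field patterns for semi-structured financial documents.
PATTERNS = {
    "date": re.compile(r"\b\d{2}[/-]\d{2}[/-]\d{4}\b"),
    "amount": re.compile(r"(?:Rs\.?|INR)\s*([\d,]+(?:\.\d{2})?)"),
    "pan": re.compile(r"\b[A-Z]{5}\d{4}[A-Z]\b"),
}

def extract_entities(text):
    """Run every rule over the text and collect all matches per field."""
    return {name: pattern.findall(text) for name, pattern in PATTERNS.items()}
```

Error analysis then becomes concrete: each mis-extraction maps to one named pattern, which can be refined in isolation and re-run over a labeled sample set.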
Posted 2 weeks ago
0 years
0 Lacs
ahmedabad, gujarat, india
On-site
We’re Hiring: Python Intern 📍 Location: Ahmedabad, Gujarat 💰 Stipend: ₹3,000 / month ⏳ Duration: 6 Months At Tesseract Technolabs , we are passionate about solving real-world problems with AI, Data Science, and Process Automation . We are looking for a Python Intern who is eager to learn, build, and grow with us. 🔹 What You’ll Work On Developing and maintaining applications using Python & Django Exploring Odoo framework for enterprise solutions Working with AI & Data Science libraries (Pandas, NumPy, Scikit-learn, TensorFlow, etc.) Contributing to live client projects and internal automation tools 🔹 What We’re Looking For Strong foundation in Python programming Basic knowledge of web frameworks (Django/Flask) Familiarity with databases (MySQL/PostgreSQL) Curiosity to learn AI/ML concepts and apply them to real-world use cases Good problem-solving skills and willingness to work in a collaborative environment 🔹 What You’ll Gain Hands-on experience with cutting-edge AI & ML projects Exposure to real client projects and enterprise systems Mentorship from industry professionals A chance to kickstart your career in Python development & AI 📩 Interested candidates can apply by sending their resume to hr@tesseracttechnolabs.com or DM us here on LinkedIn. Join us at Tesseract Technolabs and be part of our mission to build intelligent, automated, and scalable solutions! 🚀 #Python #Django #Odoo #AI #DataScience #Internship #Hiring #Ahmedabad
Posted 2 weeks ago
4.0 years
0 Lacs
india
Remote
Senior RPA Developer – UiPath
📍 Location: Bangalore - Remote

About the Role
We are seeking an experienced Senior RPA Developer to design and implement enterprise-grade automation solutions using UiPath. The ideal candidate will have strong expertise in RPA frameworks, intelligent document processing, and database integrations, with the ability to deliver scalable and robust solutions.

Key Responsibilities
- Design, develop, test, and deploy RPA solutions using UiPath Studio, Orchestrator, and REFramework.
- Implement UiPath Document Understanding with OCR-based extraction for structured, semi-structured, and unstructured documents.
- Build and integrate machine learning models via UiPath AI Center into automation workflows.
- Develop and optimize SQL queries, stored procedures, and database-driven automations.
- Configure and manage Orchestrator assets, queues, triggers, and robots.
- Create reusable components and templates for scalable automation.
- Implement error handling, logging, and exception management strategies.
- Collaborate with business analysts and stakeholders to gather requirements and translate them into technical designs.
- Provide post-deployment support, performance monitoring, and workflow enhancements.

Required Skills & Qualifications
- Bachelor’s degree in Computer Science, IT, Engineering, or related field (Master’s degree preferred).
- 4+ years of software development experience with a focus on RPA, automation, or scripting.
- Strong expertise in UiPath Studio, REFramework, and Orchestrator.
- Proficient in UiPath Document Understanding (Document Manager, Taxonomy Manager, Data Extraction, Validation Station).
- Experience with OCR tools (OmniPage, Tesseract, Google Vision OCR, etc.).
- Solid knowledge of UiPath AI Center (data labeling, model training, ML skill deployment).
- Proficiency in SQL: complex queries, joins, aggregations, schema design.
- Strong understanding of API integrations (REST/SOAP, OAuth, API keys).
- Familiarity with Git and best practices in scalable RPA design.
- Ability to prepare and maintain technical documentation (SDDs, PDDs, user manuals).
- Strong problem-solving, debugging, and analytical skills.
- Excellent communication and collaboration skills, with Agile/Scrum experience.

Why Join Us?
- Opportunity to work on cutting-edge automation projects.
- Collaborative and growth-oriented work culture.
- Exposure to enterprise clients and advanced RPA solutions.

👉 If you are passionate about automation and want to drive digital transformation, apply now!
Posted 2 weeks ago
0.0 - 31.0 years
3 - 9 Lacs
work from home
Remote
Required Skill Sets & Qualifications. 1. Technical Skills (The Builder's Toolkit)Core Programming: Expert proficiency in Python (essential for AI/ML libraries). AI/ML & NLP: Strong hands-on experience with: Large Language Models (LLMs): Practical experience in working with APIs of OpenAI (GPT-4), Google Gemini, Anthropic Claude, or open-source models (LLaMA 2, Mistral). Prompt engineering is a key skill. Frameworks: LangChain, LlamaIndex for building sophisticated agentic workflows. Natural Language Processing (NLP): Libraries like spaCy, NLTK, Hugging Face Transformers. Optical Character Recognition (OCR): Experience with tools like Adobe Extract API, Google Document AI, Amazon Textract, or open-source options (Tesseract) for Indian documents. API Integration: Mastery in connecting various systems via RESTful APIs and webhooks (e.g., connecting a chatbot to a CRM and a document database). Low-Code/No-Code Platforms: Experience with leveraging platforms like Zapier, Make.com, n8n, or Microsoft Power Automate to quickly prototype and connect different SaaS tools is a huge plus. Cloud & DevOps: Experience with cloud platforms (AWS, Google Cloud, Azure) and knowledge of deploying and maintaining AI models (e.g., using AWS SageMaker, Google Vertex AI). Data Security: Understanding of encryption, secure API protocols, and data anonymization techniques crucial for handling sensitive financial data. 3. Soft Skills & Mindset (The Architect)Systems Thinking: Ability to see the entire customer and operational journey and build interconnected agents, not isolated bots. Problem-Scoping & Solutioning: Can break down a complex business problem (e.g., "analyze documents") into a technical workflow (e.g., "trigger -> OCR -> data extraction -> validation -> CRM update"). Agility & Learning: The AI field moves fast. A constant desire to learn and experiment with new tools and models is critical. 
Communication: Must be able to explain complex AI concepts to non-technical stakeholders (management, loan consultants). Project Management: Ability to manage this large-scale integration project, prioritize tasks, and deliver functional modules. How to Apply: Interested candidates should submit their resume along with a cover letter or portfolio link that must include: Examples of previous AI automation projects you have built. A brief paragraph on how you would approach integrating any two of the systems mentioned above (e.g., connecting a Document Analysis system to a CRM). Any experience specific to the Indian financial sector.
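The "trigger -> OCR -> data extraction -> validation -> CRM update" workflow described in the listing above can be sketched as a chain of small functions. This is a minimal illustration, not the employer's implementation: every step here is a hypothetical stub (the OCR call just decodes bytes, and the CRM is a plain dict standing in for a real API).

```python
# Sketch of a "trigger -> OCR -> extraction -> validation -> CRM update" chain.
# All step functions are illustrative stubs; a real pipeline would call an
# OCR engine (e.g. Tesseract) and a CRM API at the marked points.

def ocr_step(document: bytes) -> str:
    # Stand-in for a real OCR call; here we simply decode the bytes.
    return document.decode("utf-8")

def extract_fields(text: str) -> dict:
    # Toy extraction: parse "key: value" lines into a dict.
    fields = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip().lower()] = value.strip()
    return fields

def validate(fields: dict) -> dict:
    # Reject records missing a mandatory field (PAN is an assumed example).
    if "pan" not in fields:
        raise ValueError("missing PAN field")
    return fields

def update_crm(fields: dict, crm: dict) -> dict:
    # Stand-in for a CRM API call: store the record keyed by PAN.
    crm[fields["pan"]] = fields
    return crm

def run_pipeline(document: bytes, crm: dict) -> dict:
    # Compose the steps in order; any failure stops the chain.
    return update_crm(validate(extract_fields(ocr_step(document))), crm)
```

Composing the steps as plain functions keeps each stage independently testable, which is the property the "problem-scoping" bullet is really asking for.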
Posted 2 weeks ago
3.0 years
0 Lacs
india
On-site
Note: Please do not apply if your salary expectations are higher than the provided salary range or your experience is less than 3 years. If you have experience in the travel industry and have worked on hotel, car rental, or ferry bookings before, the package is negotiable. Company Description: Our company has been promoting Greece for the last 25 years through travel sites visited from all around the world, with 10 million visitors per year, such as www.greeka.com, www.ferriesingreece.com, etc. Through the websites, we provide a range of travel services for a seamless holiday experience, such as online car rental reservations, ferry tickets, transfers, tours, etc. Role Description: We are seeking a highly skilled Artificial Intelligence / Machine Learning Engineer to join our dynamic team. You will work closely with our development team and QAs to deliver cutting-edge solutions that improve our candidate screening and employee onboarding processes. Major Responsibilities & Job Requirements include: • Develop and implement NLP/LLM models. • Minimum of 3-4 years of experience as an AI/ML developer or similar role, with demonstrable expertise in computer vision techniques. • Develop and implement AI models using Python, TensorFlow, and PyTorch. • Proven experience in computer vision, including fine-tuning OCR models (e.g., Tesseract, LayoutLMv3, EasyOCR, PaddleOCR, or custom-trained models). • Strong understanding and hands-on experience with RAG (Retrieval-Augmented Generation) architectures and pipelines for building intelligent Q&A, document summarization, and search systems. • Experience working with LangChain, LLM agents, and chaining tools to build modular and dynamic LLM workflows. • Familiarity with agent-based frameworks and orchestration of multi-step reasoning with tools, APIs, and external data sources. • Familiarity with cloud AI solutions such as IBM, Azure, Google & AWS. 
• Work on natural language processing (NLP) tasks and create language model (LLM) solutions for various applications. • Design and maintain SQL databases for storing and retrieving data efficiently. • Utilize machine learning and deep learning techniques to build predictive models. • Collaborate with cross-functional teams to integrate AI solutions into existing systems. • Stay updated with the latest advancements in AI technologies, including ChatGPT, Gemini, Claude, and Big Data solutions. • Write clean, maintainable, and efficient code when required. • Handle large datasets and perform big data analysis to extract valuable insights. • Fine-tune pre-trained LLMs using specific types of data and ensure optimal performance. • Proficiency in cloud services from Amazon AWS. • Extract and parse text from CVs, application forms, and job descriptions using advanced NLP techniques such as Word2Vec, BERT, and GPT-NER. • Develop similarity functions and matching algorithms to align candidate skills with job requirements. • Experience with microservices, Flask, FastAPI, Node.js. • Expertise in Spark and PySpark for big data processing. • Knowledge of advanced techniques such as SVD/PCA, LSTM, NeuralProphet. • Apply debiasing techniques to ensure fairness and accuracy in the ML pipeline. • Experience in coordinating with clients to understand their needs and delivering AI solutions that meet their requirements. Qualifications: • Bachelor's or Master's degree in Computer Science, Data Science, Artificial Intelligence, or a related field. • In-depth knowledge of NLP techniques and libraries, including Word2Vec, BERT, GPT, and others. • Experience with database technologies and vector representation of data. • Familiarity with similarity functions and distance metrics used in matching algorithms. • Ability to design and implement custom ontologies and classification models. • Excellent problem-solving skills and attention to detail. • Strong communication and collaboration skills.
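The "similarity functions and matching algorithms to align candidate skills with job requirements" bullet above can be illustrated with a tiny cosine-similarity sketch. This is an assumption-laden toy: a production matcher would compare dense embeddings (Word2Vec/BERT), while this version uses simple bag-of-words counts so it runs with no dependencies.

```python
# Toy skill-matching sketch: cosine similarity over bag-of-words vectors.
# Real systems would replace `vectorize` with an embedding model.
import math
from collections import Counter

def cosine_similarity(a, b):
    # Dot product over shared terms, divided by the vector norms.
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    norm = norm_a * norm_b
    return dot / norm if norm else 0.0

def vectorize(text):
    # Naive tokenization: lowercase whitespace split.
    return Counter(text.lower().split())

def skill_match(candidate_skills, job_requirements):
    # Score in [0, 1]; 1.0 means identical token distributions.
    return cosine_similarity(vectorize(candidate_skills), vectorize(job_requirements))
```

Swapping `vectorize` for an embedding lookup leaves the rest of the scoring code unchanged, which is why cosine similarity is the usual starting point for such matchers.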
Posted 2 weeks ago
4.0 years
0 Lacs
navi mumbai, maharashtra, india
On-site
About IRIS IRIS Business Services Limited (IRIS) is a leading regtech SaaS provider listed on both the BSE and NSE. Established in 2000, IRIS empowers over 30 regulators and 6,000 enterprises across 54+ countries, positively impacting more than 2 billion lives. Our innovative solutions transform regulatory compliance into a competitive business advantage. Headquartered in Mumbai, IRIS operates subsidiaries in the USA, Singapore, Malaysia, and Italy, with an affiliate in the UAE. IRIS is also a proud member of XBRL jurisdictions worldwide, including XBRL International, India, Europe, South Africa, and the USA. In India, IRIS is an authorized GST Suvidha Provider and a Private Invoice Registration Portal. Our commitment to digital innovation has earned us numerous accolades. To read more about IRIS, visit our website: www.irisbusiness.com Key Responsibilities Develop and deploy AI/ML models for document understanding, including: Text extraction, section classification, and table detection from PDFs Semantic similarity and concept mapping to XBRL taxonomies Build and fine-tune NLP models (transformers, embeddings, entity recognition) for financial texts Collaborate with product and data engineering teams to integrate AI pipelines with ETL and tagging engines Implement human-in-the-loop workflows for model feedback and active learning Optimize model performance across diverse document formats and industries (MFRS, IFRS, GRI, etc.) 
Track model metrics, validate outputs, and handle retraining and continuous learning cycles Maintain clean, modular, and reusable code using MLOps best practices (versioning, reproducibility, CI/CD) Required Skills & Qualifications 1–4 years of hands-on experience in AI/ML model development and deployment Strong programming skills in Python, with experience in libraries like scikit-learn, PyTorch, TensorFlow, Hugging Face Transformers, OpenAI Proven experience with NLP tasks: classification, NER, information extraction, embeddings Experience with PDF and document parsing tools: PDFMiner, LayoutLM, Camelot, Tesseract, etc. Experience with cloud platforms, particularly Azure (Azure Machine Learning, Azure AI Foundry, Azure Form Recognizer, Azure Cognitive Services) and optionally AWS/GCP Experience with model serving and deployment in production environments (e.g. Docker, Azure Kubernetes Service) Working knowledge of MLOps frameworks (MLflow, DVC, Airflow, etc.) Exposure to XBRL, XML, or structured financial reporting formats is a strong advantage Strong problem-solving and analytical skills with attention to detail Exposure to Large Language Models (LLMs) & Retrieval-Augmented Generation (RAG) Experience using openpyxl in Python Familiarity with Office JS integration Good to Have Familiarity with financial statement structures and accounting terminology Experience working in regulatory technology, compliance platforms, or SupTech Understanding of XBRL taxonomies (IFRS, MFRS, GRI, SEC) Educational Qualifications Bachelor's or Master's degree in Computer Science / Engineering, Mathematics, Statistics, or related quantitative fields Awards won by IRIS: Won recognition as India's best fintech at the Financial Express Best Banks Awards, an award that was presented to our CEO by Smt. Nirmala Sitharaman, Finance Minister, Govt. of India. 
IRIS has been selected as the Best Tax Technology Service Provider 2022 in the National Taxation Awards category at the prestigious TIOL Awards. IRIS CARBON has won The Most Agile/Responsive SaaS Solution of the Year award at the 2022 SaaS Awards by Awarding and Consultancy International. At IRIS CARBON, we are committed to creating a diverse and inclusive environment. We are an equal opportunity employer and welcome applicants from all backgrounds.
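The "concept mapping to XBRL taxonomies" responsibility in the listing above can be sketched with a fuzzy-label lookup. This is purely illustrative: the three-element taxonomy is a made-up stand-in, and real mapping over full IFRS/MFRS taxonomies would use embeddings rather than string similarity.

```python
# Sketch of mapping an extracted financial line item to an XBRL concept by
# fuzzy label matching. The tiny TAXONOMY dict is a hypothetical stand-in.
import difflib

TAXONOMY = {
    "Revenue": "ifrs-full:Revenue",
    "Cost of sales": "ifrs-full:CostOfSales",
    "Gross profit": "ifrs-full:GrossProfit",
}

def map_to_taxonomy(line_item, cutoff=0.6):
    # Return the concept name of the closest taxonomy label, or None if no
    # label is similar enough (below `cutoff`).
    labels = list(TAXONOMY)
    match = difflib.get_close_matches(line_item, labels, n=1, cutoff=cutoff)
    return TAXONOMY[match[0]] if match else None
```

Returning `None` below the cutoff is what makes the human-in-the-loop workflow mentioned above possible: unmapped items get routed to a reviewer instead of being silently tagged.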
Posted 2 weeks ago
3.0 years
0 Lacs
gurugram, haryana, india
On-site
Roles and Responsibilities Build and maintain scalable, fault-tolerant data pipelines to support GenAI and analytics workloads across OCR, documents, and case data. Manage ingestion and transformation of semi-structured legal documents (PDF, Word, Excel) into structured formats. Enable RAG workflows by processing data into chunked, vectorized formats with metadata. Handle large-scale ingestion from multiple sources into cloud-native data lakes (S3, GCS), data warehouses (BigQuery, Snowflake), and PostgreSQL. Automate pipelines using orchestration tools like Airflow/Prefect, including retry logic, alerting, and metadata tracking. Collaborate with ML Engineers to ensure data availability, traceability, and performance for inference and training pipelines. Implement data validation and testing frameworks using Great Expectations or dbt. Integrate OCR pipelines and post-processing outputs for embedding and document search. Design infrastructure for streaming vs batch data needs and optimize for cost, latency, and reliability. Qualifications Bachelor’s or Master’s degree in Computer Science, Data Engineering, or equivalent. 3+ years of experience in building distributed data pipelines and managing multi-source ingestion. Proficiency with Python, SQL, and data tools like Pandas, PySpark. Experience working with data orchestration tools (Airflow, Prefect), and file formats like Parquet, Avro, JSON. Hands-on experience with cloud storage/data warehouse systems (S3, GCS, BigQuery, Redshift). Understanding of GenAI and vector database ingestion pipelines is a strong plus. Bonus: Experience with OCR tools (Tesseract, Google Document AI), PDF parsing libraries (PyMuPDF), and API-based document processors.
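The "chunked, vectorized formats with metadata" step above can be sketched with a simple overlapping-window chunker. The chunk sizes and the whitespace "tokenizer" are assumptions for illustration; production pipelines would count tokens with the embedding model's own tokenizer before vectorizing each chunk.

```python
# Sketch of chunking a document into overlapping, metadata-tagged pieces
# for RAG ingestion. Whitespace splitting stands in for a real tokenizer.

def chunk_tokens(tokens, size=400, overlap=50):
    # Yield windows of `size` tokens, each starting `size - overlap` tokens
    # after the previous one, so consecutive chunks share `overlap` tokens.
    step = size - overlap  # assumes size > overlap
    for start in range(0, max(len(tokens) - overlap, 1), step):
        yield tokens[start:start + size]

def chunk_document(text, doc_id, size=400, overlap=50):
    # Attach doc-level metadata to each chunk so retrieved passages can be
    # traced back to their source document.
    tokens = text.split()
    return [
        {"doc_id": doc_id, "chunk_id": i, "text": " ".join(window)}
        for i, window in enumerate(chunk_tokens(tokens, size, overlap))
    ]
```

The overlap exists so that a sentence falling on a chunk boundary still appears whole in at least one chunk, which keeps retrieval recall from degrading at the seams.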
Posted 2 weeks ago
5.0 years
0 Lacs
pune, maharashtra, india
Remote
Welcome to Veradigm! Our Mission is to be the most trusted provider of innovative solutions that empower all stakeholders across the healthcare continuum to deliver world-class outcomes. Our Vision is a Connected Community of Health that spans continents and borders. With the largest community of clients in healthcare, Veradigm is able to deliver an integrated platform of clinical, financial, connectivity and information solutions to facilitate enhanced collaboration and exchange of critical patient information. Veradigm Veradigm is here to transform health, insightfully. Veradigm delivers a unique combination of point-of-care clinical and financial solutions, a commitment to open interoperability, a large and diverse healthcare provider footprint, along with industry proven expert insights. We are dedicated to simplifying the complicated healthcare system with next-generation technology and solutions, transforming healthcare from the point-of-patient care to everyday life. For more information, please explore www.veradigm.com. Job Summary: We are looking for a talented and motivated Software Engineer with proven expertise in Robotic Process Automation (RPA) using Automation Anywhere and UiPath. This role focuses on developing secure, scalable, and efficient automation solutions to transform business processes and enable digital excellence. You will work as part of a cross-functional Agile team to design and implement automation workflows, collaborate with stakeholders, and integrate bots with APIs, databases, and enterprise systems. The role also involves applying software engineering best practices including code testing, documentation, and DevOps alignment. What will your job look like: Key Responsibilities Design, develop, test, and deploy RPA solutions using UiPath and Automation Anywhere. Build modular, reusable automation components with maintainable and scalable logic. 
Translate business requirements into well-structured, high-performance automation workflows. Write efficient, testable code in C#, VB.NET, or Python as needed for bot scripting and integration. Develop SQL Server queries, stored procedures, and handle structured/unstructured data (JSON, XML, CSV). Integrate bots with REST APIs, web services, and enterprise applications (ERP/CRM). Conduct unit testing, support UAT, and troubleshoot production issues with root cause analysis. Monitor bots post-deployment via UiPath Orchestrator or AA Control Room, ensuring SLA adherence. Create technical documentation including solution design, bot configuration, and deployment steps. Collaborate with Business Analysts, QA engineers, and other developers to ensure high-quality delivery. Work in Agile teams using Jira, participate in sprint planning, and conduct peer code reviews. An Ideal Candidate will have: Required Skills & Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related field. 2–5 years of RPA development experience with strong hands-on knowledge of UiPath and Automation Anywhere. Experience with RPA components like Orchestrator, Control Room, queue management, and credential vaults. Proficiency in C#/.NET, VB.NET, or Python for custom scripting. Familiarity with web technologies: ASP.NET, WebAPI, HTML, JavaScript, CSS, and jQuery. Hands-on experience with SQL Server and handling API integrations. Understanding of software development lifecycle (SDLC), Agile/Scrum methodology, and DevOps tools. Strong debugging, performance tuning, and problem-solving skills. 
Preferred Skills (Good to Have): UiPath Advanced Developer / Automation Anywhere Advanced Certified RPA Professional Experience with OCR technologies (e.g., ABBYY, Tesseract, Google Vision) or Document Understanding Familiarity with cloud platforms (Azure, AWS) and version control systems (e.g., Git) Exposure to Intelligent Automation, AI/ML, or NLP use cases Experience in regulated industries like healthcare, banking, or insurance Understanding of architecture patterns (retry logic, caching, queue-based processing, etc.) Benefits Veradigm believes in empowering our associates with the tools and flexibility to bring the best version of themselves to work. Through our generous benefits package with an emphasis on work/life balance, we give our employees the opportunity to allow their careers to flourish. Quarterly Company-Wide Recharge Days Flexible Work Environment (Remote/Hybrid Options) Peer-based incentive "Cheer" awards "All in to Win" bonus Program Tuition Reimbursement Program To know more about the benefits and culture at Veradigm, please visit the links mentioned below: - https://veradigm.com/about-veradigm/careers/benefits/ https://veradigm.com/about-veradigm/careers/culture/ We are an Equal Opportunity Employer. No job applicant or employee shall receive less favorable treatment or be disadvantaged because of their gender, marital or family status, color, race, ethnic origin, religion, disability or age; nor be subject to less favorable treatment or be disadvantaged on any other basis prohibited by applicable law. Veradigm is proud to be an equal opportunity workplace dedicated to pursuing and hiring a diverse and inclusive workforce. Thank you for reviewing this opportunity! Does this look like a great match for your skill set? If so, please scroll down and tell us more about yourself!
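Among the architecture patterns the listing above names, retry logic is the simplest to sketch. This is a generic illustration in Python (one of the scripting languages the role accepts), not Veradigm's implementation; the bounded-attempts-then-raise shape mirrors what bot frameworks do when a step hits a transient failure.

```python
# Sketch of the retry-logic pattern: re-run a flaky step a bounded number
# of times, then surface the last error if all attempts fail.
import time

def with_retries(fn, attempts=3, delay=0.0):
    def wrapper(*args, **kwargs):
        last_exc = None
        for _ in range(attempts):
            try:
                return fn(*args, **kwargs)
            except Exception as exc:
                last_exc = exc
                time.sleep(delay)  # back off before the next attempt
        raise last_exc  # all attempts exhausted
    return wrapper
```

In practice the delay would grow between attempts (exponential backoff) and only transient error types would be retried; both refinements slot into the same wrapper.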
Posted 2 weeks ago
4.0 - 8.0 years
0 Lacs
chennai, tamil nadu
On-site
You are a skilled Python Tech Lead with a strong interest in artificial intelligence and machine learning. You will be responsible for leading a team in developing innovative solutions using the Python programming language, machine learning algorithms, and deep learning frameworks. Your primary focus will be on understanding project requirements, recommending suitable approaches and technologies, and improving the efficiency of models through iterative development processes. Your key skills should include proficiency in Python, Python Flask, NLP for content extraction, Pandas, OCR, Deep Learning, OpenCV, Tesseract, Computer Vision for image processing, TensorFlow, PyTorch, Keras, Scikit-learn, Paddle, ML algorithms, Neural Networks (CNN), Git, CI/CD, and cloud experience. You must have a minimum of 4+ years of hands-on experience in Python and ML, with a strong background in machine learning algorithms and frameworks. As the Python/ML Tech Lead, you will be expected to possess excellent problem-solving and analytical skills, along with the ability to translate project requirements into actionable development tasks. Effective communication with stakeholders to provide technical insights and updates on project progress is essential. You should also have the capability to mentor and lead a team of developers, ensuring adherence to best practices and coding standards. Nice to have skills include experience with deep learning frameworks like TensorFlow or PyTorch, familiarity with NLP techniques, and previous work on computer vision projects. Demonstrated success in computer vision and text extraction projects would be an added advantage. If you meet the above requirements and are looking for a challenging opportunity in Chennai, India, then we are looking for you to join our team. Immediate joiners are preferred for this position.
Posted 2 weeks ago
2.0 years
8 - 18 Lacs
delhi
On-site
Job description: We’re looking for a hands-on Data Engineer to manage and scale our data scraping pipelines across 60+ websites. The job involves handling OCR-processed PDFs, ensuring data quality, and building robust, self-healing workflows that fuel AI-driven insights. You’ll Work On: Managing and optimizing Airflow scraping DAGs Implementing validation checks, retry logic & error alerts Cleaning and normalizing OCR text (Tesseract / AWS Textract) Handling deduplication, formatting, and missing data Maintaining MySQL/PostgreSQL data integrity Collaborating with ML engineers on downstream pipelines What You Bring: 2–5 years of hands-on experience in Python data engineering Experience with Airflow, Pandas, and OCR tools Solid SQL skills and schema design (MySQL/PostgreSQL) Comfort with CSVs and building ETL pipelines Required: 1. Scrapy or Selenium experience 2. CAPTCHA handling 3. Experience in PyMuPDF, Regex 4. AWS S3 5. LangChain, LLMs, FastAPI 6. Streamlit 7. Matplotlib Job Type: Full-time Day shift Pay: ₹70,000.00 - ₹150,000.00 per month Application Question(s): Total years of experience in web scraping / data extraction Have you worked with large-scale data pipelines? Are you proficient in writing complex Regex patterns for data extraction and cleaning? Have you implemented or managed data pipelines using tools like Apache Airflow? Years of experience with PDF parsing and using OCR tools (e.g., Tesseract, Google Document AI, AWS Textract, etc.) Years of experience handling complex PDF tables with merged rows, rotated layouts, or inconsistent formatting Are you willing to relocate to Delhi if selected? Current CTC Expected CTC Work Location: In person
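The "cleaning and normalizing OCR text" plus "handling deduplication" bullets above can be sketched with a couple of regex passes. The specific clean-up rules here (strip non-printable debris, collapse whitespace, drop blank-line runs) are illustrative assumptions; real OCR post-processing is tuned to the engine and document set.

```python
# Sketch of OCR text normalization and record deduplication. The clean-up
# rules are assumptions for illustration, not a fixed production recipe.
import re

def clean_ocr_text(text):
    text = re.sub(r"[^\x20-\x7E\n]", "", text)  # strip non-printable OCR debris
    text = re.sub(r"[ \t]+", " ", text)         # collapse runs of spaces/tabs
    text = re.sub(r"\n{3,}", "\n\n", text)      # collapse long blank-line runs
    return text.strip()

def deduplicate(records):
    # Keep the first occurrence of each record, comparing cleaned forms so
    # that whitespace-only differences don't create false "new" records.
    seen, out = set(), []
    for rec in records:
        key = clean_ocr_text(rec)
        if key not in seen:
            seen.add(key)
            out.append(key)
    return out
```

Deduplicating on the *cleaned* form is the important detail: two scrapes of the same page often differ only in OCR noise, so raw-string comparison would miss most duplicates.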
Posted 2 weeks ago
7.0 years
0 Lacs
pune, maharashtra, india
Remote
🚀 We’re Hiring: RPA Developers – Automation Anywhere (15–20 Positions | Remote | 4–7+ Years) Are you an experienced RPA Developer or a rising automation enthusiast looking to build innovative bots that transform business processes? Join our Automation Center of Excellence where we are hiring across multiple levels – from junior to senior RPA professionals – to work on impactful automation projects using Automation Anywhere (A360/v11) and modern tech stacks. 📌 Role: RPA Developer – Automation Anywhere or A360 📍 Location: Fully Remote (Pan India) 💼 Experience: 4 to 7+ Years 👥 Openings: 15–20 Positions ⏳ Notice Period: Immediate to 15 Days Max 🔧 Must-Have Skills: ✔ 3+ years of hands-on experience in RPA development ✔ Strong command of Automation Anywhere (v11/A360) ✔ Exposure to IQ Bot , MetaBot, Bot Insight ✔ Integration experience using APIs / web services ✔ Scripting knowledge in .NET, Java, Python, or VBScript ✔ Good understanding of bot lifecycle, exception handling ✔ Familiarity with OCR tools like ABBYY, Tesseract, IQ Bot ✔ Bonus: Experience in UiPath, Blue Prism, Power Automate ✔ SDLC & Agile working methodology experience 🌟 Why Join Us? ✅ Remote work flexibility ✅ Be part of a high-impact Automation CoE ✅ Multiple growth opportunities from mid-level to lead ✅ Competitive salary & project exposure with top clients 📩 Apply Now: Share your CV at zenab@maxohire.com or you can drop your CV on WhatsApp - 8109595034
Posted 2 weeks ago
4.0 - 8.0 years
0 Lacs
chennai, tamil nadu
On-site
You are a skilled Python Tech Lead with over 5 years of experience, possessing a strong interest in artificial intelligence and machine learning. Your expertise lies in the Python programming language, Python Flask, NLP (content extraction), Pandas, OCR, Deep Learning, OpenCV, Tesseract, Computer Vision (image processing), TensorFlow, PyTorch, Keras, Scikit-learn, Paddle, ML algorithms, Neural Networks (CNN), Git, CI/CD, and cloud experience (any). In this role, you will be responsible for understanding the requirements and objectives of assigned projects and recommending suitable approaches and technologies to deploy in order to achieve the set objectives. You will iterate the development process to enhance the effectiveness of the models and implement parallel processing and other techniques to improve processing speed. Your deep understanding of data extraction with computer vision and deep learning libraries will be crucial for the projects. You should possess excellent problem-solving and analytical skills, along with the ability to translate project requirements and objectives into actionable development tasks. Effective communication with stakeholders for technical insights and project updates is essential. Moreover, you will lead and mentor a team of developers, ensuring adherence to best practices and coding standards. Your strong project management skills, including task prioritization, resource planning, and risk management, will play a vital role in the successful execution of projects. Nice to have skills include experience with deep learning frameworks such as TensorFlow or PyTorch, familiarity with natural language processing (NLP) techniques, and previous work on computer vision projects. Demonstrated experience and success in computer vision and text extraction projects would be an added advantage for this role.
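The "implement parallel processing ... to improve processing speed" responsibility above can be sketched with the standard library's executor API. This is a generic illustration: `process_page` is a hypothetical stand-in for an OCR or vision step, and a thread pool is used to keep the sketch simple (OpenCV and Tesseract release the GIL during heavy calls, so threads can help; pure-Python CPU-bound work would want `ProcessPoolExecutor` instead).

```python
# Sketch of fanning page-level work out across a worker pool.
# `process_page` is a placeholder for a real OCR / computer-vision step.
from concurrent.futures import ThreadPoolExecutor

def process_page(page):
    # Placeholder transformation standing in for OCR/CV work on one page.
    return page.upper()

def process_all(pages, workers=4):
    # map() preserves input order even though workers run concurrently.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process_page, pages))
```

Because `pool.map` preserves ordering, downstream code can zip results back to their source pages without any bookkeeping.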
Posted 2 weeks ago
5.0 years
0 Lacs
coimbatore, tamil nadu, india
On-site
Role Description Role Proficiency: Performs tests in strict compliance independently guides other testers and assists test leads Outcomes Construct test scenarios based on customer user stories or requirements Construct systematic test cases from scenarios for applications based on customer user stories or requirements Execute systematic test cases from scenarios for applications based on customer user stories or requirements Ensure that new or revised components or systems perform to expectation. Ensure meeting of standards; including usability performance reliability or compatibility. Document Test results and report defects Facilitate changes in processes/practices/procedures based on lessons learned from the engagement Develop proficiency of other testers on the project Measures Of Outcomes Timely completion of all tasks # of requirement/user story ambiguities logged Requirements / User story coverage based on test cases/script # of test cases/script developed in comparison to the benchmarks # of test cases/script executed in comparison to the benchmarks # of valid defects Outputs Expected Requirements Management: Participate Seek Clarification Understand Review Domain Relevance Test feature / component with good understanding of the business problem being addressed for the client Conduct gap analysis between requirement fitment and technology stack using technology/domain expertise Reporting Reporting the test activities of a small team including multiple testers Estimate Estimate time effort and resource dependence for work performed Manage Knowledge Consume Contribute Test Design Development Execution Identify testable scenarios and create test scenario document Update RTM Obtain sign off on test scenarios Basis (3) above identify and create test cases and test data Smoke testing for system readiness check Execute test cases / scripts Identify log and track defects Retest Log in productivity data Skill Examples Ability to review user story / requirements to 
identify ambiguities Ability to design test cases / scripts as per user story / requirements Ability to apply techniques to design efficient test cases / script Ability to set up test data and execute tests Ability to identify anomalies and detail them Knowledge Examples Knowledge of Methodologies Knowledge of Tools Knowledge of Types of testing Knowledge of Testing Processes Knowledge of Testing Standards Additional Comments Job Title: Senior Software Developer – Python Testing Framework About the Role: We are seeking a highly skilled and motivated Senior Software Developer to join our dynamic team. The ideal candidate will have expertise in Python development, PyQt, and Optical Character Recognition (OCR) technologies. You will play a key role in enhancing the user test-writing workflow with a custom Python automated testing framework. Key Responsibilities: Enhance the custom Python-based test automation application with new functionality. Improve application UI and user workflow based on feedback from tool users. Optimize performance and accuracy of OCR functionality for various languages. Write clean, maintainable, and well-documented code following best practices. Required Qualifications: Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field. 5+ years of professional experience in software development with a strong focus on Python. Proven experience with Qt (PyQt or PySide) for developing desktop applications. Hands-on experience with OCR technologies and libraries. Strong understanding of image processing and computer vision concepts. Familiarity with version control systems (e.g., Git). Excellent problem-solving skills and attention to detail. Strong communication and collaboration abilities. Preferred Qualifications: Experience with the following Python libraries: PyQt, OpenCV, Pillow, PyInstaller. Experience with the following technologies: Tesseract OCR, ZeroMQ messaging. Skills: Cypress, API, BDD.
Posted 2 weeks ago
6.0 years
0 Lacs
Hyderabad
On-site
As a leading financial services and healthcare technology company based on revenue, SS&C is headquartered in Windsor, Connecticut, and has 27,000+ employees in 35 countries. Some 20,000 financial services and healthcare organizations, from the world's largest companies to small and mid-market firms, rely on SS&C for expertise, scale, and technology.

Job Description

Key Responsibilities:
- Lead the development of robust, high-performance web and generative AI applications.
- Build and deploy GenAI solutions leveraging Retrieval-Augmented Generation (RAG) pipelines for intelligent document querying, summarization, and semantic search.
- Extract, structure, and process data from complex documents (PDFs, images, scanned forms) using integrated OCR engines (e.g., Tesseract, PaddleOCR) and vision-language models.
- Develop RAG-based GenAI applications using tools such as LangChain and LlamaIndex, and work with vector databases (e.g., FAISS, Weaviate) for efficient embedding-based retrieval.
- Integrate OCR, LLMs, and vector search into business applications to automate the extraction, understanding, and processing of unstructured content.
- Design, train, and optimize both Small Language Models (SLMs) and Large Language Models (LLMs) for domain-specific applications, ensuring efficiency and high performance.
- Develop scalable training pipelines leveraging methods such as supervised fine-tuning, reinforcement learning from human feedback (RLHF), prompt tuning, parameter-efficient fine-tuning (e.g., LoRA, adapters), and knowledge distillation.
- Fine-tune, evaluate, and deploy language models using advanced techniques such as quantization, continual learning, and model distillation to meet evolving business requirements.
- Analyze and monitor model outputs for quality, bias, and safety, iterating to improve accuracy and alignment with user and business needs.
- Architect, design, and implement scalable and secure solutions aligned with business objectives.
- Mentor and guide a team of developers, fostering a culture of continuous learning and improvement.
- Design and maintain APIs, including RESTful and GraphQL interfaces, ensuring seamless data exchange with third-party services.
- Implement and maintain CI/CD pipelines, utilizing automation tools for seamless deployment.
- Collaborate with cross-functional teams, including DevOps, Security, Data Science, and Product Management.
- Optimize application performance by efficiently managing resources, implementing load balancing, and optimizing queries.
- Ensure compliance with industry security standards such as GDPR.
- Stay updated on emerging technologies, especially in Generative AI, machine learning, cloud computing, and microservices architecture.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 6+ years of hands-on experience in AI and Python development, with expertise in Django, Flask, or FastAPI.
- Ability to design and build end-to-end applications and API integrations.
- Proven experience with large language models (LLMs) and AI model development.
- Experience developing retrieval-augmented generation (RAG) systems.
- Experience with GenAI tools such as LangChain, LlamaIndex, and LangGraph, and with open-source vector DBs.
- Exposure to prompt engineering principles and techniques such as chain of thought, in-context learning, and tree of thought.
- Exposure to SLMs.
- Experience with supervised fine-tuning, RLHF, prompt tuning, parameter-efficient fine-tuning (e.g., LoRA, adapters), and knowledge distillation.
- Experience with relational databases such as MS SQL Server, PostgreSQL, and MySQL.
- Expertise in DevOps practices, including Docker, Kubernetes, and CI/CD tools such as GitHub Actions.
- Deep understanding of microservices architecture and distributed computing principles.
- Strong knowledge of security best practices in software development.
- Familiarity with data analytics and visualization tools such as Snowflake, Looker, Tableau, or Power BI.
- Excellent problem-solving skills and the ability to work independently and within a team.
- Strong communication and stakeholder management skills.

Unless explicitly requested or approached by SS&C Technologies, Inc. or any of its affiliated companies, the company will not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. SS&C Technologies is an Equal Employment Opportunity employer and does not discriminate against any applicant for employment or employee on the basis of race, color, religious creed, gender, age, marital status, sexual orientation, national origin, disability, veteran status, or any other classification protected by applicable discrimination laws.
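The embedding-based retrieval step at the heart of the RAG work described above reduces to a nearest-neighbour search over chunk embeddings: embed the query, rank stored chunks by similarity, and pass the top hits to the LLM as context. A minimal dependency-free sketch of that ranking step using cosine similarity; the 3-dimensional vectors and chunk texts are invented for illustration, and in production FAISS or Weaviate replaces the linear scan with an approximate nearest-neighbour index over real model embeddings:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, index, k=2):
    """Return the top-k chunk texts ranked by similarity to the query.

    `index` is a list of (embedding, chunk_text) pairs; this linear
    scan is what a vector DB accelerates at scale.
    """
    ranked = sorted(index, key=lambda item: cosine(query_vec, item[0]),
                    reverse=True)
    return [text for _, text in ranked[:k]]

# Toy 3-d "embeddings" standing in for real embedding-model output:
index = [
    ([0.9, 0.1, 0.0], "Invoices are archived for seven years."),
    ([0.0, 0.8, 0.2], "Quarterly reports are filed with the SEC."),
    ([0.8, 0.2, 0.1], "Archived invoices are stored encrypted."),
]
print(retrieve([1.0, 0.0, 0.0], index, k=2))
```

In a full pipeline the returned chunks are concatenated into the LLM prompt, which is what keeps the model's answers grounded in the retrieved documents rather than its parametric memory.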
Posted 2 weeks ago