Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
0 years
5 - 15 Lacs
Ahmedabad
On-site
Proficient in Python, Node.js (or Java), and React (preferred). Experience with AWS Services: S3, Lambda, DynamoDB, Bedrock, Textract, RDS, Fargate. Experience in LLM-based application development (LangChain, Bedrock, or OpenAI APIs). Strong in NLP and embeddings (via SageMaker or third-party APIs like Cohere, Hugging Face). Knowledge of vector databases (Pinecone, ChromaDB, OpenSearch, etc.). Familiar with containerization (Docker, ECS/Fargate). Excellent understanding of REST API design and security. Experience handling PDF/image-based document classification. Good SQL and NoSQL skills (MS SQL, MongoDB). Preferred Qualifications: AWS Certified – especially in AI/ML or Developer Associate. Job Types: Full-time, Fresher, Internship Pay: ₹554,144.65 - ₹1,500,000.00 per year Schedule: Day shift Morning shift Supplemental Pay: Performance bonus Ability to commute/relocate: Ahmedabad, Gujarat: Reliably commute or planning to relocate before starting work (Preferred) Work Location: In person
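As a rough illustration of the LLM-based document classification work this listing describes, the sketch below sends OCR-extracted text to an LLM for labeling. It is a minimal example, not part of the posting: the model name, label set, and `extracted_text` input are assumptions, and the same pattern could target Bedrock or LangChain instead of the OpenAI client.

```python
# Minimal sketch: classify an OCR-extracted document with an LLM (assumes openai>=1.0 client).
# The model id and label set are placeholders, not requirements from the posting.
from openai import OpenAI

LABELS = ["invoice", "purchase_order", "contract", "other"]  # hypothetical label set
client = OpenAI()  # reads OPENAI_API_KEY from the environment

def classify_document(extracted_text: str) -> str:
    """Ask the model to pick exactly one label for the document text."""
    prompt = (
        "Classify the following document into one of these labels: "
        f"{', '.join(LABELS)}.\nRespond with the label only.\n\n{extracted_text[:4000]}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model id
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    print(classify_document("Invoice #1041\nTotal due: INR 1,20,000 ..."))
```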
Posted 18 hours ago
0.0 years
0 Lacs
Ahmedabad, Gujarat
On-site
Proficient in Python, Node.js (or Java), and React (preferred). Experience with AWS Services: S3, Lambda, DynamoDB, Bedrock, Textract, RDS, Fargate. Experience in LLM-based application development (LangChain, Bedrock, or OpenAI APIs). Strong in NLP and embeddings (via SageMaker or third-party APIs like Cohere, Hugging Face). Knowledge of vector databases (Pinecone, ChromaDB, OpenSearch, etc.). Familiar with containerization (Docker, ECS/Fargate). Excellent understanding of REST API design and security. Experience handling PDF/image-based document classification. Good SQL and NoSQL skills (MS SQL, MongoDB). Preferred Qualifications: AWS Certified – especially in AI/ML or Developer Associate. Job Types: Full-time, Fresher, Internship Pay: ₹554,144.65 - ₹1,500,000.00 per year Schedule: Day shift Morning shift Supplemental Pay: Performance bonus Ability to commute/relocate: Ahmedabad, Gujarat: Reliably commute or planning to relocate before starting work (Preferred) Work Location: In person
Posted 20 hours ago
3.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Description About Ness Digital Engineering: Ness Digital Engineering is a global provider of software engineering and digital transformation services. We help enterprises accelerate innovation and drive business outcomes using cutting-edge technologies and agile methodologies. Join our dynamic team and be part of shaping the future of automation and intelligent solutions. Position Overview We are seeking a highly skilled UIPath Engineer with deep expertise in OCR (Optical Character Recognition) and document processing using UiPath. The ideal candidate will have a strong background in designing, developing, and deploying end-to-end automation solutions focused on intelligent document understanding and processing. This role requires a solid understanding of RPA frameworks, best practices, and integration with OCR engines to deliver scalable, high-quality automation. Key Responsibilities Design, develop, test, and deploy RPA workflows leveraging UiPath to automate document processing and OCR tasks. Implement intelligent document processing solutions including data extraction, classification, validation, and exception handling. Collaborate with business analysts and stakeholders to gather and analyze requirements for document automation projects. Integrate UiPath workflows with OCR technologies (e.g., UiPath Document Understanding, ABBYY, Google Vision, etc.) and other third-party tools. Optimize automation processes for efficiency, accuracy, and scalability. Troubleshoot, debug, and resolve issues in RPA bots and OCR pipelines. Develop reusable components, libraries, and frameworks to support rapid development and deployment. Maintain documentation of design, development, and operational procedures. Stay updated with the latest trends and advancements in RPA, OCR, and AI-based document processing technologies. Qualifications Bachelor’s degree in Computer Science, Information Technology, Engineering, or related field. Proven experience (3+ years) in RPA development using UiPath with a focus on OCR and document processing. Strong understanding of OCR technologies and intelligent document processing frameworks. Hands-on experience with UiPath Document Understanding or equivalent OCR tools. Proficient in designing workflows, activities, and components in UiPath Studio and Orchestrator. Experience with scripting languages such as Python, VB.NET, or C# is a plus. Familiarity with AI/ML concepts applied to document classification and data extraction. Strong problem-solving skills and attention to detail. Excellent communication and collaboration skills. Ability to work independently and in a team environment. Preferred Skills Experience integrating UiPath with cloud OCR services (Azure Cognitive Services, AWS Textract, Google Cloud Vision). Knowledge of business process management and process optimization. Understanding of enterprise IT environments, security, and compliance standards. Exposure to Agentic AI , autopilots and Intelligent document processing enhancements What We Offer Competitive compensation and benefits package. Opportunity to work with cutting-edge automation technologies. Collaborative and innovative work culture. Professional development and career growth opportunities. Show more Show less
Posted 3 days ago
3.0 years
7 - 10 Lacs
Hyderābād
On-site
About Ness Digital Engineering: Ness Digital Engineering is a global provider of software engineering and digital transformation services. We help enterprises accelerate innovation and drive business outcomes using cutting-edge technologies and agile methodologies. Join our dynamic team and be part of shaping the future of automation and intelligent solutions. Position Overview: We are seeking a highly skilled UIPath Engineer with deep expertise in OCR (Optical Character Recognition) and document processing using UiPath. The ideal candidate will have a strong background in designing, developing, and deploying end-to-end automation solutions focused on intelligent document understanding and processing. This role requires a solid understanding of RPA frameworks, best practices, and integration with OCR engines to deliver scalable, high-quality automation. Key Responsibilities: Design, develop, test, and deploy RPA workflows leveraging UiPath to automate document processing and OCR tasks. Implement intelligent document processing solutions including data extraction, classification, validation, and exception handling. Collaborate with business analysts and stakeholders to gather and analyze requirements for document automation projects. Integrate UiPath workflows with OCR technologies (e.g., UiPath Document Understanding, ABBYY, Google Vision, etc.) and other third-party tools. Optimize automation processes for efficiency, accuracy, and scalability. Troubleshoot, debug, and resolve issues in RPA bots and OCR pipelines. Develop reusable components, libraries, and frameworks to support rapid development and deployment. Maintain documentation of design, development, and operational procedures. Stay updated with the latest trends and advancements in RPA, OCR, and AI-based document processing technologies. Qualifications: Bachelor’s degree in Computer Science, Information Technology, Engineering, or related field. Proven experience (3+ years) in RPA development using UiPath with a focus on OCR and document processing. Strong understanding of OCR technologies and intelligent document processing frameworks. Hands-on experience with UiPath Document Understanding or equivalent OCR tools. Proficient in designing workflows, activities, and components in UiPath Studio and Orchestrator. Experience with scripting languages such as Python, VB.NET, or C# is a plus. Familiarity with AI/ML concepts applied to document classification and data extraction. Strong problem-solving skills and attention to detail. Excellent communication and collaboration skills. Ability to work independently and in a team environment. Preferred Skills: Experience integrating UiPath with cloud OCR services (Azure Cognitive Services, AWS Textract, Google Cloud Vision). Knowledge of business process management and process optimization. Understanding of enterprise IT environments, security, and compliance standards. Exposure to Agentic AI , autopilots and Intelligent document processing enhancements What We Offer: Competitive compensation and benefits package. Opportunity to work with cutting-edge automation technologies. Collaborative and innovative work culture. Professional development and career growth opportunities.
Posted 3 days ago
3.0 years
0 Lacs
India
Remote
AWS Data Engineer Location: Remote (India) Experience: 3+ Years Employment Type: Full-Time About the Role: We are seeking a talented AWS Data Engineer with at least 3 years of hands-on experience in building and managing data pipelines using AWS services. This role involves working with large-scale data, integrating multiple data sources (including sensor/IoT data), and enabling efficient, secure, and analytics-ready solutions. Experience in the energy industry or working with time-series/sensor data is a strong plus. Key Responsibilities: Build and maintain scalable ETL/ELT data pipelines using AWS Glue, Redshift, Lambda, EMR, S3, and Athena Process and integrate structured and unstructured data, including sensor/IoT and real-time streams Optimize pipeline performance and ensure reliability and fault tolerance Collaborate with cross-functional teams including data scientists and analysts Perform data transformations using Python, Pandas, and SQL Maintain data integrity, quality, and security across the platform Use Terraform and CI/CD tools (e.g., Azure DevOps) for infrastructure and deployment automation Support and monitor pipeline workflows, troubleshoot issues, and implement fixes Contribute to the adoption of emerging tools like AWS Bedrock, Textract, Rekognition, and GenAI solutions Required Skills and Qualifications: Bachelor’s or Master’s degree in Computer Science, Information Technology, or related field 3+ years of experience in data engineering using AWS Strong skills in: AWS Glue, Redshift, S3, Lambda, EMR, Athena Python, Pandas, SQL RDS, Postgres, SAP HANA Solid understanding of data modeling, warehousing, and pipeline orchestration Experience with version control (Git) and infrastructure as code (Terraform) Preferred Skills: Experience working with energy sector data or IoT/sensor-based data Exposure to machine learning tools and frameworks (e.g., SageMaker, TensorFlow, Scikit-learn) Familiarity with big data technologies like Apache Spark, Kafka Experience with data visualization tools (Tableau, Power BI, AWS QuickSight) Awareness of data governance and catalog tools such as AWS Data Quality, Collibra, and AWS Databrew AWS Certifications (Data Analytics, Solutions Architect)
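For context on the Glue-based ETL work this role describes, here is a minimal PySpark Glue job skeleton. The database, table, and bucket names are hypothetical; a real job would add schema handling, partitioning, and error handling.

```python
# Minimal AWS Glue PySpark job sketch: read a catalog table, drop incomplete rows, write Parquet to S3.
# The database, table, and bucket names below are placeholders.
import sys
from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw sensor readings registered in the Glue Data Catalog (hypothetical names).
raw = glue_context.create_dynamic_frame.from_catalog(
    database="energy_raw", table_name="sensor_readings"
)

# Basic cleanup with Spark: drop rows without a reading value.
cleaned_df = raw.toDF().dropna(subset=["reading_value"])

# Write the cleaned data back to S3 as Parquet for downstream Athena/Redshift use.
glue_context.write_dynamic_frame.from_options(
    frame=DynamicFrame.fromDF(cleaned_df, glue_context, "cleaned"),
    connection_type="s3",
    connection_options={"path": "s3://example-curated-bucket/sensor_readings/"},
    format="parquet",
)
job.commit()
```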
Posted 3 days ago
3.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Description About Ness Digital Engineering: Ness Digital Engineering is a global provider of software engineering and digital transformation services. We help enterprises accelerate innovation and drive business outcomes using cutting-edge technologies and agile methodologies. Join our dynamic team and be part of shaping the future of automation and intelligent solutions. Position Overview We are seeking a highly skilled UIPath Engineer with deep expertise in OCR (Optical Character Recognition) and document processing using UiPath. The ideal candidate will have a strong background in designing, developing, and deploying end-to-end automation solutions focused on intelligent document understanding and processing. This role requires a solid understanding of RPA frameworks, best practices, and integration with OCR engines to deliver scalable, high-quality automation. Key Responsibilities Design, develop, test, and deploy RPA workflows leveraging UiPath to automate document processing and OCR tasks. Implement intelligent document processing solutions including data extraction, classification, validation, and exception handling. Collaborate with business analysts and stakeholders to gather and analyze requirements for document automation projects. Integrate UiPath workflows with OCR technologies (e.g., UiPath Document Understanding, ABBYY, Google Vision, etc.) and other third-party tools. Optimize automation processes for efficiency, accuracy, and scalability. Troubleshoot, debug, and resolve issues in RPA bots and OCR pipelines. Develop reusable components, libraries, and frameworks to support rapid development and deployment. Maintain documentation of design, development, and operational procedures. Stay updated with the latest trends and advancements in RPA, OCR, and AI-based document processing technologies. Qualifications Bachelor’s degree in Computer Science, Information Technology, Engineering, or related field. Proven experience (3+ years) in RPA development using UiPath with a focus on OCR and document processing. Strong understanding of OCR technologies and intelligent document processing frameworks. Hands-on experience with UiPath Document Understanding or equivalent OCR tools. Proficient in designing workflows, activities, and components in UiPath Studio and Orchestrator. Experience with scripting languages such as Python, VB.NET, or C# is a plus. Familiarity with AI/ML concepts applied to document classification and data extraction. Strong problem-solving skills and attention to detail. Excellent communication and collaboration skills. Ability to work independently and in a team environment. Preferred Skills Experience integrating UiPath with cloud OCR services (Azure Cognitive Services, AWS Textract, Google Cloud Vision). Knowledge of business process management and process optimization. Understanding of enterprise IT environments, security, and compliance standards. Exposure to Agentic AI , autopilots and Intelligent document processing enhancements What We Offer Competitive compensation and benefits package. Opportunity to work with cutting-edge automation technologies. Collaborative and innovative work culture. Professional development and career growth opportunities. Show more Show less
Posted 4 days ago
5.0 years
0 Lacs
New Delhi, Delhi, India
On-site
About the Role: We are looking for a hands-on Data Engineer to join our team and take full ownership of scraping pipelines and data quality. You’ll be working on data from 60+ websites involving PDFs, processed via OCR and stored in MySQL/PostgreSQL. You’ll build robust, self-healing pipelines and fix common data issues (missing fields, duplication, formatting errors). Responsibilities: Own and optimize Airflow scraping DAGs for 60+ sites Implement validation checks, retry logic, and error alerts Build pre-processing routines to clean OCR'd text Create data normalization and deduplication workflows Maintain data integrity across MySQL and PostgreSQL Collaborate with ML team for downstream AI use cases Requirements: 2–5 years of experience in Python-based data engineering Experience with Airflow, Pandas, OCR (Tesseract or AWS Textract) Solid SQL and schema design skills (MySQL/PostgreSQL) Familiarity with CSV processing and data pipelines Bonus: Experience with scraping using Scrapy or Selenium Location: Delhi (in-office only) Salary Range: 50–80k/month
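To illustrate the retry-aware scraping DAG and OCR clean-up this role owns, here is a minimal Airflow sketch. The site list, cleaning rules, and schedule are hypothetical placeholders, not the company's actual pipeline.

```python
# Minimal Airflow 2.x sketch: one retry-aware scrape-and-clean task per site.
# Site names, schedule, and cleaning rules are illustrative placeholders.
import re
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

SITES = ["courts.example.org", "registry.example.in"]  # hypothetical subset of the 60+ sites

def clean_ocr_text(text: str) -> str:
    """Fix common OCR artifacts: broken hyphenation and stray whitespace."""
    text = re.sub(r"-\s*\n\s*", "", text)   # re-join words split across lines
    return re.sub(r"\s+", " ", text).strip()

def scrape_and_clean(site: str) -> None:
    raw = f"Example OCR output from {site} ..."  # placeholder for the real scrape + OCR step
    print(clean_ocr_text(raw))

with DAG(
    dag_id="scraping_pipeline_example",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={"retries": 3, "retry_delay": timedelta(minutes=10)},
) as dag:
    for site in SITES:
        PythonOperator(
            task_id=f"scrape_{site.replace('.', '_')}",
            python_callable=scrape_and_clean,
            op_kwargs={"site": site},
        )
```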
Posted 4 days ago
15.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Overview We don’t simply hire employees. We invest in them. When you work at Chatham, we empower you — offering professional development opportunities to help you grow in your career, no matter if you've been here for five months or 15 years. Chatham has worked hard to create a distinct work environment that values people, teamwork, integrity, and client service. You will have immediate opportunities to partner with talented subject matter experts, work on complex projects, and contribute to the value Chatham delivers every day. As a Manager of the Loan Data Extraction team specializing in institutional real estate clients, your primary responsibility will be to manage the team who will review and extract data from various types of real estate source documents, such as loan agreements, promissory notes, and guarantees, as a pivotal process in modeling debt portfolios for our clients. You will use your expertise to ensure data is complete, accurate, and timely. You should have a background in real estate investment or data management. You should also have exceptional attention to detail, with the ability to identify and resolve discrepancies or errors in data as well as strong analytical skills with the ability to review and extract data from various types of real estate source documents. You will report to Managing Director – India. In This Role You Will Lead the Loan Data Extraction team who will review and extract information from various types of real estate source documents, such as loan agreements and promissory notes, to model loan cashflows, extension details, and prepayment optionality. Collaborate with internal team members and other stakeholders to ensure that projects and deliverables are completed on time and to the satisfaction of clients. Communicate effectively with internal team members and other stakeholders, using strong verbal and written communication skills to convey complex ideas and information associated with the data extraction and quality assurance process. Complete internal training modules to gain critical skills and knowledge needed to complete extraction responsibilities efficiently and effectively. Create and monitor Quality metrics and ensure employee feedback is objective based on SMART goals. Create and maintain updated documentation: Standard Operating Procedures, Process Maps, Defect Definition, and Training Materials. Focus on process improvement and automation initiatives. Your Impact As Manager, you will oversee the Loan Data Extraction process for a client or multiple clients, ensuring that institutional real estate investors receive high-quality, accurate, and timely data solutions. Your leadership will be critical in managing the team’s performance, driving improvements in processes, and ensuring that all deliverables meet the high standards expected by our clients. Contributors To Your Success Post Graduate degree in Commerce, Accounting, Finance, or related fields. 10+ years of experience in financial document processing, credit analysis, loan operations, or a similar field. Proven experience leading a team and managing extraction or operations projects. Strong understanding of loan structures, credit agreements, and key financial covenants. Familiarity with AI/ML tools used for data extraction (e.g., AWS Textract, Google Document AI, Kira, Hyperscience) is a strong advantage. Leadership: Ability to lead and mentor a team while ensuring quality and adherence to processes. 
Attention to Detail – Precision is critical when extracting loan terms, interest rates, borrower details, and covenants to avoid costly errors. Understanding of Loan Documents – Familiarity with credit agreements, promissory notes, and term sheets helps in accurately identifying and interpreting relevant data. Data Entry Speed and Accuracy – Efficiently inputting data into systems without mistakes ensures smooth downstream processing and compliance. Critical Thinking & Pattern Recognition – Spotting inconsistencies, missing information, or potential red flags requires an analytical mindset. Effective communication skills – Ability to convey complex ideas and information (verbally or in writing) to internal team members and other stakeholders. Real estate familiarity – Experience working with institutional real estate data or clients is a plus. About Chatham Financial Chatham Financial is the largest independent financial risk management advisory and technology firm. A leader in debt and derivative solutions, Chatham provides clients with access to in-depth knowledge, innovative tools, and an incomparable team of over 750 employees to help mitigate risks associated with interest rate, foreign currency, and commodity exposures. Founded in 1991, Chatham serves more than 3,500 companies across a wide range of industries — handling over $1.5 trillion in transaction volume annually and helping businesses maximize their value in the capital markets, every day. To learn more, visit chathamfinancial.com. Chatham Financial is an equal opportunity employer. #LA-onsite #LA Show more Show less
Posted 5 days ago
3.0 years
0 Lacs
Kochi, Kerala, India
Remote
AWS Data Engineer Location: Remote (India) Experience: 3+ Years Employment Type: Full-Time About the Role: We are seeking a talented AWS Data Engineer with at least 3 years of hands-on experience in building and managing data pipelines using AWS services. This role involves working with large-scale data, integrating multiple data sources (including sensor/IoT data), and enabling efficient, secure, and analytics-ready solutions. Experience in the energy industry or working with time-series/sensor data is a strong plus. Key Responsibilities: Build and maintain scalable ETL/ELT data pipelines using AWS Glue, Redshift, Lambda, EMR, S3, and Athena Process and integrate structured and unstructured data, including sensor/IoT and real-time streams Optimize pipeline performance and ensure reliability and fault tolerance Collaborate with cross-functional teams including data scientists and analysts Perform data transformations using Python, Pandas, and SQL Maintain data integrity, quality, and security across the platform Use Terraform and CI/CD tools (e.g., Azure DevOps) for infrastructure and deployment automation Support and monitor pipeline workflows, troubleshoot issues, and implement fixes Contribute to the adoption of emerging tools like AWS Bedrock, Textract, Rekognition, and GenAI solutions Required Skills and Qualifications: Bachelor’s or Master’s degree in Computer Science, Information Technology, or related field 3+ years of experience in data engineering using AWS Strong skills in: AWS Glue, Redshift, S3, Lambda, EMR, Athena Python, Pandas, SQL RDS, Postgres, SAP HANA Solid understanding of data modeling, warehousing, and pipeline orchestration Experience with version control (Git) and infrastructure as code (Terraform) Preferred Skills: Experience working with energy sector data or IoT/sensor-based data Exposure to machine learning tools and frameworks (e.g., SageMaker, TensorFlow, Scikit-learn) Familiarity with big data technologies like Apache Spark, Kafka Experience with data visualization tools (Tableau, Power BI, AWS QuickSight) Awareness of data governance and catalog tools such as AWS Data Quality, Collibra, and AWS Databrew AWS Certifications (Data Analytics, Solutions Architect)
Posted 5 days ago
6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
POSITION / TITLE: Data Science Lead Location: Offshore – Hyderabad/Bangalore/Pune Who are we looking for? Looking for individuals with 6+ years of experience implementing and managing data science projects. Working knowledge of machine learning and deep learning based client projects, MVPs, and POCs. Should have expert-level experience with machine learning frameworks like scikit-learn, TensorFlow, Keras and deep learning architectures like RNNs and LSTM. Should have worked with cognitive services from major cloud platforms like AWS (Textract, Comprehend) or Azure Cognitive Services etc. and have a working knowledge of SQL and NoSQL databases and microservices. Should be adept at Python scripting. Experience in NLP and Text Analytics is preferred Responsibilities Technical Skills – Must have: Knowledge of Natural Language Processing (NLP) techniques and frameworks like spaCy, NLTK, etc. and good knowledge of Text Analytics Should have strong understanding & hands-on experience with machine learning frameworks like scikit-learn, TensorFlow, Keras and deep learning architectures like RNNs, LSTM, BERT Should have worked with cognitive services from major cloud platforms like AWS and have a working knowledge of SQL and NoSQL databases. Ability to create data and ML pipelines for more efficient and repeatable data science projects using MLOps principles Keep abreast of new tools, algorithms and techniques in machine learning and work to implement them in the organization Strong understanding of evaluation and monitoring metrics for machine learning projects Strong understanding of containerization using Docker and Kubernetes to get the models into production Ability to translate complex machine learning problem statements into specific deliverables and requirements Adept at Python scripting Technical Skills – Good To Have Knowledge of distributed computing frameworks and cloud ML frameworks including AWS. Experience in natural language processing, computer vision, or deep learning. Certifications or courses in data science, analytics, or related fields. Should exhibit diligence and meticulousness in working with data Other Skills We'd Appreciate 4+ years of experience in Data Science and Machine Learning techniques Proven track record of getting ML models into production Hands-on experience with writing ML models with Python. Prior experience in ML platforms and tools such as Dataiku, Databricks, etc. would be a plus Education Qualification Bachelor's degree in Computer Science, Information Technology, or related field (Master's degree preferred). Process Skills General SDLC processes Understanding of utilizing Agile and Scrum software development methodologies Skill in gathering and documenting user requirements and writing technical specifications. Behavioral Skills Good attitude and quick learner. Well-developed design, analytical & problem-solving skills Strong oral and written communication skills Excellent team player, able to work with virtual teams. Self-motivated and capable of working independently with minimal management supervision. Certification Having Machine Learning or AI certifications would be an added advantage.
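As a small, generic illustration of the scikit-learn text-analytics work referenced above (not a deliverable from this posting), here is a TF-IDF plus logistic-regression pipeline; the sample documents and labels are made up.

```python
# Minimal scikit-learn text classification sketch: TF-IDF features + logistic regression.
# The documents and labels are toy placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

docs = [
    "invoice total amount due payment",
    "lease agreement tenant landlord premises",
    "payment overdue reminder invoice balance",
    "rental term renewal landlord notice",
]
labels = ["invoice", "lease", "invoice", "lease"]

X_train, X_test, y_train, y_test = train_test_split(docs, labels, test_size=0.5, random_state=42)

model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```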
Posted 5 days ago
12.0 years
5 - 6 Lacs
Indore
On-site
Indore, Madhya Pradesh, India Qualification : BTech degree in computer science, engineering or related field of study or 12+ years of related work experience 7+ years design & implementation experience with large scale data centric distributed applications Professional experience architecting, operating cloud-based solutions with good understanding of core disciplines like compute, networking, storage, security, databases etc. Good understanding of data engineering concepts like storage, governance, cataloging, data quality, data modeling etc. Good understanding about various architecture patterns like data lake, data lake house, data mesh etc. Good understanding of Data Warehousing concepts, hands-on experience working with tools like Hive, Redshift, Snowflake, Teradata etc. Experience migrating or transforming legacy customer solutions to the cloud. Experience working with services like AWS EMR, Glue, DMS, Kinesis, RDS, Redshift, Dynamo DB, Document DB, SNS, SQS, Lambda, EKS, Data Zone etc. Thorough understanding of Big Data ecosystem technologies like Hadoop, Spark, Hive, HBase etc. and other competent tools and technologies Understanding in designing analytical solutions leveraging AWS cognitive services like Textract, Comprehend, Rekognition etc. in combination with Sagemaker is good to have. Experience working with modern development workflows, such as git, continuous integration/continuous deployment pipelines, static code analysis tooling, infrastructure-as-code, and more. Experience with a programming or scripting language – Python/Java/Scala AWS Professional/Specialty certification or relevant cloud expertise Skills Required : AWS, Big Data, Spark, Technical Architecture Role : Drive innovation within Data Engineering domain by designing reusable and reliable accelerators, blueprints, and libraries. Capable of leading a technology team, inculcating innovative mindset and enable fast paced deliveries. Able to adapt to new technologies, learn quickly, and manage high ambiguity. Ability to work with business stakeholders, attend/drive various architectural, design and status calls with multiple stakeholders. Exhibit good presentation skills with a high degree of comfort speaking with executives, IT Management, and developers. Drive technology/software sales or pre-sales consulting discussions Ensure end-to-end ownership of all tasks being aligned. Ensure high quality software development with complete documentation and traceability. Fulfil organizational responsibilities (sharing knowledge & experience with other teams / groups) Conduct technical training(s)/session(s), write whitepapers/ case studies / blogs etc. Experience : 10 to 18 years Job Reference Number : 12895
Posted 6 days ago
2.0 - 3.0 years
0 Lacs
Noida, Uttar Pradesh, India
Remote
Job Description – Digital Transformation and Automation Lead About the Role - Drive the digital backbone of a growing commercial real-estate group. - You’ll prototype, test and ship automations that save our teams > 10 hours/week in the first 90 days Total Experience - 2-3 years Availability ~40 hrs/week, 4 days on-site, 1 day remote Core Responsibilities 1. Systems Audit & Consolidation – unify Google Workspace tenants, rationalise shared drives. 2. Database & CRM Build-out – design, deploy, and maintain occupant tracker and a lightweight CRM; migrate legacy data. 3. Automation & Integration – link CRM, Google Sheets, and Tally using Apps Script/Zoho Flow/Zapier. 4. Process Documentation – own the internal wiki; keep SOPs and RACI charts current. 5. Dashboards & Reporting – craft Looker Studio boards for collections, projects, facility KPIs. 6. User Training & Support – deliver monthly clinics; teach teams how to use G Suite, ChatGPT to improve productivity 7. Security & Compliance – enforce 2FA, backup policies, basic network hygiene. 8. Vendor Co-ordination – liaise with Zoho, Tally consultants, ISP/MSP vendors; manage small capex items. Required Skills & Experience (by domain; ★ = core, • = bonus): Workspace & Security – ★ LAN/Wi-Fi basics & device hardening. Automation & Low-Code – ★ Apps Script or Zoho Creator/Flow; REST APIs & webhooks; ★ workflow bridges (Zapier / Make / n8n); • Cursor, Loveable, or similar AI-driven low-code tools. Data Extraction & Integrations – ★ Document AI / OCR stack for PDF leases (Google DocAI, Textract, etc.); ★ Tally Prime ODBC/API. CRM & Customer-360 – ★ end-to-end rollout of a CRM (Zoho/Freshsales) covering migration and custom modules; • help-desk tooling (Zoho Desk, Freshdesk). Analytics & Reporting – ★ advanced Google Sheets (ARRAYFORMULA, QUERY, IMPORTRANGE) and Looker Studio dashboards; • data-warehouse concepts (BigQuery/Redshift) for a unified customer view. Programming & Scripting – ★ Python or Node.js for lightweight cloud functions / ETL; ★ prompt engineering & Gen-AI APIs (OpenAI, Claude) for copilots. Project & Knowledge Management – • Trello (or equivalent Kanban); ★ Notion / Google Sites for wiki & SOPs. Soft Skills – ★ clear documentation & bilingual (English/Hindi) training; stakeholder comms. Compensation - 40–50k per month
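As a hedged sketch of the "Document AI / OCR stack for PDF leases" item above, the snippet below pulls text from a digitally generated lease PDF and grabs a couple of fields with regular expressions. The file path and field patterns are assumptions; scanned leases would need an OCR step (e.g., Document AI or Textract) before this kind of parsing.

```python
# Minimal sketch: extract a few lease fields from a text-based PDF with pdfplumber + regex.
# The file name and patterns are illustrative; scanned PDFs need OCR first.
import re
import pdfplumber

FIELD_PATTERNS = {
    "monthly_rent": re.compile(r"monthly rent[^0-9]{0,20}([\d,]+)", re.IGNORECASE),
    "lease_start": re.compile(r"commencing on\s+([0-9]{1,2}\s+\w+\s+[0-9]{4})", re.IGNORECASE),
}

def extract_lease_fields(path: str) -> dict:
    with pdfplumber.open(path) as pdf:
        text = "\n".join(page.extract_text() or "" for page in pdf.pages)
    fields = {}
    for name, pattern in FIELD_PATTERNS.items():
        match = pattern.search(text)
        fields[name] = match.group(1) if match else None
    return fields

if __name__ == "__main__":
    print(extract_lease_fields("sample_lease.pdf"))  # hypothetical file
```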
Posted 6 days ago
0.0 - 18.0 years
0 Lacs
Indore, Madhya Pradesh
On-site
Indore, Madhya Pradesh, India Qualification : BTech degree in computer science, engineering or related field of study or 12+ years of related work experience 7+ years design & implementation experience with large scale data centric distributed applications Professional experience architecting, operating cloud-based solutions with good understanding of core disciplines like compute, networking, storage, security, databases etc. Good understanding of data engineering concepts like storage, governance, cataloging, data quality, data modeling etc. Good understanding about various architecture patterns like data lake, data lake house, data mesh etc. Good understanding of Data Warehousing concepts, hands-on experience working with tools like Hive, Redshift, Snowflake, Teradata etc. Experience migrating or transforming legacy customer solutions to the cloud. Experience working with services like AWS EMR, Glue, DMS, Kinesis, RDS, Redshift, Dynamo DB, Document DB, SNS, SQS, Lambda, EKS, Data Zone etc. Thorough understanding of Big Data ecosystem technologies like Hadoop, Spark, Hive, HBase etc. and other competent tools and technologies Understanding in designing analytical solutions leveraging AWS cognitive services like Textract, Comprehend, Rekognition etc. in combination with Sagemaker is good to have. Experience working with modern development workflows, such as git, continuous integration/continuous deployment pipelines, static code analysis tooling, infrastructure-as-code, and more. Experience with a programming or scripting language – Python/Java/Scala AWS Professional/Specialty certification or relevant cloud expertise Skills Required : AWS, Big Data, Spark, Technical Architecture Role : Drive innovation within Data Engineering domain by designing reusable and reliable accelerators, blueprints, and libraries. Capable of leading a technology team, inculcating innovative mindset and enable fast paced deliveries. Able to adapt to new technologies, learn quickly, and manage high ambiguity. Ability to work with business stakeholders, attend/drive various architectural, design and status calls with multiple stakeholders. Exhibit good presentation skills with a high degree of comfort speaking with executives, IT Management, and developers. Drive technology/software sales or pre-sales consulting discussions Ensure end-to-end ownership of all tasks being aligned. Ensure high quality software development with complete documentation and traceability. Fulfil organizational responsibilities (sharing knowledge & experience with other teams / groups) Conduct technical training(s)/session(s), write whitepapers/ case studies / blogs etc. Experience : 10 to 18 years Job Reference Number : 12895
Posted 1 week ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About Us Yubi stands for ubiquitous. But Yubi will also stand for transparency, collaboration, and the power of possibility. From being a disruptor in India’s debt market to marching towards global corporate markets from one product to one holistic product suite with seven products Yubi is the place to unleash potential. Freedom, not fear. Avenues, not roadblocks. Opportunity, not obstacles. About Yubi Yubi, formerly known as CredAvenue, is re-defining global debt markets by freeing the flow of finance between borrowers, lenders, and investors. We are the world's possibility platform for the discovery, investment, fulfillment, and collection of any debt solution. At Yubi, opportunities are plenty and we equip you with tools to seize it. In March 2022, we became India's fastest fintech and most impactful startup to join the unicorn club with a Series B fundraising round of $137 million. In 2020, we began our journey with a vision of transforming and deepening the global institutional debt market through technology. Our two-sided debt marketplace helps institutional and HNI investors find the widest network of corporate borrowers and debt products on one side and helps corporates to discover investors and access debt capital efficiently on the other side. Switching between platforms is easy, which means investors can lend, invest and trade bonds - all in one place. Our platforms shake up the traditional debt ecosystem and offer new ways of digital finance. Yubi Credit Marketplace - With the largest selection of lenders on one platform, our credit marketplace helps enterprises partner with lenders of their choice for any capital requirements. Yubi Invest - Fixed income securities platform for wealth managers & financial advisors to channel client investments in fixed income Financial Services Platform - Designed for financial institutions to manage co-lending partnerships & asset-based securitization Spocto - Debt recovery & risk mitigation platform Accumn - Dedicated SaaS solutions platform powered by Decision-grade data, Analytics, Pattern Identifications, Early Warning Signals and Predictions to Lenders, Investors and Business Enterprises So far, we have onboarded more than 17000 enterprises, 6200 investors, and lenders and facilitated debt volumes of over INR 1,40,000 crore. Backed by marquee investors like Insight Partners, B Capital Group, Dragoneer, Sequoia Capital, LightSpeed and Lightrock, we are the only-of-its-kind debt platform globally, revolutionizing the segment. At Yubi, People are at the core of the business and our most valuable assets. Yubi is constantly growing, with 1000+ like-minded individuals today who are changing the way people perceive debt. We are a fun bunch who are highly motivated and driven to create a purposeful impact. Come join the club to be a part of our epic growth story. Requirements Key Responsibilities: Lead and mentor a dynamic Data Science team in developing scalable, reusable tools and capabilities to advance machine learning models, specializing in computer vision, natural language processing, API development and Product building. Drive innovative solutions for complex CV-NLP challenges, including tasks like image classification, data extraction, text classification, and summarization, leveraging a diverse set of data inputs such as images, documents, and text. 
Collaborate with cross-functional teams, including DevOps and Data Engineering, to design and implement efficient ML pipelines that facilitate seamless model integration and deployment in production environments. Spearhead the optimization of the model development lifecycle, focusing on scalability for training and production scoring to manage significant data volumes and user traffic. Implement cutting-edge technologies and techniques to enhance model training throughput and response times. Required Experience & Expertise: 3+ years of experience in developing computer vision models and applications. Extensive knowledge and experience in Data Science and Machine Learning techniques, with a proven track record in leading and executing complex projects. Deep understanding of the entire ML model development lifecycle, including design, development, training, testing/evaluation, and deployment, with the ability to guide best practices. Expertise in writing high-quality, reusable code for various stages of model development, including training, testing, and deployment. Advanced proficiency in Python programming, with extensive experience in ML frameworks such as Scikit-learn, TensorFlow, and Keras and API development frameworks such as Django, Fast API. Demonstrated success in overcoming OCR challenges using advanced methodologies and libraries like Tesseract, Keras-OCR, EasyOCR, etc. Proven experience in architecting reusable APIs to integrate OCR capabilities across diverse applications and use cases. Proficiency with public cloud OCR services like AWS Textract, GCP Vision, and Document AI. History of integrating OCR solutions into production systems for efficient text extraction from various media, including images and PDFs. Comprehensive understanding of convolutional neural networks (CNNs) and hands-on experience with deep learning models, such as YOLO. Strong capability to prototype, evaluate, and implement state-of-the-art ML advancements, particularly in OCR and CV-NLP. Extensive experience in NLP tasks, such as Named Entity Recognition (NER), text classification, and on finetuning of Large Language Models (LLMs). This senior role is tailored for visionary professionals eager to push the boundaries of CV-NLP and drive impactful data-driven innovations using both well-established methods and the latest technological advancements. Benefits We are committed to creating a diverse environment and are proud to be an equal-opportunity employer. All qualified applicants receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, or age. Show more Show less
Posted 1 week ago
1.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About Us Yubi stands for ubiquitous. But Yubi will also stand for transparency, collaboration, and the power of possibility. From being a disruptor in India’s debt market to marching towards global corporate markets from one product to one holistic product suite with seven products Yubi is the place to unleash potential. Freedom, not fear. Avenues, not roadblocks. Opportunity, not obstacles. About Yubi Yubi, formerly known as CredAvenue, is re-defining global debt markets by freeing the flow of finance between borrowers, lenders, and investors. We are the world's possibility platform for the discovery, investment, fulfillment, and collection of any debt solution. At Yubi, opportunities are plenty and we equip you with tools to seize it. In March 2022, we became India's fastest fintech and most impactful startup to join the unicorn club with a Series B fundraising round of $137 million. In 2020, we began our journey with a vision of transforming and deepening the global institutional debt market through technology. Our two-sided debt marketplace helps institutional and HNI investors find the widest network of corporate borrowers and debt products on one side and helps corporates to discover investors and access debt capital efficiently on the other side. Switching between platforms is easy, which means investors can lend, invest and trade bonds - all in one place. Our platforms shake up the traditional debt ecosystem and offer new ways of digital finance. Yubi Credit Marketplace - With the largest selection of lenders on one platform, our credit marketplace helps enterprises partner with lenders of their choice for any capital requirements. Yubi Invest - Fixed income securities platform for wealth managers & financial advisors to channel client investments in fixed income Financial Services Platform - Designed for financial institutions to manage co-lending partnerships & asset-based securitization Spocto - Debt recovery & risk mitigation platform Accumn - Dedicated SaaS solutions platform powered by Decision-grade data, Analytics, Pattern Identifications, Early Warning Signals and Predictions to Lenders, Investors and Business Enterprises So far, we have onboarded more than 17000 enterprises, 6200 investors, and lenders and facilitated debt volumes of over INR 1,40,000 crore. Backed by marquee investors like Insight Partners, B Capital Group, Dragoneer, Sequoia Capital, LightSpeed and Lightrock, we are the only-of-its-kind debt platform globally, revolutionizing the segment. At Yubi, People are at the core of the business and our most valuable assets. Yubi is constantly growing, with 1000+ like-minded individuals today who are changing the way people perceive debt. We are a fun bunch who are highly motivated and driven to create a purposeful impact. Come join the club to be a part of our epic growth story. Requirements Key Responsibilities: Join a dynamic Data Science team as a CV-NLP Engineer, where you'll develop reusable tools and capabilities for building advanced machine learning models. Tackle cutting-edge CV-NLP challenges, including image classification, data extraction, text classification, and summarization, using images, documents, and text data. Collaborate closely with DevOps and Data Engineering teams to create efficient ML pipelines, ensuring seamless integration and deployment of models into production environments. Accelerate the model development lifecycle, ensuring scalability for training and production scoring to handle large volumes of data and user traffic. 
Optimize model training throughput and response times using the latest technologies and techniques. Required Experience & Expertise: 1-3 years of experience in developing computer vision models and applications. Foundational knowledge in API Development and experience in Data Science and Machine Learning techniques. Strong understanding of the complete ML model development lifecycle, including development, training, testing/evaluation, and deployment. Proficient in writing reusable code for various ML stages, such as model training, testing, and deployment. Hands-on experience in Python programming. Proven track record in developing solutions for ML problems using frameworks like Scikit-learn, TensorFlow, Keras, etc. Experience solving OCR challenges with pre-trained models and libraries such as Tesseract, Keras-OCR, EasyOCR, etc. Skilled in developing reusable APIs for integrating OCR capabilities with various applications. Familiarity with public cloud OCR services like AWS Textract, GCP Vision etc. Experience in integrating OCR solutions into production systems for extracting text from diverse images, PDFs, and other document types. Solid understanding of CNN concepts and experience with deep learning models such as YOLO. Ability to prototype, evaluate, and incorporate the latest ML advancements, particularly in OCR. Experience in NLP tasks, including Named Entity Recognition (NER), text classification. Experience with Large Language Models (LLMs). This role is for those who are enthusiastic about pushing the boundaries of what's possible in CV-NLP, leveraging both established and cutting-edge methodologies. Benefits We are committed to creating a diverse environment and are proud to be an equal-opportunity employer. All qualified applicants receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, or age. Show more Show less
Posted 1 week ago
0 years
3 - 5 Lacs
Ahmedabad
On-site
Data Engineer Location: Ahmedabad, Surat & Mumbai Apply Now: https://forms.office.com/r/z987VFhAH1 999 8741 755 Role Summary: The Data Engineer sets up data ingestion pipelines, normalizes incoming data, and ensures clean, structured data for AI models and downstream agents. Key Responsibilities: Set up Kinesis Data Streams and Lambda triggers for real-time data ingestion. Use AWS Textract to parse DA documents and store them in S3/RDS. Develop ETL jobs using AWS Glue for historical data transformation. Manage data schemas in RDS. Implement data validation rules and maintain data quality standards. Optimize S3 lifecycle and storage policies (Intelligent-Tiering, Glacier). Skills & Experience: Experience in data pipelines (Glue, Kinesis, Lambda). Proficient in SQL (Postgres) and DynamoDB modeling. S3 and object storage best practices. Python (Pandas, data processing libraries). Data governance and security awareness. Job Type: Full-time Pay: ₹350,000.00 - ₹500,000.00 per year Work Location: In person
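A minimal sketch of the Lambda-plus-Textract ingestion step described above, assuming the function is invoked with an event naming an S3 object; the bucket names and the single-page `detect_document_text` call are simplifying assumptions (multi-page PDFs need the asynchronous Textract APIs).

```python
# Minimal AWS Lambda sketch: run Textract on a document already landed in S3,
# then write the extracted text to a "clean" bucket. Bucket names are placeholders,
# and detect_document_text only handles single-page documents/images.
import boto3

textract = boto3.client("textract")
s3 = boto3.client("s3")

CLEAN_BUCKET = "example-clean-text-bucket"  # hypothetical

def handler(event, context):
    bucket = event["bucket"]  # assumed event shape
    key = event["key"]
    result = textract.detect_document_text(
        Document={"S3Object": {"Bucket": bucket, "Name": key}}
    )
    lines = [b["Text"] for b in result["Blocks"] if b["BlockType"] == "LINE"]
    s3.put_object(
        Bucket=CLEAN_BUCKET,
        Key=f"{key}.txt",
        Body="\n".join(lines).encode("utf-8"),
    )
    return {"lines_extracted": len(lines)}
```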
Posted 1 week ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About You – Experience, Education, Skills, And Accomplishments Holding a Bachelor's in Engineering or a Master's degree (BE, ME, B.Tech, M.Tech, MCA, MS) with strong communication and reasoning abilities is required. Over 5 years of hands-on technical experience using AWS serverless resources, including but not limited to ECS, Lambda, RDS, API Gateway, S3, Cloudfront, and ALB. Over 8 years of experience independently developing modules in one or more of the following: Python, Web development, JavaScript/TypeScript, and Containers. Experience in design and development of web-based applications using NodeJS. Experience with modern JavaScript framework (Vue.js, Angular, React), UI Testing (Puppeteer, Playwright, Selenium). Experience working in a CI/CD setup with multiple environments, and with an ability to manage code and deployments towards incrementally faster releases. Experience with RDBMS and NoSQL databases, particularly MySQL or PostgreSQL. Additionally, It Would Be Advantageous If You Have Experience in Terraform or similar, and IAC in general. Familiarity with AWS Bedrock. Experience with OCR engines and solutions, e.g. AWS Textract, Google Cloud Vision. Interest in exploring and adopting Data Science methodologies, and AI/ML technologies to optimize project outcomes. What will you be doing in this role? Overall, you will play a pivotal role in driving the success of the development projects and achieving business objectives through innovative and efficient agile software development practices. Provide technical guidance to dev team so Proof of Concepts can be productionized. Drive and execute productionizing activities. Identify and pursue opportunities for reuse across team boundaries. Quickly and efficiently resolve complex technical issues by analysing information, evaluating options, and executing decisions. Participate in technical design discussions and groups for feature development. Understand the impact of architecture and hosting strategies on technical design and apply industry best practices in software development, including unit testing, object-oriented design, and code reviews. Work with team members to address findings from security, functionality, and performance tests. Conduct detailed code reviews for intricate solutions, offering enhancements where feasible. Prioritize security and performance in all implementations. About The Team Our team comprises driven professionals who are deeply committed to leveraging technology to make a tangible impact in our field of the patent services area. Joining us, you'll thrive in a multi-region, cross-cultural environment, collaborating on cutting-edge technologies with a strong emphasis on a user-centric approach. At Clarivate, we are committed to providing equal employment opportunities for all persons with respect to hiring, compensation, promotion, training, and other terms, conditions, and privileges of employment. We comply with applicable laws and regulations governing non-discrimination in all locations. Show more Show less
Posted 2 weeks ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Clarivate is on the lookout for a Sr. Software Engineer ML (machine learning) to join our Patent Service team in Noida . The successful candidate will be responsible focus on supporting machine learning (ML) projects, for deploying, scaling, and maintaining ML models in production environments, working closely with data scientists, ML engineers, and software developers to architect robust infrastructure, implement automation pipelines, and ensure the reliability and scalability of our ML systems. The ideal candidate should be eager to learn, equipped with strong hands-on technical and analytical thinking skills, have a passion for teamwork, and staying updated with the latest technological trends. About You – Experience, Education, Skills, And Accomplishments Holding a Bachelor's in Engineering or a Master's degree (BE, ME, B.Tech, M.Tech, MCA, MS) with strong communication and reasoning abilities is required. Proven experience as a Machine Learning Engineer or similar position Deep knowledge of math, probability, statistics and algorithms Outstanding analytical and problem-solving skills Understanding of data structures, data modeling and software architecture Good understanding of ML concepts and frameworks (e.g., TensorFlow, Keras, PyTorch) Proficiency with Python and basic libraries for machine learning such as scikit-learn and pandas Expertise in Prompt engineering . Expertise in visualizing and manipulating big datasets Working experience for managing ML workload in production Implement and/ or practicing MLOps or LLMOps concepts Additionally, It Would Be Advantageous If You Have Experience in Terraform or similar, and IAC in general. Familiarity with AWS Bedrock. Experience with OCR engines and solutions, e.g. AWS Textract, Google Cloud Vision. Interest in exploring and adopting Data Science methodologies, and AI/ML technologies to optimize project outcomes. Experience working in a CI/CD setup with multiple environments, and with an ability to manage code and deployments towards incrementally faster releases. Experience with RDBMS and NoSQL databases, particularly MySQL or PostgreSQL. What will you be doing in this role? Overall, you will play a pivotal role in driving the success of the development projects and achieving business objectives through innovative and efficient agile software development practices. Designing and developing machine learning systems Implementing appropriate ML algorithms, analyzing ML algorithms that could be used to solve a given problem and ranking them by their success probability Running machine learning tests and experiments, perform statistical analysis and fine-tuning using test results, training and retraining systems when necessary Implement monitoring and alerting systems to track the performance and health of ML models in production. Ensure security best practices are followed in the deployment and management of ML systems. Optimize infrastructure for performance, scalability, and cost efficiency. Develop and maintain CI/CD pipelines for automated model training, testing, and deployment. Troubleshoot issues related to infrastructure, deployments, and performance of ML models. Stay up to date with the latest advancements in ML technologies, and evaluate their potential impact on our workflows. About The Team Our team comprises driven professionals who are deeply committed to leveraging technology to make a tangible impact in our field of the patent services area. 
Joining us, you'll thrive in a multi-region, cross-cultural environment, collaborating on cutting-edge technologies with a strong emphasis on a user-centric approach. At Clarivate, we are committed to providing equal employment opportunities for all qualified persons with respect to hiring, compensation, promotion, training, and other terms, conditions, and privileges of employment. We comply with applicable laws and regulations governing non-discrimination in all locations. Show more Show less
Posted 2 weeks ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are looking for a Document Extraction and Inference Engineer with expertise in traditional machine learning algorithms and rule-based NLP techniques. The ideal candidate will have a strong foundation in document processing, structured data extraction, and inference modeling using classical ML approaches. You will work on designing, implementing, and optimizing document extraction pipelines for various applications, ensuring accuracy and efficiency. Key Responsibilities Develop and implement document parsing and structured data extraction techniques. Utilize OCR (Optical Character Recognition) and pattern-based NLP for text extraction. Optimize rule-based and statistical models for document classification and entity recognition. Design feature engineering strategies for improving inference accuracy. Work with structured and semi-structured data (PDFs, scanned documents, XML, JSON). Implement knowledge-based inference models for decision-making applications. Collaborate with data engineers to build scalable document processing pipelines. Conduct error analysis and improve extraction accuracy through iterative refinements. Stay updated with advancements in traditional NLP and document processing techniques. Required Qualifications Bachelor’s or Master’s degree in Computer Science, AI, Machine Learning, or related field. 3+ years of experience in document extraction and inference modeling. Strong proficiency in Python and ML libraries (Scikit-learn, NLTK, OpenCV, Tesseract). Expertise in OCR technologies, regular expressions, and rule-based NLP. Experience with SQL and database management for handling extracted data. Knowledge of probabilistic models, optimization techniques, and statistical inference. Familiarity with cloud-based document processing (AWS Textract, Azure Form Recognizer). Strong analytical and problem-solving skills. Preferred Qualifications Experience with graph-based document analysis and knowledge graphs. Knowledge of time series analysis for document-based forecasting. Exposure to reinforcement learning for adaptive document processing. Understanding of the credit/loan processing domain. Location: Chennai, India
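As an illustrative sketch of the OCR-plus-rule-based extraction this role centres on, the snippet below OCRs a scanned page with Tesseract and pulls dates and amounts with regular expressions. The image path and patterns are assumptions, not specifics from the posting.

```python
# Minimal sketch: OCR a scanned document with pytesseract, then apply rule-based extraction.
# The image path and regex patterns are illustrative placeholders.
import re
from PIL import Image
import pytesseract

DATE_RE = re.compile(r"\b\d{1,2}[/-]\d{1,2}[/-]\d{2,4}\b")
AMOUNT_RE = re.compile(r"(?:INR|Rs\.?)\s*[\d,]+(?:\.\d{2})?", re.IGNORECASE)

def extract_entities(image_path: str) -> dict:
    text = pytesseract.image_to_string(Image.open(image_path))
    return {
        "dates": DATE_RE.findall(text),
        "amounts": AMOUNT_RE.findall(text),
        "raw_text_length": len(text),
    }

if __name__ == "__main__":
    print(extract_entities("scanned_loan_doc.png"))  # hypothetical scan
```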
Posted 2 weeks ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Key Responsibilities Build robust document data extraction pipelines using NLP and OCR techniques Develop and optimize end-to-end workflows for parsing scanned/image-based documents (PDFs, JPGs, TIFFs) and structured files (MS Excel, MS Word). Leverage LLM models (OpenAI GPT, Claude, Gemini etc.) for advanced entity extraction, summarization, and classification tasks. Design and implement Python-based scripts for parsing, cleaning, and transforming data. Integrate with Azure Services for document storage, compute, and secured API hosting (e.g., Azure Blob, Azure Functions, Key Vault, Azure Cognitive Services). Deploy and orchestrate workflows in Azure Databricks (including Spark and ML pipelines). Build and manage API calls for model integration, rate-limiting, and token control using AI gateways. Automate results export into SQL/Oracle databases and enable downstream access for analytics/reporting. Handle diverse metadata requirements, and create reusable, modular code for different document types. Optionally visualize and report data using Power BI and export data into Excel for stakeholder review. Technical Skills Required Skills & Qualifications: Strong programming skills in Python (Pandas, Regex, Pytesseract, spaCy, LangChain, Transformers, etc.) Experience with Azure Cloud (Blob Storage, Function Apps, Key Vaults, Logic Apps) Hands-on with Azure Databricks (PySpark, Delta Lake, MLFlow) Familiarity with OCR tools like Tesseract, Azure OCR, AWS textract, or Google Vision API Proficient in SQL and experience with Oracle Database integration (using cx_Oracle, SQLAlchemy, etc.) Experience working with LLM APIs (OpenAI, Anthropic, Google, or Hugging Face models) Knowledge of API development and integration (REST, JSON, API rate limits, authentication handling) Excel data manipulation using Python (e.g., openpyxl, pandas, xlrd) Understanding of Power BI dashboards and integration with structured data sources Nice To Have Experience with LangChain, LlamaIndex, or similar frameworks for document Q&A and retrieval-augmented generation (RAG) Background in data science or machine learning CI/CD and version control (Git, Azure DevOps) Familiarity with Data Governance and PII handling in document processing Soft Skills Strong problem-solving skills and an analytical mindset Attention to detail and ability to work with messy/unstructured data Excellent communication skills to interact with technical and non-technical stakeholders Ability to work independently and manage priorities in a fast-paced environment Show more Show less
Posted 2 weeks ago
6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Application Deadline: 30th May 2025 We are seeking an experienced Senior Developer to lead the engineering behind our Core AI Orchestration Platform, leveraging LangGraph, LangChain, and cutting-edge LLMs. You’ll design, build, and scale a multi-agent system for document parsing, contract validation, and workflows, with a focus on performance, explainability, and real-time traceability. You will get an opportunity to shape a next-gen AI product with a true global team and work with cutting-edge tools (LangGraph, Claude, GPT-4.1). You will work at the intersection of backend APIs, AI pipeline orchestration, and frontend dashboards, bringing together structured reasoning, vision models, and document intelligence. Apply now and help us build the orchestration layer powering the next generation of intelligent systems. Key Responsibilities Implement multi-agent workflows using LangGraph and LangChain, enabling conditional routing, tool invocation, and memory-based decisions. Integrate LLMs (Claude 3, GPT-4.1) and Vision models (Claude Opus, OpenAI Vision) for document understanding and structured output generation. Build robust APIs using FastAPI, including support for async processing, webhook-based triggers, and job queues. Implement PDF/DOCX parsing pipelines using Textract, Unstructured.io, and combine with RAG-based retrieval for clause-level reasoning. Manage and optimize data pipelines leveraging Supabase Postgres, pgvector, and Amazon S3 for structured and unstructured storage. Build internal tools and dashboards using Next.js, React, and Tailwind CSS for audit workflows, feedback loops, and reviewer management. Own deployment and DevOps workflows. Set up observability and testing infrastructure using LangSmith or LangFuse, with monitoring. Requirements 6+ years of hands-on development experience (Python + JS preferred) Deep understanding of LLM integration, prompt engineering, and RAG systems Proven experience building async-ready APIs and document processing pipelines Strong understanding of Postgres schemas, joins, indexing, and pgvector usage Familiarity with Next.js and frontend best practices DevOps comfort with EC2, Docker, and CI/CD Bonus: Experience with LangGraph, LangSmith, or Bedrock/OpenAI SDKs Prior experience with multi-agent LLM systems Background in document intelligence or compliance tooling Experience scaling real-time dashboards for multi-user environments.
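As a rough sketch of the async API pattern the posting mentions (not the platform's actual code), here is a FastAPI service that accepts a document job and processes it in the background; the endpoint names, in-memory job store, and placeholder processing step are assumptions for illustration.

```python
import uuid
from fastapi import FastAPI, BackgroundTasks
from pydantic import BaseModel

app = FastAPI()
JOBS: dict[str, str] = {}  # in-memory job store; a real system would use a queue or database

class DocumentJob(BaseModel):
    document_url: str

async def process_document(job_id: str, document_url: str) -> None:
    # Placeholder for parsing, LLM calls, and clause-level validation.
    JOBS[job_id] = "completed"

@app.post("/jobs")
async def create_job(job: DocumentJob, background_tasks: BackgroundTasks):
    job_id = str(uuid.uuid4())
    JOBS[job_id] = "processing"
    background_tasks.add_task(process_document, job_id, job.document_url)
    return {"job_id": job_id, "status": JOBS[job_id]}

@app.get("/jobs/{job_id}")
async def job_status(job_id: str):
    return {"job_id": job_id, "status": JOBS.get(job_id, "not_found")}
```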
Posted 2 weeks ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Position Summary Job Title: Manager - Government and Public Services Enabling Areas (GPS EA). The Team: GPS GSi. The Role: Senior Data Scientist. Do you have a strong background in machine learning and deep learning? Are you interested in utilizing your data science skills and collaborating with a small team in a fast-paced environment to achieve strategic mission goals? If so, Deloitte has an exciting opportunity for you! As a member of our GPS GSi group, you will play a crucial role in the development and maintenance of our data science and business intelligence solutions. This role will specialize in assisting with machine learning, deep learning, and generative AI initiatives that will be utilized by Enabling Area professionals to enhance and expedite decision-making. You will provide expertise within and across business teams, demonstrate the ability to work independently or as part of a team, and apply problem-solving skills to resolve complex issues. Work you will do. Technology: Deliver exceptional client service. Maximize results and drive high performance from people while fostering collaboration across businesses and geographies. Interface with business customers and leadership to gather requirements and deliver complete Data Engineering, Data Warehousing, and BI solutions. Design, train, and deploy machine learning and deep learning models to AWS, Databricks, and Dataiku platforms. Develop, design, and/or advise on Large Language Model (LLM) solutions for enterprise-wide documentation (e.g., Retrieval-Augmented Generation (RAG), Continued Pre-training (CPT), Supervised Fine-tuning (SFT), etc.). Utilize Machine Learning Operations (MLOps) pipelines, including knowledge of containerization (Docker) and CI/CD for training and deploying models. Maintain structured documentation of project development stages, including the utilization of GitHub and/or Jira for version control and project management. Demonstrate effective communication skills with the ability to provide expertise and break down complex analytical solutions to explain to clients. Remain current with the latest industry trends and developments in data science and/or related fields, with the ability to learn new skills and knowledge to advance the skillset of our Data Science team. Apply thorough attention to detail and carefully review data science solutions for accuracy and quality. Leadership: Develop high-performing teams by providing challenging and meaningful opportunities, and acknowledge their contributions to the organization's success. Establish the team's strategy and roadmap, prioritizing initiatives based on their broader business impact. Demonstrate leadership in guiding both US and USI teams to deliver advanced technical solutions across the GPS practice. Serve as a role model for junior practitioners, inspiring action and fostering positive behaviors. Pursue new and challenging initiatives that have a positive impact on our Practice and our personnel. Establish a reputation as a Deloitte expert and be acknowledged as a role model and senior member by client teams. Support and participate in the recognition and reward of junior team members. People Development: Actively seek, provide, and respond to constructive feedback. Offer development guidance to the GSi team, enhancing their people, leadership, and client management skills. Play a pivotal role in recruitment and the onboarding of new hires.
Engage in formal performance assessment activities for assigned staff and collaborate with Practice leadership to address and resolve performance issues. Serve as an effective coach by helping counselees identify their strengths and opportunities to capitalize on them. Foster a "One Team" mindset among US and USI team members. Qualifications: Required/Preferred: Bachelor's degree, preferably in Management Information Systems, Computer Science, Software Engineering, or related IT discipline. Minimum of 10 years of relevant experience with data science technologies and analytics advisory or consulting firms. Strong knowledge of LLMs and RAG. Familiarity with AWS, Databricks, and/or Dataiku platforms. Working knowledge of MLOps, including familiarity with containerization (e.g., Docker). Excellent troubleshooting skills and the ability to work independently. Strong organizational skills, including clear documentation of projects and ability to write clean code. Familiarity with agile project methodology and/or project development lifecycle. Experience with GitHub for version control. Excellent communication and presentation skills, with the ability to explain complex data science concepts to non-technical audiences. Ability to complete work within acceptable timeframes, manage a variety of detailed tasks and responsibilities simultaneously and accurately, and meet deadlines, goals, and objectives while satisfying internal and external customer needs related to the job. Extensive experience with MLOps and associated serving frameworks (e.g., Flask, FastAPI) and orchestration pipelines (e.g., SageMaker Pipelines, Step Functions, Metaflow). Extensive experience working with open-source LLMs (e.g., serving via TGI/vLLM, performing CPT and/or SFT, etc.). Experience using various AWS Services (e.g., Textract, Transcribe, Lambda, etc.). Proficiency in basic front-end web development (e.g., Streamlit). Knowledge of Object-Oriented Programming (OOP) concepts. At least 3–4 years of people management experience is required. Work Location: Hyderabad. Timings: 2 PM – 11 PM. How You’ll Grow At Deloitte, we’ve invested a great deal to create a rich environment in which our professionals can grow. We want all our people to develop in their own way, playing to their own strengths as they hone their leadership skills. And, as a part of our efforts, we provide our professionals with a variety of learning and networking opportunities—including exposure to leaders, sponsors, coaches, and challenging assignments—to help accelerate their careers along the way. No two people learn in exactly the same way. So, we provide a range of resources including live classrooms, team-based learning, and eLearning. DU: The Leadership Center in India, our state-of-the-art, world-class learning Center in the Hyderabad offices is an extension of the Deloitte University (DU) in Westlake, Texas, and represents a tangible symbol of our commitment to our people’s growth and development. Explore DU: The Leadership Center in India Benefits At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you. Deloitte’s culture Our positive and supportive culture encourages our people to do their best work every day. We celebrate individuals by recognizing their uniqueness and offering them the flexibility to make daily choices that can help them to be healthy, centered, confident, and aware.
We offer well-being programs and are continuously looking for new ways to maintain a culture that is inclusive, invites authenticity, leverages our diversity, and where our people excel and lead healthy, happy lives. Learn more about Life at Deloitte. Corporate citizenship Deloitte is led by a purpose: to make an impact that matters. This purpose defines who we are and extends to relationships with our clients, our people and our communities. We believe that business has the power to inspire and transform. We focus on education, giving, skill-based volunteerism, and leadership to help drive positive social impact in our communities. Learn more about Deloitte’s impact on the world. Recruiting tips From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Professional development From entry-level employees to senior leaders, we believe there’s always room to learn. We offer opportunities to build new skills, take on leadership opportunities and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career. Requisition code: 302611
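For illustration of the RAG work referenced in this posting, a minimal sketch of the retrieval step: embed document chunks and a question, then return the closest chunks by cosine similarity. The model name and sample texts are assumptions, not taken from the posting.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # hypothetical embedding model choice

def top_k_chunks(question: str, chunks: list[str], k: int = 3) -> list[str]:
    # Normalized embeddings let a dot product act as cosine similarity.
    chunk_vecs = model.encode(chunks, normalize_embeddings=True)
    query_vec = model.encode([question], normalize_embeddings=True)[0]
    scores = chunk_vecs @ query_vec
    best = np.argsort(scores)[::-1][:k]
    return [chunks[i] for i in best]

if __name__ == "__main__":
    docs = ["Travel policy: economy class for flights under 6 hours.",
            "Expense claims must be filed within 30 days.",
            "Laptops are refreshed every 36 months."]
    print(top_k_chunks("How long do I have to file an expense claim?", docs, k=1))
```

In a full RAG solution, the retrieved chunks would then be passed to an LLM as grounding context for generation.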
Posted 3 weeks ago
0.0 - 3.0 years
0 Lacs
Ahmedabad, Gujarat
On-site
We are seeking a highly skilled and motivated DevOps Engineer to join our International IT client's team. Key Responsibilities: Design, implement, and manage CI/CD pipelines using GitLab, Jenkins, and Bitbucket. Administer Linux servers, including networking configurations, DNS, and system troubleshooting. Maintain artifact repositories and Artifactory systems. Utilize a wide range of AWS services: EC2, S3, ECS, RDS (Postgres), Lambda (Python runtime), DynamoDB, Comprehend, Textract, and SageMaker for ML deployments. Optimize AWS resource usage for performance and cost-efficiency. Develop infrastructure using Terraform and manage Infrastructure as Code (IaC) workflows. Deploy and manage Kubernetes clusters, including EKS, and work with microservices architecture, load balancers, and database replication (Postgres, MongoDB). Work hands-on with Redis clusters, Elasticsearch, and Amazon OpenSearch. Integrate monitoring tools such as CloudWatch and Grafana, and implement alerting solutions. Support DevOps scripting using tools like AWS CLI, Python, PowerShell, and optionally FileMaker. Implement and maintain automated troubleshooting and system health checks, and ensure maximum uptime. Collaborate with development teams to interpret test data and meet quality goals. Create system architecture diagrams and provide scalable, cost-effective solutions to clients. Implement best practices for network security, data encryption, and overall cybersecurity. Stay current with industry trends and introduce modern DevOps tools and practices. Ability to handle client interviews with strong communication skills. Key Skills & Requirements: 3–4 years of experience in DevOps roles. Strong knowledge of CI/CD tools (Jenkins, GitLab CI/CD, Bitbucket). Proficiency with AWS cloud infrastructure, including serverless technologies. Experience with Docker, Kubernetes, and IaC tools like Terraform. Expertise in Linux systems, networking, and scripting (Python, Shell, PowerShell). Experience working with Postgres, MongoDB, and DynamoDB. Knowledge of Redis, Elasticsearch, and monitoring tools (CloudWatch, Grafana). Understanding of microservices architecture, performance optimization, and security. Preferred Qualifications: Hands-on experience with GCP and services like BigQuery, Composer, Airflow, and Pub/Sub is a plus. Experience designing and deploying applications on Vercel. Knowledge of AWS ML and NLP services (Comprehend, Textract, SageMaker). Familiarity with streaming data platforms and real-time pipelines. AWS certification (e.g., Solutions Architect) or Kubernetes certification is a strong plus. Strong leadership and cross-functional collaboration skills. Job Types: Full-time, Permanent Pay: ₹540,000.00 - ₹660,000.00 per year Benefits: Leave encashment Paid sick time Paid time off Provident Fund Schedule: Day shift Monday to Friday Ability to commute/relocate: Ahmedabad, Gujarat: Reliably commute or planning to relocate before starting work (Required) Application Question(s): Would you please share your Current CTC, Expected CTC and Notice Period? Experience: DevOps: 3 years (Required) Work Location: In person Speak with the employer +91 9727330030
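A small illustrative sketch of the kind of Python health-check scripting this posting mentions: flag EC2 instances failing status checks with boto3. It assumes default AWS credentials and region are already configured in the environment.

```python
import boto3

def unhealthy_instances() -> list[str]:
    """Return IDs of EC2 instances whose instance or system status checks are not 'ok'."""
    ec2 = boto3.client("ec2")
    flagged = []
    paginator = ec2.get_paginator("describe_instance_status")
    for page in paginator.paginate(IncludeAllInstances=True):
        for status in page["InstanceStatuses"]:
            instance_ok = status["InstanceStatus"]["Status"] == "ok"
            system_ok = status["SystemStatus"]["Status"] == "ok"
            if not (instance_ok and system_ok):
                flagged.append(status["InstanceId"])
    return flagged

if __name__ == "__main__":
    bad = unhealthy_instances()
    print(f"{len(bad)} instance(s) failing status checks: {bad}")
```

A script like this would typically be scheduled (for example via Lambda or cron) and wired into the alerting stack rather than run by hand.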
Posted 3 weeks ago
0 years
0 Lacs
Gurugram, Haryana, India
Remote
About the Role Drive the digital backbone of a growing commercial real-estate group. You’ll prototype, test and ship automations that save our teams >10 hours/week in the first 90 days. Availability: ~20 hrs/week (flexible), Gurgaon/remote hybrid. Engagement model: on-site 1 day/wk during rollout peaks. Compensation: ₹55–70k per month. Core Responsibilities 1. Systems Audit & Consolidation – unify Google Workspace tenants, rationalise shared drives. 2. Database & CRM Build-out – design, deploy, and maintain occupant tracker and a lightweight CRM; migrate legacy data. 3. Automation & Integration – link CRM, Google Sheets, and Tally using Apps Script/Zoho Flow/Zapier. 4. Process Documentation – own the internal wiki; keep SOPs and RACI charts current. 5. Dashboards & Reporting – craft Looker Studio boards for collections, projects, facility KPIs. 6. User Training & Support – deliver monthly clinics; teach teams how to use G Suite and ChatGPT to improve productivity. 7. Security & Compliance – enforce 2FA, backup policies, basic network hygiene. 8. Vendor Co-ordination – liaise with Zoho, Tally consultants, ISP/MSP vendors; manage small capex items. 🔧 Required Skills & Experience We’re looking for a hands-on builder with a strong track record in automation, low-code systems, and internal tooling. The ideal candidate will bring most (not necessarily all) of the following: ⚙️ Automation & Low-Code Workflows Practical experience building solutions with Google Apps Script or Zoho Creator/Flow, including REST APIs and webhooks Familiarity with workflow bridges like Zapier, Make, or n8n Bonus: Exposure to AI-based low-code tools like Cursor or Lovable 📄 Data Extraction & Integrations Hands-on experience using OCR/Document AI tools (e.g., Google DocAI, AWS Textract) to parse and structure lease or legal documents Familiarity with Tally Prime integrations via API or ODBC for syncing financial data 📇 CRM & Customer View Experience with end-to-end CRM rollouts (Zoho/Freshsales preferred), including data migration and module customization Bonus: Exposure to helpdesk tools like Zoho Desk or Freshdesk 📊 Analytics & Reporting Advanced proficiency in Google Sheets (ARRAYFORMULA, QUERY, IMPORTRANGE) Experience designing interactive dashboards in Looker Studio Bonus: Awareness of data warehousing concepts (BigQuery, Redshift) for creating a unified customer view 🧠 Scripting & AI Comfortable writing Python or Node.js scripts for lightweight cloud functions and ETL Experience using OpenAI/Claude APIs to build small copilots or automations (e.g., résumé rankers, document summarizers) 📋 Project & Knowledge Management Bonus: Familiarity with Trello or other Kanban-style project boards Strong documentation skills with Notion or Google Sites for building wikis, SOPs, and internal help resources 🗣️ Soft Skills Able to explain technical systems clearly to non-technical stakeholders Comfortable training teams in both English and Hindi 📩 How to Apply? If this sounds like you, please apply via this short form: 👉 https://forms.gle/3gPwMqnadpf3dP159 We’ll review responses daily. If you clear the knockout round, you’ll receive a 30-minute skills test within 24 hours.
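To illustrate the lease-document OCR work this role mentions, a minimal sketch using AWS Textract's synchronous API on a single-page image: it reads the file bytes, calls the text-detection endpoint, and dumps the detected lines. The file name is hypothetical, and AWS credentials and region are assumed to come from the environment.

```python
import boto3

def extract_lines(image_path: str) -> list[str]:
    """Run Textract text detection on one page and return its text lines in order."""
    textract = boto3.client("textract")
    with open(image_path, "rb") as f:
        response = textract.detect_document_text(Document={"Bytes": f.read()})
    # Keep only LINE blocks; WORD and PAGE blocks are skipped for a simple line dump.
    return [block["Text"] for block in response["Blocks"] if block["BlockType"] == "LINE"]

if __name__ == "__main__":
    for line in extract_lines("lease_page_1.png"):
        print(line)
```

The extracted lines would then feed downstream steps such as clause tagging or syncing key terms into the CRM and Tally.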
Posted 3 weeks ago
8 - 12 years
0 Lacs
Mumbai, Maharashtra, India
Remote
We are seeking a talented individual to join our Data Science team at Marsh. This role will be based in Mumbai. This is a hybrid role that has a requirement of working at least three days a week in the office. Senior Manager - Data Science and Automation We will count on you to: Identify opportunities that add value to the business and make processes more efficient. Invest in understanding the core business, including products, processes, documents, and data points, with the objective of identifying efficiency and value-addition opportunities. Design and develop end-to-end NLP/LLM solutions for document parsing, information extraction, and summarization from PDFs and scanned text. Develop AI applications to automate manual and repetitive tasks using generative AI and machine learning. Fine-tune open-source LLMs (like LLaMA, Mistral, Falcon, or similar) or build custom pipelines using APIs (OpenAI, Anthropic, Azure OpenAI). Build custom extraction logic using tools like LangChain, Haystack, Hugging Face Transformers, and OCR libraries like Tesseract or Azure Form Recognizer. Create pipelines to convert outputs into formatted Microsoft Word or PDF files using libraries like docx, PDFKit, ReportLab, or LaTeX. Collaborate with data engineers and software developers to integrate AI models into production workflows. Ensure model performance, accuracy, scalability, and cost-efficiency across business use cases. Stay updated with the latest advancements in generative AI, LLMs, and NLP research to identify innovative solutions. Design, develop, and maintain robust data pipelines for extracting, transforming, and loading (ETL) data from diverse sources. As operations scale up, design and implement scalable data storage solutions and integrate them with existing systems. Utilize cloud platforms (AWS, Azure, Google Cloud) for data storage and processing. Conduct code reviews and provide mentorship to junior developers. Stay up-to-date with the latest technology trends and best practices in data engineering and cloud services. Ability to lead initiatives and deliver results by engaging with cross-functional teams and resolving data ambiguity issues. Be responsible for the professional development of your projects and institute a succession plan. What you need to have: Bachelor's degree in Engineering, Analytics, Computer Applications, IT, Business Analytics, MBA, or a related discipline. Proven experience of 8–12 years in Python development Hands-on experience with frameworks and libraries like Transformers, LangChain, PyTorch/TensorFlow, spaCy, Hugging Face, and Haystack. Strong expertise in document parsing, OCR (Tesseract, AWS Textract, Azure Form Recognizer), and entity extraction. Proficiency in Python and familiarity with cloud-based environments (Azure, AWS, GCP). Experience deploying models as APIs/microservices using FastAPI, Flask, or similar. Familiarity with PDF parsing libraries (PDFMiner, PyMuPDF, Apache PDFBox) and Word generation libraries (python-docx, PDFKit). Solid understanding of prompt engineering and prompt-tuning techniques. Proven experience with data automation and building data pipelines. Proven track record in building and maintaining data pipelines and ETL processes. Strong knowledge of Python libraries such as Pandas, NumPy, PySpark, and Camelot. Familiarity with database management systems (SQL and NoSQL databases). Experience in designing and implementing system architecture.
Ability to operate in a multi-layered technology architecture and shape the technology maturity of the organization. Solid understanding of software development best practices, including version control (Git), code reviews, and testing frameworks (PyTest, UnitTest). Strong attention to detail and ability to work with complex data sets. Effective communication skills to present findings and insights to both technical and non-technical stakeholders. Superior listening, verbal, and written communication skills. Excellent project management and organization skills. Superlative stakeholder management skills – ability to positively influence stakeholders. Synthesis skills – ability to connect the dots and answer the business question. Excellent problem-solving, structuring and critical-thinking skills. Ability to work independently and collaboratively in a fast-paced environment. What makes you stand out? Master’s degree in Computer Science, Engineering, or related fields. Experience in working with large-scale data sets and real-time data processing. Familiarity with additional programming languages like Java, C++, or R. Strong problem-solving skills and ability to work in a fast-paced environment. Why join our team: We help you be your best through professional development opportunities, interesting work and supportive leaders. We foster a vibrant and inclusive culture where you can work with talented colleagues to create new solutions and have impact for colleagues, clients and communities. Our scale enables us to provide a range of career opportunities, as well as benefits and rewards to enhance your well-being. Marsh, a business of Marsh McLennan (NYSE: MMC), is the world’s top insurance broker and risk advisor. Marsh McLennan is a global leader in risk, strategy and people, advising clients in 130 countries across four businesses: Marsh, Guy Carpenter, Mercer and Oliver Wyman. With annual revenue of $24 billion and more than 90,000 colleagues, Marsh McLennan helps build the confidence to thrive through the power of perspective. For more information, visit marsh.com, or follow on LinkedIn and X. Marsh McLennan is committed to embracing a diverse, inclusive and flexible work environment. We aim to attract and retain the best people and embrace diversity of age, background, caste, disability, ethnic origin, family duties, gender orientation or expression, gender reassignment, marital status, nationality, parental status, personal or social status, political affiliation, race, religion and beliefs, sex/gender, sexual orientation or expression, skin color, or any other characteristic protected by applicable law. Marsh McLennan is committed to hybrid work, which includes the flexibility of working remotely and the collaboration, connections and professional development benefits of working together in the office. All Marsh McLennan colleagues are expected to be in their local office or working onsite with clients at least three days per week. Office-based teams will identify at least one “anchor day” per week on which their full team will be together in person. R_308144
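As a minimal sketch of the "convert outputs into formatted Microsoft Word files" step this posting describes, here is a python-docx example that writes extracted fields into a two-column table. The field names and output path are assumptions for the example.

```python
from docx import Document

def write_summary(extracted: dict, output_path: str = "extraction_summary.docx") -> None:
    """Write extracted key-value pairs to a simple Word summary document."""
    doc = Document()
    doc.add_heading("Document Extraction Summary", level=1)
    table = doc.add_table(rows=1, cols=2)
    table.rows[0].cells[0].text = "Field"
    table.rows[0].cells[1].text = "Value"
    for field, value in extracted.items():
        row = table.add_row().cells
        row[0].text = field
        row[1].text = str(value)
    doc.save(output_path)

if __name__ == "__main__":
    # Hypothetical output from an upstream NLP/OCR extraction step.
    write_summary({"Insured Name": "Acme Ltd", "Policy Start": "2024-04-01", "Premium": "12,500"})
```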
Posted 3 weeks ago