1,137 OCR Jobs - Page 28

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Responsibilities
Design, develop and implement solutions for a wide range of NLP use cases involving classification, extraction and search on unstructured text data.
Create and maintain state-of-the-art, scalable NLP solutions in Python/Java/Scala for multiple business problems. This involves:
- Choosing the most appropriate NLP technique(s) based on business needs and available data
- Performing data exploration and innovative feature engineering
- Training and tuning a variety of NLP models/solutions, including regular expressions, traditional NLP models and SOTA transformer-based models
- Augmenting models by integrating domain-specific ontologies and/or external databases
- Reporting and monitoring the solution outcome
Work with document-oriented databases such as MongoDB.
Collaborate with the ML engineering team to deploy NLP solutions in production, both on-premise and in the cloud.
Interact with clients and internal business teams to assess solution feasibility and to design and develop solutions.
Be open to working across different domains: Insurance, Healthcare, Financial Services, etc.
Required Skills
Experience (including graduate school) in training machine learning models and in applying and developing text mining and NLP techniques.
Exposure to OCR and computer vision; experience in extracting content from documents is preferred.
Experience (including graduate school) with Natural Language Processing techniques is required.
Hands-on experience with NLP tools such as Stanford CoreNLP, NLTK, spaCy, Gensim, TextBlob, etc.
Experience/familiarity with document clustering in supervised and unsupervised scenarios.
Expertise in at least two state-of-the-art NLP techniques such as BERT, GPT, XLNet, etc.
Applied experience with machine learning algorithms using Python.
Organized, self-motivated, disciplined and detail-oriented.
Production-level coding experience in Python is required.
Ability to read recent ML research papers and adapt those models to solve real-world problems.
Experience with any deep learning framework, including TensorFlow, Caffe, MXNet, Torch, Theano.
Experience with optimization on GPUs (a plus).
Hands-on experience with cloud technologies on AWS/Microsoft Azure is preferred.
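For orientation only, here is a minimal sketch of the OCR-plus-NLP extraction flow this posting describes, assuming pytesseract (with the Tesseract binary installed), Pillow, and a pretrained spaCy pipeline; the file name and the policy-number pattern are illustrative, not part of the listing.

```python
# Minimal sketch: OCR a scanned page, then run rule-based and statistical
# extraction over the recovered text. All names here are illustrative.
import re

import pytesseract          # requires the Tesseract OCR binary on the system
import spacy
from PIL import Image

nlp = spacy.load("en_core_web_sm")  # small pretrained English pipeline

def extract_entities(image_path: str) -> dict:
    """OCR an image, then pull out named entities plus one regex-based field."""
    text = pytesseract.image_to_string(Image.open(image_path))
    doc = nlp(text)
    # Statistical NER from the pretrained pipeline
    entities = [(ent.text, ent.label_) for ent in doc.ents]
    # Regular-expression extraction for a structured field (hypothetical format)
    policy_numbers = re.findall(r"\bPOL-\d{6}\b", text)
    return {"entities": entities, "policy_numbers": policy_numbers}

if __name__ == "__main__":
    print(extract_entities("scanned_page.png"))
```

A production pipeline, as the responsibilities above note, would layer transformer-based models and domain ontologies on top of this kind of baseline.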

Posted 1 month ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

This job is with Swiss Re, an inclusive employer and a member of myGwork – the largest global platform for the LGBTQ+ business community. Please do not contact the recruiter directly. About The Team And Our Scope We are a forward-thinking tech organization within Swiss Re, delivering transformative AI/ML solutions that redefine how businesses operate. Our mission is to build intelligent, secure, and scalable systems that deliver real-time insights, automation, and high-impact user experiences to clients globally. You'll join a high-velocity AI/ML team working closely with product managers, architects, and engineers to create next-gen enterprise-grade solutions. Our team is built on a startup mindset — bias to action, fast iterations, and ruthless focus on value delivery. We’re not only shaping the future of AI in business — we’re shaping the future of talent. This role is ideal for someone passionate about advanced AI engineering today and curious about evolving into a product leadership role tomorrow. You'll get exposure to customer discovery, roadmap planning, and strategic decision-making alongside your technical contributions. Role Overview As an AI/ML Engineer, you will play a pivotal role in the research, development, and deployment of next-generation GenAI and machine learning solutions . Your scope will go beyond retrieval-augmented generation (RAG) to include areas such as prompt engineering, long-context LLM orchestration, multi-modal model integration (voice, text, image, PDF), and agent-based workflows. You will help assess trade-offs between RAG and context-native strategies, explore hybrid techniques, and build intelligent pipelines that blend structured and unstructured data. You’ll work with technologies such as LLMs, vector databases, orchestration frameworks, prompt chaining libraries, and embedding models, embedding intelligence into complex, business-critical systems. This role sits at the intersection of rapid GenAI prototyping and rigorous enterprise deployment, giving you hands-on influence over both the technical stack and the emerging product direction. Key Responsibilities Build Next-Gen GenAI Pipelines: Design, implement, and optimize pipelines across RAG, prompt engineering, long-context input handling, and multi-modal processing. Prototype, Validate, Deploy: Rapidly test ideas through PoCs, validate performance against real-world business use cases, and industrialize successful patterns. Ingest, Enrich, Embed: Construct ingestion workflows including OCR, chunking, embeddings, and indexing into vector databases to unlock unstructured data. Integrate Seamlessly: Embed GenAI services into critical business workflows, balancing scalability, compliance, latency, and observability. Explore Hybrid Strategies: Combine RAG with context-native models, retrieval mechanisms, and agentic reasoning to build robust hybrid architectures. Drive Impact with Product Thinking: Collaborate with product managers and UX designers to shape user-centric solutions and understand business context. Ensure Enterprise-Grade Quality: Deliver solutions that are secure, compliant (e.g., GDPR), explainable, and resilient — especially in regulated environments. What Makes You a Fit Must-Have Technical Expertise Proven experience with GenAI techniques and LLMs, including RAG, long-context inference, prompt tuning, and multi-modal integration. Strong hands-on skills with Python, embedding models, and orchestration libraries (e.g., LangChain, Semantic Kernel, or equivalents). 
Comfort with MLOps practices, including version control, CI/CD pipelines, model monitoring, and reproducibility. Ability to operate independently, deliver iteratively, and challenge assumptions with data-driven insight. Understanding of vector search optimization and retrieval tuning. Exposure to multi-modal models Nice-To-Have Qualifications Experience building and operating AI systems in regulated industries (e.g., insurance, finance, healthcare). Familiarity with Azure AI ecosystem (e.g., Azure OpenAI, Azure AI Document Intelligence, Azure Cognitive Search) and deployment practices in cloud-native environments. Experience with agentic AI architectures, tools like AutoGen, or prompt chaining frameworks. Familiarity with data privacy and auditability principles in enterprise AI. Bonus: You Think Like a Product Manager While this role is technical at its core, we highly value candidates who are curious about how AI features become products . If you’re excited by the idea of influencing roadmaps, shaping requirements, or owning end-to-end value delivery — we’ll give you space to grow into it. This is a role where engineering and product are not silos . If you’re keen to move in that direction, we’ll mentor and support your evolution. Why Join Us? You’ll be part of a team that’s pushing AI/ML into uncharted, high-value territory. We operate with urgency, autonomy, and deep collaboration. You’ll prototype fast, deliver often, and see your work shape real-world outcomes — whether in underwriting, claims, or data orchestration. And if you're looking to transition from deep tech to product leadership , this role is a launchpad. Swiss Re is an equal opportunity employer . We celebrate diversity and are committed to creating an inclusive environment for all employees. About Swiss Re Swiss Re is one of the world’s leading providers of reinsurance, insurance and other forms of insurance-based risk transfer, working to make the world more resilient. We anticipate and manage a wide variety of risks, from natural catastrophes and climate change to cybercrime. Combining experience with creative thinking and cutting-edge expertise, we create new opportunities and solutions for our clients. This is possible thanks to the collaboration of more than 14,000 employees across the world. Our success depends on our ability to build an inclusive culture encouraging fresh perspectives and innovative thinking. We embrace a workplace where everyone has equal opportunities to thrive and develop professionally regardless of their age, gender, race, ethnicity, gender identity and/or expression, sexual orientation, physical or mental ability, skillset, thought or other characteristics. In our inclusive and flexible environment everyone can bring their authentic selves to work and their passion for sustainability. If you are an experienced professional returning to the workforce after a career break, we encourage you to apply for open positions that match your skills and experience. 
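As a rough illustration of the ingest, chunk, embed, and retrieve loop behind the RAG pipelines this role mentions, the sketch below uses an in-memory index and the sentence-transformers package; the model name is an assumption, and a real deployment would write to a managed vector database with the compliance controls described above.

```python
# Compressed sketch of ingest -> chunk -> embed -> retrieve for a RAG pipeline.
# In-memory index for illustration only; model name is an assumption.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def chunk(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split OCR'd or extracted text into overlapping character windows."""
    return [text[i:i + size] for i in range(0, len(text), size - overlap)]

def build_index(documents: list[str]):
    """Chunk every document and embed the chunks with normalized vectors."""
    chunks = [c for doc in documents for c in chunk(doc)]
    embeddings = model.encode(chunks, normalize_embeddings=True)
    return chunks, np.asarray(embeddings)

def retrieve(query: str, chunks: list[str], embeddings: np.ndarray, k: int = 3):
    """Return the k most similar chunks; these would be stuffed into the prompt."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = embeddings @ q  # cosine similarity on normalized vectors
    return [chunks[i] for i in np.argsort(scores)[::-1][:k]]
```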
Reference Code: 134317

Posted 1 month ago

Apply

6.0 - 9.0 years

9 - 14 Lacs

Pune

Hybrid

Job Description
Ecolab is seeking a Lead Business Process Automation Analyst within the company's Global Business Services to support and deliver key initiatives providing project intake, planning, and identification of scalable global tools to address process challenges while enhancing workflow and automation efficiencies.
Location: The position is based in our office in Pune.
Shift: UK shift, 12 to 9 pm
Work Situation: Hybrid, in office 3 days a week
Business Analyst Main Responsibilities:
Manage project intake and prioritization of requests
Provide coaching and support to other team members
Responsible for project management activities and ensuring successful delivery from identification through deployment
Drive project delivery from identification through deployment
Responsible for driving projects of higher complexity and scope
Responsible for driving process design, business requirement definition, design reviews, testing, training support and user adoption
Collaborate with business and process improvement teams to evaluate automation opportunities
Engage in vendor and technology selection (RFP/RFI)
Facilitate process reviews to identify automation opportunities and requirements
Partner with Ecolab Digital teams to evaluate appropriate technology to solve process challenges
Provide analytical and AI/OCR model training and testing support
Provide process governance while maintaining strong deployment and/or onboarding controls
Monitor solutions to ensure they maintain benefits and efficiencies
Keep stakeholders updated regularly, communicate risks, and gather feedback
Minimum Qualifications:
Bachelor's degree with a minimum of 8 years of professional experience, or an advanced degree with a minimum of 6 years of experience
Formal project management experience or proven skills, preferably in Finance or Business Services
Excellent English written and verbal communication skills
Excellent interpersonal skills and ability to partner across teams and levels within the organization
Experience with one or more automation platforms such as ServiceNow
Preferred Qualifications:
Advanced degree preferred
Relevant experience in Finance or Business Services processes
Green Belt/Black Belt/PMBOK/Scrum/Agile trained and certified
Strong interpersonal skills with demonstrated ability to influence decision makers and motivate team members
Self-driven, outcomes-oriented performer
Proven success initiating change and ability to communicate and influence at all levels of the organization
Strong analytical skills
Proficient in Excel and PowerPoint
Fluent in local language and capable in English
Experience with low-code development on various platforms

Posted 1 month ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

About Us
Yubi stands for ubiquitous. But Yubi will also stand for transparency, collaboration, and the power of possibility. From being a disruptor in India's debt market to marching towards global corporate markets, from one product to a holistic suite of seven products, Yubi is the place to unleash potential. Freedom, not fear. Avenues, not roadblocks. Opportunity, not obstacles.
About Yubi
Yubi, formerly known as CredAvenue, is re-defining global debt markets by freeing the flow of finance between borrowers, lenders, and investors. We are the world's possibility platform for the discovery, investment, fulfillment, and collection of any debt solution. At Yubi, opportunities are plenty and we equip you with the tools to seize them. In March 2022, we became India's fastest fintech and most impactful startup to join the unicorn club with a Series B fundraising round of $137 million. In 2020, we began our journey with a vision of transforming and deepening the global institutional debt market through technology. Our two-sided debt marketplace helps institutional and HNI investors find the widest network of corporate borrowers and debt products on one side, and helps corporates discover investors and access debt capital efficiently on the other. Switching between platforms is easy, which means investors can lend, invest and trade bonds - all in one place. All of our platforms shake up the traditional debt ecosystem and offer new ways of digital finance.
Yubi Credit Marketplace - With the largest selection of lenders on one platform, our credit marketplace helps enterprises partner with lenders of their choice for any and all capital requirements.
Yubi Invest - Fixed-income securities platform for wealth managers and financial advisors to channel client investments in fixed income.
Financial Services Platform - Designed for financial institutions to manage co-lending partnerships and asset-based securitization.
Spocto - Debt recovery and risk mitigation platform.
Corpository - Dedicated SaaS solutions platform powered by decision-grade data, analytics, pattern identification, early warning signals and predictions for lenders, investors and business enterprises.
So far, we have onboarded 17,000+ enterprises and 6,200+ investors and lenders, and have facilitated debt volumes of over INR 1,40,000 crore. Backed by marquee investors like Insight Partners, B Capital Group, Dragoneer, Sequoia Capital, LightSpeed and Lightrock, we are the only-of-its-kind debt platform globally, revolutionizing the segment. At Yubi, people are at the core of the business and our most valuable assets. Yubi is constantly growing, with 1000+ like-minded individuals today who are changing the way people perceive debt. We are a fun bunch who are highly motivated and driven to create a purposeful impact. Come, join the club to be a part of our epic growth story.
About The Role
We're looking for a highly skilled, results-driven AI Developer who thrives in fast-paced, high-impact environments. If you are passionate about pushing the boundaries of Computer Vision, OCR, NLP and Large Language Models (LLMs) and have a strong foundation in building and deploying AI solutions, this role is for you. As a Lead Data Scientist, you will take ownership of designing and implementing state-of-the-art AI products. This role demands deep technical expertise, the ability to work autonomously, and a mindset that embraces complex challenges head-on. Here, you won't just fine-tune pre-trained models: you'll be architecting, optimizing, and scaling AI solutions that power real-world applications.
Key Responsibilities
Architect, develop, and deploy high-performance AI solutions for real-world applications.
Implement and optimize state-of-the-art LLM and OCR models and frameworks.
Fine-tune and integrate LLMs (GPT, LLaMA, Mistral, etc.) to enhance text understanding and automation.
Build and optimize end-to-end AI pipelines, ensuring efficient data processing and model deployment.
Work closely with engineers to operationalize AI models in production (Docker, FastAPI, TensorRT, ONNX).
Enhance GPU performance and model inference efficiency, applying techniques such as quantization and pruning.
Stay ahead of industry advancements, continuously experimenting with new AI architectures and training techniques.
Work in a highly dynamic, startup-like environment, balancing rapid experimentation with production-grade robustness.
What We're Looking For
Required Skills & Qualifications:
Proven technical expertise: strong programming skills in Python, PyTorch and TensorFlow, with deep experience in NLP and LLMs.
Hands-on experience in developing, training, and deploying LLMs and agentic workflows.
Strong background in vector databases, RAG pipelines, and fine-tuning LLMs for document intelligence.
Deep understanding of Transformer-based architectures for vision and text processing.
Experience working with Hugging Face, OpenCV, TensorRT, and NVIDIA GPUs for model acceleration.
Autonomous problem solver: you take initiative, work independently, and drive projects from research to production.
Strong experience in scaling AI solutions, including model optimization and deployment on cloud platforms (AWS/GCP/Azure).
Thrives in fast-paced environments: you embrace challenges, pivot quickly, and execute effectively.
Familiarity with MLOps tools (Docker, FastAPI, Kubernetes) for seamless model deployment.
Experience with multi-modal models (Vision + Text).
Good to Have
Financial background and an understanding of corporate finance.
Contributions to open-source AI projects.
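To make the "quantization and pruning" responsibility concrete, here is a hedged sketch of post-training dynamic quantization of a transformer classifier with PyTorch and Hugging Face Transformers; the model name is illustrative, and any latency or accuracy gains would need to be benchmarked per use case.

```python
# Post-training dynamic quantization of a transformer classifier.
# Model name is illustrative; downloads weights from the Hugging Face Hub.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

# Quantize Linear layers to int8 weights; activations stay in float at runtime.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

inputs = tokenizer("The quarterly filing looks clean.", return_tensors="pt")
with torch.no_grad():
    logits = quantized(**inputs).logits
print(logits.softmax(dim=-1))  # class probabilities from the quantized model
```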

Posted 1 month ago

Apply

10.0 - 17.0 years

0 - 0 Lacs

Chennai

Work from Office

Job Purpose
This role includes designing and building AI/ML products at scale to improve customer understanding and sentiment analysis, recommend customer requirements, recommend optimal inputs, and improve process efficiency. This role will collaborate with product owners and business owners.
Key Responsibilities
Lead a team of junior and experienced data scientists.
Lead and participate in end-to-end ML project deployments that require feasibility analysis, design, development, validation, and application of state-of-the-art data science solutions.
Push the state of the art in applying data mining, visualization, predictive modelling, statistics, trend analysis, and other data analysis techniques to solve complex business problems, including lead classification, recommender systems, product life-cycle modelling, design optimization, and product cost and weight optimization.
Leverage and enhance applications utilizing NLP, LLMs, OCR, image-based models and deep learning neural networks for use cases including text mining, speech and object recognition.
Identify future development needs, advance new and emerging ML and AI technology, and set the strategy for the data science team.
Cultivate a product-centric, results-driven data science organization.
Write production-ready code and deploy real-time ML models; expose ML outputs through APIs.
Partner with data/ML engineers and vendor partners on input data pipeline development and ML model automation.
Provide leadership to establish world-class ML lifecycle management processes.
Job Requirements
Qualifications: MTech / BE / BTech / MSc in CS, Stats or Maths
Experience: Over 10 years of applied machine learning experience across Machine Learning, Statistical Modelling, Predictive Modelling, Text Mining, Natural Language Processing (NLP), LLMs, OCR, image-based models, deep learning and optimization.
Expert Python programmer: SQL, C#, extremely proficient with the SciPy stack (e.g. numpy, pandas, scikit-learn, matplotlib).
Proficiency in working with open-source deep learning platforms like TensorFlow, Keras, PyTorch.
Knowledge of the Big Data ecosystem (Apache Spark, Hadoop, Hive, EMR, MapReduce).
Proficient in cloud technologies and services (Azure Databricks, ADF, Databricks MLflow).
Functional Competencies
A demonstrated ability to mentor junior data scientists and proven experience in collaborative work environments with external customers.
Proficient in communicating technical findings to non-technical stakeholders.
Holding routine peer code reviews of ML work done by the team.
Experience in leading and/or collaborating with small to mid-sized teams.
Experienced in building scalable, highly available distributed systems in production.
Experienced in ML lifecycle management and MLOps tools and frameworks.
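As one possible illustration of "expose ML outputs through APIs", the sketch below serves a serialized classifier behind a FastAPI endpoint; the model file, feature names, and route are hypothetical and stand in for whatever the team's models actually consume.

```python
# Minimal model-serving sketch: a scikit-learn classifier behind FastAPI.
# The model path and feature names are hypothetical placeholders.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("lead_classifier.joblib")  # hypothetical serialized model

class LeadFeatures(BaseModel):
    visits: int
    pages_viewed: int
    days_since_contact: int

@app.post("/score")
def score(features: LeadFeatures) -> dict:
    """Return the model's probability that a lead converts."""
    row = [[features.visits, features.pages_viewed, features.days_since_contact]]
    proba = float(model.predict_proba(row)[0][1])
    return {"conversion_probability": proba}

# Run (assuming this file is app.py): uvicorn app:app --reload
```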

Posted 1 month ago

Apply

8.0 - 10.0 years

9 - 13 Lacs

Bengaluru

Work from Office

We are seeking a skilled and motivated Team Lead - Automation to oversee the automation of processes within our Finance, Procurement, and HR functions. This role will be responsible for delivering use cases within their portfolio while also engaging in hands-on development. The Team Lead will manage a small team and serve as the primary point of contact for business stakeholders regarding delivery and automation initiatives.
Delivery Management:
Act as the point of contact for a specific business function to deliver the identified automation use cases and own the performance of those use cases in production.
Build and execute the delivery plan for use cases approved for delivery, in alignment with relevant stakeholders.
Manage the capacity planning needed to deliver the automation pipeline.
Collaborate with project managers to ensure projects stay on track.
Manage delivery governance with relevant stakeholders to communicate automation program status, drive escalations and support needs.
Collaborate with IT teams to ensure all IT prerequisites are delivered on time.
Technical Management:
Act as a Technical Lead to design automation solutions for different business problems.
Provide technical assistance to developers as and when needed.
Perform technical governance on the deliverables of the development team.
Perform hands-on technical development for critical use cases.
Drive innovation by performing proofs of concept with advanced technologies like AI, ML and LLMs.
Operations Management:
Responsible for incident management and change management for all live bots in scope.
Responsible for managing governance and reporting for operations.
Stakeholder Management:
Excellent stakeholder management skills to understand business expectations and deliver through multiple teams.
Drive multiple initiatives with stakeholders of varied skills across different roles.
Who you are:
Education: Bachelor's or Master's degree in computer science.
Proficiency in English (verbal, written).
Proven experience: 8-10 years of experience in developing and delivering Robotic Process Automation use cases for functions like Finance, Procurement and HR.
Expertise in RPA tools such as Power Automate Desktop, Automation Anywhere and UiPath.
Technical expertise: programming languages (.NET, Java, VB, Python), databases (SQL), OCR technologies, Excel macros.
Expertise in utilizing RPA capabilities to automate SAP applications, web applications and document data extraction.
Experience in managing delivery of automation use cases.
Stakeholder management: experienced in managing internal and external stakeholders effectively.
Secondary Skills:
Enterprise architecture: should be well versed in the enterprise architecture landscape.
Experience in delivering solutions for GBS domains (Finance, Procurement, HR, Customer Experience, etc.).

Posted 1 month ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Description: RPA Business Analyst
We are seeking a highly skilled and experienced Senior UiPath Business Analyst to drive automation initiatives, leveraging UiPath's Document Understanding and Agentic AI capabilities. The ideal candidate will work closely with business stakeholders, developers, and project teams to analyze, design, and optimize automated business processes.
Responsibilities as BA:
Collaborate with business stakeholders to gather, document, and analyze automation requirements.
Design and optimize automation workflows using UiPath's Document Understanding framework.
Develop business cases and process models to support automation initiatives.
Work closely with RPA developers to ensure business requirements are effectively translated into automation solutions.
Utilize UiPath Agentic AI capabilities to enhance process intelligence and decision-making automation.
Conduct gap analysis and recommend process improvements for automation feasibility.
Lead workshops, training sessions, and demonstrations for business users on automation solutions.
Create detailed functional and technical documentation, including PDDs (Process Design Documents) and SDDs (Solution Design Documents).
Oversee testing and validation of automation solutions, ensuring alignment with business objectives.
Monitor and analyze automation performance, providing recommendations for continuous improvement.
Ensure compliance with governance frameworks and best practices in RPA implementations.
Assist in change management, adoption, and scaling of automation solutions across the enterprise.
Support:
Understand system and business change cycles to ensure automated processes are proactively amended to reflect changes.
Assist developers with fixing bugs and enhancing the code of automations in production.
Develop strategies to optimize bot schedules to achieve maximum productivity of digital workers.
Interpersonal Skills:
Is a strong team player - collaborates well with others to solve problems and actively incorporates input from various sources.
Has good organizational skills to schedule processes, monitor resources, and log issues.
Is a creative, out-of-the-box thinker who likes to be challenged.
Has good communication skills, written and verbal.
Has superior listening skills and is customer-service oriented.
Pays attention to detail and is able to manage multiple priorities in a fast-paced environment.
Performs these functions with minimal supervision.
Education and Experience Level:
4+ years of experience as a Business Analyst, with at least 2+ years in UiPath automation projects.
Strong expertise in UiPath's Document Understanding framework, including AI/ML-based OCR, taxonomy, and classification.
Experience working with UiPath Agentic AI solutions to enhance automation and decision-making processes.
Knowledge of business process modeling tools such as Visio or Signavio.
Ability to translate complex business processes into automation-ready workflows.
Strong analytical and problem-solving skills with a keen eye for detail.
Excellent communication and stakeholder management skills.
Experience working in Agile/Scrum methodologies.
Knowledge of UiPath Orchestrator, Studio, Task Capture and AI Fabric is a plus.
Familiarity with data analytics and AI-driven automation solutions.
UiPath Business Analyst or related RPA certifications preferred.

Posted 1 month ago

Apply

2.0 - 5.0 years

2 - 4 Lacs

Hyderābād

Remote

As a member of the Accounting team, the Accounts Payable Coordinator will operate in a high-transaction environment, appropriately accounting for supplier invoice activity in Workday Financials. This role works closely with Accounting, Procurement, and the business to ensure accurate, complete and timely processing of supplier invoices and payments. The ideal candidate can undertake a variety of tasks and work diligently under pressure. They are comfortable working with high attention to detail and incorporating new and effective ways to achieve better results.
What You'll Do:
Process invoices and check requests, including entry, matching to approved purchase orders, and monitoring electronic exceptions and automated OCR entry
Review submitted expense reports for appropriate support per business rules
Confirm and verify payment dates
Verify sales tax amounts
Create new suppliers and manage supplier changes with appropriate support and approvals
Coordinate and prepare weekly check runs
Responsible for month-end A/P accruals
Ensure set controls are met for duplicate payments and overcharges
What You'll Bring:
At least 2-5 years of A/P experience in a high-transaction environment, processing 1,000 invoices a month
Experience in Microsoft Office
Experience with Workday Financials preferred
Strong attention to detail as well as excellent verbal and written communication skills
Able to manage self-study training, including the ability to explore existing business operations and procedures as learning materials
Stay up to date on everything Blackbaud. Blackbaud is a digital-first company which embraces a flexible remote or hybrid work culture. Blackbaud supports hiring and career development for all roles from the location you are in today! Blackbaud is proud to be an equal opportunity employer and is committed to maintaining an inclusive work environment. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, physical or mental disability, age, or veteran status or any other basis protected by federal, state, or local law.

Posted 1 month ago

Apply

5.0 years

5 - 8 Lacs

Vadodara

Remote

Welcome to Veradigm! Our Mission is to be the most trusted provider of innovative solutions that empower all stakeholders across the healthcare continuum to deliver world-class outcomes. Our Vision is a Connected Community of Health that spans continents and borders. With the largest community of clients in healthcare, Veradigm is able to deliver an integrated platform of clinical, financial, connectivity and information solutions to facilitate enhanced collaboration and exchange of critical patient information. Veradigm Veradigm is here to transform health, insightfully. Veradigm delivers a unique combination of point-of-care clinical and financial solutions, a commitment to open interoperability, a large and diverse healthcare provider footprint, along with industry proven expert insights. We are dedicated to simplifying the complicated healthcare system with next-generation technology and solutions, transforming healthcare from the point-of-patient care to everyday life. For more information, please explore www.veradigm.com. What will your job look like: Job Summary: We are seeking a skilled .NET Full Stack Developer with 5+ years of experience in designing and developing web applications, including the integration of AI/ML solutions into business applications. The ideal candidate will be proficient in both front-end and back-end development using the Microsoft technology stack and have hands-on experience in leveraging AI APIs, machine learning models , or services like Azure AI, OpenAI, or custom ML models . Key Responsibilities: Develop, test, and maintain scalable web applications using ASP.NET Core, C#, MVC, Web API . Build modern, responsive front-end interfaces using Angular / React / Blazor and integrate with backend APIs. Work with Entity Framework / EF Core and SQL Server / Azure SQL to manage data models and performance. Integrate AI features (e.g., chatbots, recommendation systems, NLP, OCR, or predictive analytics) using APIs or custom ML models. Utilize Azure Cognitive Services , OpenAI , Azure Machine Learning , or similar platforms for AI implementation. Collaborate with data scientists or ML engineers to embed models into production-ready systems. Follow best practices in coding, testing, DevOps (CI/CD), and secure application development. Participate in Agile development processes including planning, code reviews, and retrospectives. An Ideal Candidate will have: 5+ years of experience in .NET development (C#, ASP.NET Core, Web API, MVC). Front-end experience with Angular / React / Blazor , HTML5, CSS, JavaScript/TypeScript. Hands-on experience integrating with AI services or APIs (e.g., OpenAI, Azure Cognitive Services, Google Cloud AI). Experience with RESTful APIs , Entity Framework , and SQL Server . Understanding of cloud platforms like Azure or AWS . Familiarity with Git, CI/CD pipelines, and Agile development. Good analytical, problem-solving, and communication skills. Benefits Veradigm believes in empowering our associates with the tools and flexibility to bring the best version of themselves to work. Through our generous benefits package with an emphasis on work/life balance, we give our employees the opportunity to allow their careers to flourish. 
Quarterly Company-Wide Recharge Days
Flexible Work Environment (Remote/Hybrid Options)
Peer-based incentive "Cheer" awards
"All in to Win" bonus program
Tuition Reimbursement Program
To know more about the benefits and culture at Veradigm, please visit the links mentioned below:
https://veradigm.com/about-veradigm/careers/benefits/
https://veradigm.com/about-veradigm/careers/culture/
We are an Equal Opportunity Employer. No job applicant or employee shall receive less favorable treatment or be disadvantaged because of their gender, marital or family status, color, race, ethnic origin, religion, disability or age; nor be subject to less favorable treatment or be disadvantaged on any other basis prohibited by applicable law.
Veradigm is proud to be an equal opportunity workplace dedicated to pursuing and hiring a diverse and inclusive workforce. Thank you for reviewing this opportunity! Does this look like a great match for your skill set? If so, please scroll down and tell us more about yourself!

Posted 1 month ago

Apply

10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Role overview
1) 10+ years of engineering experience, with at least 3-5 years in engineering leadership roles.
2) Deep understanding of identity verification workflows, user onboarding, or KYC/AML systems.
3) Experience integrating with or building APIs for document OCR, biometric verification, AML screening, and fraud detection.
4) Strong system design skills for high-availability, privacy-first, and auditable systems.
5) Familiarity with compliance frameworks such as GDPR, PCI DSS, SOC 2, FATF guidelines, or local eKYC laws.
6) Track record of leading technical teams through scale, complexity, and regulatory change.
7) Excellent communication and stakeholder management skills.
Nice to Have
1) Experience working with regulated financial services, crypto, or telecom identity systems.
2) Familiarity with modular identity frameworks (DID, verifiable credentials, reusable KYC).
3) Experience in markets with fragmented ID infrastructure (e.g., Africa, Southeast Asia).
What would you do here?
Identity & KYC Platform Ownership
> Define and drive the architectural vision and roadmap for identity, KYC, and verification systems.
> Lead the engineering efforts across user onboarding, identity proofing, document OCR, biometric checks, AML screening, and account verification workflows.
> Ensure systems comply with regional and international regulations (e.g., AML/CFT, GDPR, eKYC standards).
> Build a scalable and pluggable platform that integrates seamlessly with third-party verification vendors.
Engineering & Org Leadership
> Manage and mentor a team of engineering managers, architects, and ICs across backend, mobile, and platform security teams.
> Build a world-class team culture centered on trust, ownership, and quality.
> Define org structure, career paths, and hiring plans to support scaling.
Security, Compliance & Observability
> Implement best practices in data security, encryption, and user privacy across the identity platform.
> Partner with compliance and legal teams to adapt the platform to evolving regulatory requirements.
> Ensure detailed observability of user verification flows, errors, drop-offs, and audit trails.
Cross-Functional Collaboration
> Work closely with Product, Risk, Legal, and Operations teams to align roadmap and OKRs.
> Support the launch of KYC features in new regions by adapting verification logic and flows.
> Own vendor evaluations and partnerships for biometric ID, OCR, AML, and risk scoring systems.
Delivery & Incident Management
> Drive reliable, timely delivery of new identity features, compliance updates, and platform improvements.
> Ensure systems are resilient and maintain high availability (99.9%+ uptime).
> Establish incident response processes for sensitive identity workflows and coordinate postmortems.

Posted 1 month ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Location: Pune (Work from Office) Experience: 6–8 years in Software Testing Practices + 2+ years in a leadership role About Zitics Zitics is revolutionizing workflow digitization and optimization for financial institutions. By leveraging cutting-edge technologies like AI/ML models , cloud platforms , and robust integration frameworks, we transform complex workflows into efficient, scalable, and secure ecosystems. Join us to be a key contributor in building transformative technologies that shape the future of FinTech. What You’ll Do As a Test Lead at Zitics, you will play a critical role in ensuring our multi-application FinTech ecosystem is reliable, secure, scalable, and compliant. You will lead the QA team and collaborate with cross-functional stakeholders to deliver high-quality software across web, API, and data processing layers. Lead End-to-End Test Strategy & Execution · Define and implement comprehensive QA strategies across Zitics products . · Establish and manage functional, regression, integration, UAT, and performance test plans aligned to sprint deliverables and release milestones. · Ensure 100% traceability from user stories to test cases to align with ISO 27001/9001/42001 compliance. Drive Automation at Scale · Champion the adoption and enhancement of automation tools like Selenium, Cypress, Postman, and Jest for frontend, backend, and API testing. · Integrate test suites into CI/CD pipelines (Jenkins/Bitbucket) for nightly runs and automated regression checks. · Collaborate with DevOps to ensure seamless execution of automated smoke and sanity tests during deployments. Validate AI & Document Processing Pipelines · Design and execute test cases to validate AI-generated data from OCR and document mapping modules. · Verify unstructured-to-structured data transformations and ensure mapping integrity against dynamic JSON templates and business rules. Ensure Security, Role-Based Access & Compliance Coverage · Test access control logic (RBAC/UBAC), data segregation, and security validations for compliance with financial and data protection regulations. · Work closely with compliance and product teams to ensure test artifacts meet audit requirements. API and Microservice Validation · Develop and manage robust test plans for GraphQL and REST APIs including auth, token management, schema validations, and payload consistency. · Simulate real-world scenarios and test for performance, failure cases, and edge conditions. Lead, Mentor, and Grow the QA Team · Manage test engineers across multiple modules; assign tasks, review outcomes, and ensure effective workload distribution. · Conduct regular knowledge-sharing sessions, QA reviews, and retrospectives to build a quality-first culture. Report, Communicate & Improve Continuously · Track quality metrics, bug trends, and test coverage through dashboards and regular reports. · Proactively identify risks, bottlenecks, and quality gaps—escalate or resolve them with data-backed recommendations. · Collaborate closely with product managers, developers, and release engineers to ensure every release meets quality gates. What We’re Looking For We’re looking for a seasoned QA professional and hands-on leader who thrives in fast-paced, product-driven environments and is passionate about delivering robust, secure, and compliant platforms. You should bring both strategic thinking and tactical execution to ensure quality across the entire software lifecycle. 
Must-Have Skills · 6-8 years of experience in software testing, with 2+ years in a test leadership role. · Proven expertise in manual and automated testing for web applications, microservices, and APIs. · Strong experience with test automation tools like Selenium, Cypress, Postman, Jest, or similar. · Hands-on experience in testing GraphQL/REST APIs, including schema validation and auth testing (OAuth2, JWT). · Solid understanding of RBAC/UBAC access control testing, database validation (MySQL, NoSQL), and business logic coverage. · Familiarity with CI/CD pipelines (Jenkins/Bitbucket) and integrating automated test suites into deployment workflows. · Experience with performance and basic security testing to ensure product scalability and compliance. · Understanding of ISO 27001/9001/42001 or other security/compliance frameworks and ability to maintain evidence for audits. Good-to-Have · Exposure to AI/ML workflows, particularly OCR-based document processing, unstructured-to-structured data transformation, and AI output validation. · Hands-on experience in testing AI-driven decision systems, including model input/output validation, edge case testing, and confidence threshold verification. · Experience working in FinTech, RegTech, or compliance-heavy enterprise environments. · Knowledge of Kafka/RabbitMQ based event-driven architectures. · ISTQB Advanced Level, Certified Agile Tester, or similar certification. Who You Are · A problem-solver with an eye for detail and a mindset of continuous improvement. · A clear communicator who can align cross-functional teams toward a shared definition of quality. · A strong leader who can mentor QA engineers, build scalable testing frameworks, and own product quality end-to-end. · Someone who thrives in a startup-like, high-impact environment, and takes ownership of challenges with confidence and optimism. Why Join Zitics? · Be a part of innovation in the FinTech space, working on cutting-edge technologies. · Join a collaborative, growth-oriented culture that values continuous learning and development. · Enjoy competitive compensation, benefits, and opportunities for career advancement. How to Apply If you're ready to take your career to the next level and make an impact, we'd love to hear from you! Send your resume to hello@zitics.com or apply directly through LinkedIn.
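For illustration, here is a compact API contract test of the kind this role would automate, written with pytest and the requests library as stand-ins for the tooling named above; the endpoint, token handling, and expected schema are placeholders, not Zitics APIs.

```python
# Illustrative API contract test; run with pytest. All names are placeholders.
import requests

BASE_URL = "https://api.example.test"   # hypothetical service under test
TOKEN = "test-token"                    # would come from a secure fixture/vault

def test_document_status_contract():
    """Check auth handling, status code, and a minimal payload schema."""
    resp = requests.get(
        f"{BASE_URL}/v1/documents/123/status",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    assert resp.status_code == 200
    body = resp.json()
    # Schema/contract checks: required keys and value types
    assert {"document_id", "status", "confidence"} <= body.keys()
    assert isinstance(body["confidence"], float)

def test_rejects_missing_token():
    """Unauthenticated requests should be refused."""
    resp = requests.get(f"{BASE_URL}/v1/documents/123/status", timeout=10)
    assert resp.status_code in (401, 403)
```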

Posted 1 month ago

Apply

0.0 - 3.0 years

0 Lacs

Sanand, Ahmedabad, Gujarat

On-site

Job Information
Job Type: Permanent
Industry: Non Profit Organization Management
Work Experience: 1-3 years
Date Opened: 06/19/2025
City: Sanand (travelling required)
State/Province: Gujarat
Country: India
Zip/Postal Code: 382110
Job Description
Reporting to: State Lead, with dotted-line reporting to Senior Manager, Founders' Office.
The Master Trainer for the AI for All initiative in Sanand Block will be responsible for leading capacity-building efforts for teachers and ensuring high-quality delivery of the AI curriculum in classrooms. The Master Trainer will lead and facilitate structured teacher training, design contextual learning strategies, and offer ongoing mentoring through school visits and virtual sessions. With a strong grounding in AI and STEM concepts and an empathetic approach to adult learning, the Master Trainer will bridge technical content with accessible pedagogy. This includes helping teachers simplify and adapt AI concepts for young learners, managing hands-on classroom tools, and ensuring alignment with the National Education Policy 2020 and National Curriculum Framework 2023.
Role overview:
1. Teacher Training and Support
Conduct structured training sessions for school teachers on Basic and Advanced AI curriculum modules, including tools like OCR, speech-to-speech translation, object recognition and image generation.
Support teachers in lesson planning and classroom delivery, simplifying AI concepts for students with minimal digital exposure.
Provide continuous mentorship through school visits, calls and group sessions, addressing challenges and offering practical classroom strategies.
Evaluate teacher progress through informal assessments and observation, refining training content to meet evolving needs.
2. Curriculum Delivery and Adaptation
Ensure AI content is delivered in an engaging, hands-on manner aligned with NEP 2020 and NCF 2023.
Design and adapt classroom activities that contextualize AI through real-life examples and local relevance.
Support teachers in integrating practical applications of AI into regular subjects to improve student understanding and enthusiasm.
3. Project Coordination and Monitoring
Collaborate with the Project Coordinator and Field Officers to ensure smooth curriculum rollout.
Participate in planning and logistics for training sessions, Chip Camps, and career awareness events.
Maintain detailed school-level records including teacher participation, session feedback and learning outcomes.
Conduct regular observation visits to ensure high-quality curriculum delivery and share feedback for program refinement.
4. Data Collection and Reporting
Track teacher performance, session effectiveness and classroom engagement using defined templates and tools.
Contribute to monthly and quarterly reporting by documenting learnings, highlights and implementation challenges.
Assist in generating insights for program iteration, scaling, and curriculum enhancement.
5. Stakeholder Engagement
Build and sustain relationships with school leaders, teachers and government education officials to ensure teacher participation and program continuity.
Represent the training and classroom support aspects of the program in local review meetings and teacher clusters.
Provide inputs to the project team on teacher and school readiness, support needs and community-level dynamics.
6. Resource and Content Support
Guide effective use of AI toolkits, tablets, and digital materials provided to schools.
Support teachers and students in using beginner-friendly, open-source AI tools, troubleshooting issues where needed.
Coordinate with the project team to ensure timely delivery and availability of learning materials.
Requirements
The ideal candidate is someone who is:
Passionate about education and emerging technologies like AI.
Experienced in training and mentoring educators.
Adaptable to real-world classroom dynamics, especially in low-resource environments.
A strong communicator in Gujarati and English.
Motivated by impact, relationship-building, and long-term teacher development.
Competencies
1. AI and STEM Knowledge: Solid understanding of foundational AI tools and their use in education; comfortable working with beginner-friendly, open-source platforms.
2. Teacher Training and Mentorship: Ability to break down technical concepts and build teacher confidence, including for those with limited digital experience. Skilled in designing and delivering interactive, age-appropriate training sessions aligned with curriculum objectives.
3. Communication Skills: Strong verbal and written communication in Gujarati and English, with the ability to engage effectively with teachers, students, school leaders, and internal teams.
4. Pedagogical Alignment: Understanding of classroom dynamics and ability to adjust for diverse student needs.
5. Problem Solving: Responsive to classroom-level challenges and training needs, offering creative, practical solutions tailored to low-resource settings.
Process Competency
1. Monitoring and Evaluation: Skilled at tracking teacher performance and student engagement during school visits and trainings, and feeding this data into program decisions.
2. Training Support and Follow-Up: Ensures continuity of learning by offering consistent follow-up support and adjusting mentoring plans as needed.
3. Stakeholder Coordination: Effectively communicates with school staff, education officials, and internal teams to ensure smooth implementation.
4. Resource Facilitation: Guides and supports appropriate use of AI toolkits and digital resources; addresses common challenges.
Personal Attributes
1. Empathetic and Approachable: Builds trust and rapport with teachers; sensitive to diverse levels of digital familiarity and classroom confidence.
2. Adaptable: Able to modify training techniques and resources based on school conditions and teacher needs.
3. Passionate about Education and Technology: Committed to improving AI and STEM access in public schools and enabling practical learning.
4. Proactive Problem-Solver: Takes initiative to address gaps in delivery or training and offers grounded, practical solutions.
5. Well-Organized: Balances multiple responsibilities like training, mentoring and reporting with clear documentation and structured planning.

Posted 1 month ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

AI Engineer & Automation Expert
📍 Location: Banjara Hills, Hyderabad
💼 Employment Type: Full-time / Contract-to-Hire
💰 Compensation: Competitive + Performance-Based Incentives
About MetaNova AI
MetaNova AI is building advanced AI-powered agents and automation systems across multiple real-world use cases. We're solving practical problems with scalable solutions that combine conversational AI, data pipelines, API integrations, and real-time intelligence.
🔗 Learn more: www.MetaNovaAI.com
Role Overview
We're looking for a highly technical AI Engineer and Automation Expert with experience in Make.com, n8n, and other automation tools to help design and deploy AI agents across multiple projects. You'll be building agent workflows from scratch, connecting APIs, orchestrating logic, and deploying high-uptime, modular automations that actually work in production. This role requires strong hands-on experience with automation frameworks, data flow design, and real-world agent building.
🛠 Responsibilities
Build advanced agent workflows using Make.com, n8n, LangChain, or equivalent platforms
Design multi-step automations involving OpenAI, Google APIs, WhatsApp, webhooks, voice APIs, databases, CRMs, and form tools
Integrate AI models (GPT-4, Whisper, vision models, etc.) into workflow logic
Connect and manage external APIs, handle webhooks, and structure modular pipelines
Develop and maintain ETL pipelines to extract, transform, and load structured data from various sources (PDFs, voice, text, JSON, etc.)
Collaborate with cross-functional teams to turn use cases into working AI agents
Build monitoring, logging, and fail-safe mechanisms to ensure system stability
Work with large datasets and optimize flows for speed and performance
🧠 Preferred Experience
2-4 years of experience in AI automation, integrations, or workflow engineering
Hands-on experience with n8n, Make, Pipedream, LangChain, or similar orchestration tools
Proven experience building and deploying multi-step AI agents
Strong understanding of API integration, event-driven design, and automation logic
Experience with ETL pipelines, especially in low-code or Python-based environments
Exposure to LLM orchestration, semantic search, PDF/voice transcription, and classification tasks
Bonus: experience with computer vision, CCTV-based tracking, or object detection
Tech Stack You Should Be Comfortable With
Tools: n8n, Make, LangChain, Python, Pipedream, Airtable, Twilio, Google Cloud, OpenAI, Whisper API, OCR APIs
Data: JSON, REST APIs, SQL/NoSQL, basic ETL concepts
Deployment: webhook-based triggers, cron flows, modular workflows
🧩 Soft Skills
You can think like an engineer, but build like a product hacker
You're fast, reliable, and prioritize working outcomes over theory
You can read documentation, solve integration problems, and debug with minimal handholding
You care about clean logic, reusability, and uptime
To Apply
Send your resume plus 2-3 examples of past automation projects (or a GitHub/Notion link). Subject: Application – AI Automation Engineer
This is a hands-on builder role. If you're excited by automating workflows, connecting AI to business processes, and building powerful agents that save time and solve real-world problems, we want to hear from you.
🔗 www.MetaNovaAI.com
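As a loose sketch of the webhook-driven agent steps described above, the snippet below receives a payload, asks an LLM to classify it, and returns a routing label that a Make.com or n8n flow could act on; the model name, route, and classification buckets are assumptions, and an OPENAI_API_KEY is expected in the environment.

```python
# Illustrative webhook step for an agent flow: classify an inbound message
# and return a routing label. Route, model, and buckets are placeholders.
from fastapi import FastAPI, Request
from openai import OpenAI

app = FastAPI()
client = OpenAI()  # reads OPENAI_API_KEY from the environment

@app.post("/webhook/lead")
async def handle_lead(request: Request) -> dict:
    payload = await request.json()
    message = payload.get("message", "")
    # Ask the model to sort the message into a routing bucket.
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Classify the message as sales, support, or spam. Reply with one word."},
            {"role": "user", "content": message},
        ],
    )
    label = completion.choices[0].message.content.strip().lower()
    # In a Make.com/n8n flow, this label would trigger the next module.
    return {"route": label}
```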

Posted 1 month ago

Apply

1.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Our Company Changing the world through digital experiences is what Adobe’s all about. We give everyone—from emerging artists to global brands—everything they need to design and deliver exceptional digital experiences! We’re passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen. We’re on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours! At Adobe, you will be immersed in an exceptional work environment that is recognized around the world. You will also be surrounded by colleagues who are committed to helping each other grow through our outstanding Check-In approach where ongoing feedback flows freely. If you’re looking to make an impact, Adobe's the place for you. Discover what our employees are saying about their career experiences on the Adobe Life blog and explore the meaningful benefits we offer. About the Team: Our team has built World’s best-embedded and host technologies for printing. And our customers include leading MFP and Printer manufacturers. Our print technologies span multiple print segments: Graphic Arts, Digital Printing, Wide Format, and Office Printing. No matter whether people “Print for Earning” (Big Presses printing - Magazines, News Papers, Banners, Packages, etc.) or they “Print because they Earn” (Office & Home printers), they most probably use our print technology. Work involves deep domain (PDF, PostScript etc.), but it also spreads across multiple platforms and operating systems. We also own patented Scan technology that generates high quality, intelligent, searchable, reflowable, compact, secure PDFs from color or b/w scanned images. All kind of digital security is implemented in both Print & Scan workflows. Responsibilities: The engineer would contribute extensively in analysis, design, and programming for major and dot software releases. The role would from time to time require to collaborate with product marketing to evaluate and resolve new features to be added. Should be a proactive self-starter who can develop methods, techniques, and evaluation criterion for attaining results. A specialist on one or more platforms and knowledgeable of cross-platform issues, products, and customer requirements. You would contribute significantly towards the development and application of sophisticated concepts, technologies, and expertise within the team. Review and provide feedback on features, technology, architecture, designs and creative problem solving You would be required to address broad architecture and design issues of future products or technologies and provide strategic direction in evaluating new technologies in their area of expertise Domain: Print workflows (Postscript, PDF, Graphics, Color, Font, etc.) Scan (OCR, Compression, Digital Security, etc.) Required skills: B.Tech / M.Tech in Computer Science & Engineering from an outstanding institute. 1 to 2 years of hands-on design/development experience. Strong C/C++ coding background Proficiency in data structures and algorithms Platforms: Windows, Linux, Embedded (Intel/ARM) Tools: Visual Studio, GCC, CMake, Valgrind, Helgrind, Callgrind Good understanding of object-oriented design. Should have excellent computer science fundamentals Must have excellent communication skills. 
Adobe is proud to be an Equal Employment Opportunity employer. We do not discriminate based on gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. Learn more about our vision here. Adobe aims to make Adobe.com accessible to any and all users. If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email accommodations@adobe.com or call (408) 536-3015.

Posted 1 month ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Key Responsibilities
Build robust document data extraction pipelines using NLP and OCR techniques.
Develop and optimize end-to-end workflows for parsing scanned/image-based documents (PDFs, JPGs, TIFFs) and structured files (MS Excel, MS Word).
Leverage LLMs (OpenAI GPT, Claude, Gemini, etc.) for advanced entity extraction, summarization, and classification tasks.
Design and implement Python-based scripts for parsing, cleaning, and transforming data.
Integrate with Azure services for document storage, compute, and secured API hosting (e.g., Azure Blob, Azure Functions, Key Vault, Azure Cognitive Services).
Deploy and orchestrate workflows in Azure Databricks (including Spark and ML pipelines).
Build and manage API calls for model integration, rate limiting, and token control using AI gateways.
Automate export of results into SQL/Oracle databases and enable downstream access for analytics/reporting.
Handle diverse metadata requirements and create reusable, modular code for different document types.
Optionally visualize and report data using Power BI and export data into Excel for stakeholder review.
(An illustrative code sketch of such a pipeline appears after this listing.)
Technical Skills Required
Skills & Qualifications:
Strong programming skills in Python (Pandas, Regex, Pytesseract, spaCy, LangChain, Transformers, etc.)
Experience with Azure Cloud (Blob Storage, Function Apps, Key Vaults, Logic Apps)
Hands-on experience with Azure Databricks (PySpark, Delta Lake, MLflow)
Familiarity with OCR tools such as Tesseract, Azure OCR, AWS Textract, or Google Vision API
Proficiency in SQL and experience with Oracle database integration (using cx_Oracle, SQLAlchemy, etc.)
Experience working with LLM APIs (OpenAI, Anthropic, Google, or Hugging Face models)
Knowledge of API development and integration (REST, JSON, API rate limits, authentication handling)
Excel data manipulation using Python (e.g., openpyxl, pandas, xlrd)
Understanding of Power BI dashboards and integration with structured data sources
Nice To Have
Experience with LangChain, LlamaIndex, or similar frameworks for document Q&A and retrieval-augmented generation (RAG)
Background in data science or machine learning
CI/CD and version control (Git, Azure DevOps)
Familiarity with data governance and PII handling in document processing
Soft Skills
Strong problem-solving skills and an analytical mindset
Attention to detail and the ability to work with messy/unstructured data
Excellent communication skills to interact with technical and non-technical stakeholders
Ability to work independently and manage priorities in a fast-paced environment
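To make the pipeline described above concrete, here is a minimal, hedged sketch (not this employer's actual code): it OCRs one scanned page with pytesseract and asks an OpenAI-compatible chat model to return a few document fields as JSON. The file name, model name, and field list are illustrative placeholders; a production version would add the Azure storage, gateway rate-limiting, and database-export steps the posting mentions.

```python
"""Minimal OCR + LLM extraction sketch. Assumes Tesseract is installed locally
and OPENAI_API_KEY is set; field names and model are placeholders."""
import json
import re

import pytesseract
from PIL import Image
from openai import OpenAI


def ocr_page(image_path: str) -> str:
    """Run Tesseract OCR on one scanned page (PNG/JPG/TIFF) and tidy whitespace."""
    raw = pytesseract.image_to_string(Image.open(image_path))
    return re.sub(r"\s+", " ", raw).strip()


def extract_fields(text: str, client: OpenAI, model: str = "gpt-4o-mini") -> dict:
    """Ask the LLM for a small, fixed set of fields; the schema is illustrative."""
    prompt = (
        "Extract invoice_number, invoice_date and total_amount from the text below. "
        "Reply with JSON only.\n\n" + text[:4000]  # naive length/token control
    )
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    # Assumes the model replies with bare JSON; real code would validate/retry.
    return json.loads(resp.choices[0].message.content)


if __name__ == "__main__":
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    page_text = ocr_page("sample_invoice.png")  # hypothetical input file
    print(extract_fields(page_text, client))
```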

Posted 1 month ago

Apply

2.0 - 5.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

As a member of the Accounting team, the Accounts Payable Coordinator will operate in a high-transaction environment by appropriately accounting for supplier invoice activity in Workday Financials. This role works closely with Accounting, Procurement, and the business to ensure accurate, complete, and timely processing of supplier invoices and payments. The ideal candidate can undertake a variety of tasks and work diligently under pressure. They are comfortable working with high attention to detail and incorporating new and effective ways to achieve better results.
What You’ll Do
Process invoices and check requests, including entry, matching to approved purchase orders, and monitoring electronic exceptions and automated OCR entry
Review submitted expense reports for appropriate support in line with business rules
Confirm and verify payment dates
Verify sales tax amounts
Create new suppliers and manage supplier changes with appropriate support and approvals
Coordinate and prepare weekly check runs
Be responsible for month-end A/P accruals
Ensure set controls are met for duplicate payments and overcharges
What You’ll Bring
At least 2-5 years of A/P experience in a high-transaction environment, processing 1,000 invoices a month
Experience with Microsoft Office
Experience with Workday Financials preferred
Strong attention to detail as well as excellent verbal and written communication skills
Able to manage self-study training, including the ability to explore existing business operations and procedures as learning materials
Stay up to date on everything Blackbaud; follow us on LinkedIn, X, Instagram, Facebook and YouTube.
Blackbaud is a digital-first company which embraces a flexible remote or hybrid work culture. Blackbaud supports hiring and career development for all roles from the location you are in today!
Blackbaud is proud to be an equal opportunity employer and is committed to maintaining an inclusive work environment. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, physical or mental disability, age, or veteran status or any other basis protected by federal, state, or local law. R0012702

Posted 1 month ago

Apply

6.0 years

60 - 65 Lacs

Greater Bhopal Area

Remote

Experience: 6.00+ years
Salary: INR 6000000-6500000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full Time, Permanent position (Payroll and Compliance to be managed by Crop.Photo)
(Note: This is a requirement for one of Uplers' clients - Crop.Photo)
What do you need for this opportunity?
Must-have skills: MAM, App integration
Crop.Photo is looking for: Technical Lead for Evolphin AI-Driven MAM
At Evolphin, we build powerful media asset management solutions used by some of the world’s largest broadcasters, creative agencies, and global brands. Our flagship platform, Zoom, helps teams manage high-volume media workflows—from ingest to archive—with precision, performance, and AI-powered search. We’re now entering a major modernization phase, and we’re looking for an exceptional Technical Lead to own and drive the next-generation database layer powering Evolphin Zoom. This is a rare opportunity to take a critical backend system that serves high-throughput media operations and evolve it to meet the scale, speed, and intelligence today’s content teams demand.
What you’ll own
Leading the re-architecture of Zoom’s database foundation with a focus on scalability, query performance, and vector-based search support
Replacing or refactoring our current in-house object store and metadata database with a modern, high-performance elastic solution
Collaborating closely with our core platform engineers and AI/search teams to ensure seamless integration and zero disruption to existing media workflows
Designing an extensible system that supports object-style relationships across millions of assets, including LLM-generated digital asset summaries, time-coded video metadata, AI-generated tags, and semantic vectors
Driving end-to-end implementation: schema design, migration tooling, performance benchmarking, and production rollout—all with aggressive timelines
Skills & Experience We Expect
We’re looking for candidates with 7–10 years of hands-on engineering experience, including 3+ years in a technical leadership role. Your experience should span the following core areas:
System Design & Architecture (3–4 yrs)
Strong hands-on experience with the Java/JVM stack (GC tuning) and Python in production environments
Led system-level design for scalable, modular AWS microservices architectures
Designed high-throughput, low-latency media pipelines capable of scaling to billions of media records
Familiar with multitenant SaaS patterns, service decomposition, and elastic scale-out/in models
Deep understanding of infrastructure observability, failure handling, and graceful degradation
Database & Metadata Layer Design (3–5 yrs)
Experience redesigning or implementing object-style metadata stores used in MAM/DAM systems
Strong grasp of schema-less models for asset relationships, time-coded metadata, and versioned updates
Practical experience with DynamoDB, Aurora, PostgreSQL, or similar high-scale databases
Comfortable evaluating trade-offs between memory, query latency, and write throughput
Semantic Search & Vectors (1–3 yrs)
Implemented vector search using systems like Weaviate, Pinecone, Qdrant, or Faiss
Able to design hybrid (structured + semantic) search pipelines for similarity and natural-language use cases
Experience tuning vector indexes for performance, memory footprint, and recall
Familiar with the basics of embedding generation pipelines and how they are used for semantic search and similarity-based retrieval
Worked with MLOps teams to deploy ML inference services (e.g., FastAPI/Docker + GPU-based EC2 or SageMaker endpoints)
Understands the limitations of recognition models (e.g., OCR, face/object detection, logo recognition), even if not directly building them
(An illustrative vector-search sketch appears after this listing.)
Media Asset Workflow (2–4 yrs)
Deep familiarity with broadcast and OTT formats: MXF, IMF, DNxHD, ProRes, H.264, HEVC
Understanding of proxy workflows in video post-production
Experience with the digital asset lifecycle: ingest, AI metadata enrichment, media transformation, S3 cloud archiving
Hands-on experience with time-coded metadata (e.g., subtitles, AI tags, shot changes) management in media archives
Cloud-Native Architecture (AWS) (3–5 yrs)
Strong hands-on experience with ECS, Fargate, Lambda, S3, DynamoDB, Aurora, SQS, EventBridge
Experience building serverless or service-based compute models for elastic scaling
Familiarity with managing multi-region deployments, failover, and IAM configuration
Built cloud-native CI/CD deployment pipelines with event-driven microservices and queue-based workflows
Frontend Collaboration & React App Integration (2–3 yrs)
Worked closely with React-based frontend teams, especially on desktop-style web applications
Familiar with component-based design systems, REST/GraphQL API integration, and optimizing media-heavy UI workflows
Able to guide frontend teams on data modeling, caching, and efficient rendering of large asset libraries
Experience with Electron for desktop apps
How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!
About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
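As a rough illustration of the "hybrid (structured + semantic) search" requirement above (not Evolphin's actual implementation), the sketch below builds a small FAISS index over placeholder embeddings, retrieves nearest neighbours for a query vector, and then applies a naive structured metadata filter to the candidates. The embeddings, dimensions, and metadata fields are random stand-ins; a real system would use an embedding model and a proper metadata store.

```python
"""Hybrid retrieval sketch over placeholder data (illustrative only)."""
import numpy as np
import faiss  # pip install faiss-cpu

DIM = 128          # embedding dimension (placeholder)
NUM_ASSETS = 1000  # pretend asset catalogue size

# Fake embeddings standing in for real asset vectors.
rng = np.random.default_rng(0)
embeddings = rng.random((NUM_ASSETS, DIM), dtype=np.float32)
faiss.normalize_L2(embeddings)          # cosine similarity via inner product

index = faiss.IndexFlatIP(DIM)          # exact inner-product index
index.add(embeddings)

# Minimal structured metadata per asset (fields are hypothetical).
metadata = [{"asset_id": i, "format": "ProRes" if i % 2 else "H.264"}
            for i in range(NUM_ASSETS)]


def hybrid_search(query_vec: np.ndarray, required_format: str, k: int = 5):
    """Semantic top-k retrieval, then a simple structured filter on the hits."""
    q = query_vec.astype(np.float32).reshape(1, -1)
    faiss.normalize_L2(q)
    scores, ids = index.search(q, k * 10)   # over-fetch, then filter
    hits = [(metadata[i], float(s))
            for i, s in zip(ids[0], scores[0])
            if i != -1 and metadata[i]["format"] == required_format]
    return hits[:k]


if __name__ == "__main__":
    query = rng.random(DIM, dtype=np.float32)
    for meta, score in hybrid_search(query, required_format="ProRes"):
        print(meta, round(score, 3))
```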

Posted 1 month ago

Apply


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

