Home
Jobs

60722 Python Jobs - Page 32

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Job Description
Analyzing and processing data, building and maintaining models and report templates, and developing dynamic, data-driven solutions. Makes recommendations for key business partners and senior management and communicates conclusions from complex analytical solutions to a wide range of audiences. Leverages analytical tools to provide business and technical support for the analytics process, tools, and applications for a business function or business unit. Conceptualizes, develops, and continuously optimizes analytical solutions for operations and executive management to enable data-driven decision making. Provides support to business users for mining and interpretation of warehoused and operational data. Experience in analytics modelling/scripting tools such as Python, Hadoop, and SQL. Leads and reviews data analytics preparation and finalization, with the ability to develop and interpret the relevant business requirements. Ensures that data analytics assessments are accurate and completed on time per project milestones. Trains qualified teammates to perform the various data analytics activities. Manages relationships with project stakeholders, establishing mutual understanding and strategic direction for solutioning. Partners with key stakeholders on enhancement projects that improve process efficiency, documentation standards, and control effectiveness. Ability to communicate findings and recommendations to executive management in a concise and effective manner, leveraging MS PowerPoint.

Skills Required
Role: Senior Associate - Data Analytics
Industry Type: ITES/BPO/KPO
Functional Area: ITES/BPO/Customer Service
Required Education: Graduation
Employment Type: Full Time, Permanent
Key Skills: Hadoop, Power BI, Python, SQL

Other Information
Job Code: GO/JC/384/2025
Recruiter Name: Prernaraj
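The Python + SQL analytics skills this role lists can be illustrated with a minimal sketch; the table, columns, and figures below are invented for illustration and are not from the posting.

```python
import sqlite3

# Hypothetical example of mining warehoused data from Python via SQL.
# Table and column names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("South", 120.0), ("South", 80.0), ("North", 250.0)],
)

# Aggregate revenue per region, highest first -- a typical report-template query.
rows = conn.execute(
    "SELECT region, SUM(amount) AS total FROM orders "
    "GROUP BY region ORDER BY total DESC"
).fetchall()
print(rows)  # [('North', 250.0), ('South', 200.0)]
```

The same query pattern scales from SQLite to a warehouse engine; only the connection object changes.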

Posted 19 hours ago

Apply

0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Source: LinkedIn

Job Opportunity: Teachers Required for Java, Python, Mathematics & Physics
📅 Job Type: Part-Time / Full-Time (based on availability and expertise)

We are looking for dynamic, passionate, and knowledgeable educators to join our academic team and deliver high-quality instruction in the following subjects:

1. Java Teacher
- Strong foundation in core Java concepts (OOP, Collections, Exception Handling, Multithreading, etc.)
- Experience with GUI frameworks (such as JavaFX or Swing) is a plus.
- Ability to explain real-world applications and project-based learning.
- Teaching experience for academic or professional Java certification courses preferred.

2. Python Teacher
- Thorough knowledge of Python programming, including libraries like NumPy, pandas, and matplotlib.
- Understanding of real-world applications such as data science, automation, or web development (Django/Flask).
- Ability to simplify complex logic for beginners and advanced learners.
- Prior teaching or mentoring experience will be highly appreciated.

3. Mathematics Teacher
- Capable of teaching school-level (CBSE/ICSE/State), college-level (B.Sc., B.Tech), and competitive exam maths (IIT-JEE, Olympiads, etc.).
- Must be well-versed in algebra, calculus, statistics, geometry, number theory, and logic.
- Ability to handle concept-based and problem-solving-oriented teaching styles.
- A strong academic background (M.Sc./Ph.D. preferred).

4. Physics Teacher
- Sound knowledge of both theoretical and applied physics.
- Capable of teaching school boards, entrance exams (NEET, JEE), and undergraduate physics (mechanics, thermodynamics, electromagnetism, etc.).
- Experience in lab-based explanation and conceptual visualization is an added advantage.
- Passionate about simplifying complex scientific ideas.

✅ Who Can Apply?
- Teachers, college professors, subject matter experts, or professionals looking to teach.
- Individuals with excellent communication skills and a passion for teaching.
- Freshers with a strong academic background are also encouraged to apply.

📲 To Apply: Send your CV via WhatsApp to 8981679014. For any queries, feel free to call or WhatsApp 8981679014.

Help shape the future by teaching the minds of tomorrow!

Posted 19 hours ago

Apply

10.0 years

0 Lacs

Kolkata, West Bengal, India

Remote

Source: LinkedIn

**Job Description: Team Leader - Development** (www.caerusitconsulting.com)

Location: Kolkata, India (Full time)
Shift Timing: 10:30 to 19:30
Working Days: Monday to Friday (may occasionally need to work on Saturdays to meet deadlines)
Salary Range: 12 to 25 LPA
Interview Process: 2 technical rounds and 1 HR round
Ready to Hire Candidates from Outside Kolkata: Yes

**Position Overview:**
We are seeking a highly skilled and experienced Team Leader for Development to lead and manage our development teams. The ideal candidate will possess a strong technical background, exceptional leadership abilities, and a proven track record of successfully managing both local and remote teams. This role involves overseeing all aspects of software development, ensuring adherence to best practices, and fostering collaboration across teams to deliver high-quality solutions.

**Key Responsibilities:**

1. **Team Management:**
- Lead, mentor, and manage a team of developers, including local and remote members.
- Foster a collaborative and inclusive team environment to ensure high levels of engagement and productivity.
- Conduct regular performance evaluations and provide constructive feedback.

2. **Project Oversight:**
- Oversee the planning, execution, and delivery of software development projects.
- Collaborate with stakeholders to define project scope, objectives, and timelines.
- Monitor project progress and address any roadblocks to ensure timely delivery.

3. **Technical Leadership:**
- Create and review Technical and Functional Design Requirements documents.
- Ensure adherence to coding standards, best practices, and industry guidelines.
- Conduct code reviews to maintain high-quality standards and identify areas for improvement.

4. **Communication:**
- Facilitate effective 360-degree communication between team members, stakeholders, and leadership.
- Act as a liaison between technical teams and non-technical stakeholders to ensure alignment on project goals and requirements.

5. **Process Improvement:**
- Establish, implement, and continuously refine best practices and coding standards.
- Promote Agile methodologies and Scrum practices to optimize team workflows.
- Identify opportunities for process improvements and drive initiatives to enhance team efficiency.

6. **Requirements Management:**
- Collaborate with stakeholders to gather and scope project requirements.
- Translate business needs into actionable technical specifications.

7. **Additional Responsibilities:**
- Serve as a Scrum Master when necessary, facilitating Agile ceremonies and removing impediments.
- Stay up-to-date with emerging technologies and trends to drive innovation within the team.

**Qualifications and Skills:**
- At least 10 years of experience leading development teams in a fast-paced environment.
- Excellent technical knowledge and extensive hands-on experience with Core Java, Python, and NodeJS.
- Hands-on experience with AI/ML projects and familiarity with the OpenAI API preferred.
- Proven experience managing local and remote teams.
- Strong understanding of the software development lifecycle (SDLC) and Agile methodologies.
- Proficiency in creating and reviewing Technical and Functional Design Requirements documents.
- Excellent communication and interpersonal skills for effective 360-degree communication.
- Demonstrated ability to scope and manage project requirements effectively.
- Hands-on experience implementing best practices and coding standards.
- Scrum Master certification or experience is a strong plus.
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.

**What We Offer:**
- A dynamic and collaborative work environment.
- Opportunities for professional growth and development.
- Competitive salary and benefits package.
- The chance to work with cutting-edge technologies and drive impactful projects.

If you’re a results-oriented leader with a passion for driving technical excellence and team success, we’d love to hear from you. Apply today.

Posted 19 hours ago

Apply

0.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Source: Indeed

Greetings from Zebu Animation Studios! We are seeking a skilled and detail-oriented Python Developer to join our dynamic team.

Location: India
Employment Type: Full-Time

Why Join Us?
● Innovative Environment: Work on cutting-edge projects with a talented and motivated team.
● Growth Opportunities: Access to professional development programs, mentorship, and career advancement.
● Collaborative Culture: A supportive workplace that values knowledge sharing, creativity, and continuous learning.
● Competitive Compensation: Market-aligned salary and benefits package.

Job Type: Full-time
Pay: From ₹20,000.00 per month
Benefits: Provident Fund
Location Type: In-person
Schedule: Day shift, Monday to Friday
Ability to commute/relocate: Bangalore, Karnataka: Reliably commute or plan to relocate before starting work (Required)
Work Location: In person

Posted 19 hours ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Source: LinkedIn

At NiCE, we don’t limit our challenges. We challenge our limits. Always. We’re ambitious. We’re game changers. And we play to win. We set the highest standards and execute beyond them. And if you’re like us, we can offer you the ultimate career opportunity that will light a fire within you.

So, what’s the role all about?
This position will lead multiple R&D teams that are developing a portfolio of enterprise-grade and cloud-scale products. We are seeking a Senior Fraud Analyst to join our dynamic Fraud Analytics team. The ideal candidate will analyze financial institution data, assess fraud risks, and enhance detection strategies to combat fraudulent activities.

How will you make an impact?
- Analyze and validate financial institutions' data to identify potential fraud risk indicators
- Perform statistical analysis for fraud prevention products
- Assess real-time transactions, alerts, and fraud labels to identify potential fraud
- Identify fraud trends and patterns to enhance detection strategies
- Collaborate with product management and engineering teams to improve fraud controls
- Generate reports on fraud incidents, losses, and risk mitigation effectiveness
- Provide domain expertise and business consultancy for internal and external stakeholders
- Support sales opportunities

Have you got what it takes?
- Bachelor’s degree in Data Analytics, Industrial Engineering, Computer Science, or Finance
- 4-7 years of experience in fraud analysis, risk management, or financial crime investigation
- Strong business analysis skills with the ability to translate business requirements into product features
- Proficiency in statistical analysis using SQL or Python
- Strong written and verbal communication skills in English
- Ability to work independently, learn quickly, and solve problems effectively
- Solid presentation skills

Preferred qualifications:
- Prior experience with fraud prevention techniques
- Background in external consulting or professional services
- Master’s degree in a relevant field

What’s in it for you?
Join an ever-growing, market-disrupting, global company where the teams – comprised of the best of the best – work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NICE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NiCEr!

Enjoy NiCE-FLEX!
At NiCE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.

Requisition ID: 6782
Reporting into: Tech Manager
Role Type: Senior Analyst

About NiCE
NICE Ltd. (NASDAQ: NICE) software products are used by 25,000+ global businesses, including 85 of the Fortune 100 corporations, to deliver extraordinary customer experiences, fight financial crime, and ensure public safety. Every day, NiCE software manages more than 120 million customer interactions and monitors 3+ billion financial transactions. Known as an innovation powerhouse that excels in AI, cloud, and digital, NiCE is consistently recognized as the market leader in its domains, with over 8,500 employees across 30+ countries.

NiCE is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, age, sex, marital status, ancestry, neurotype, physical or mental disability, veteran status, gender identity, sexual orientation, or any other category protected by law.
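The "statistical analysis using SQL or Python" skill this role asks for can be sketched with a toy example. This is not NiCE's actual detection method; the z-score rule, threshold, and transaction amounts are all invented for illustration.

```python
from statistics import mean, stdev

# Toy fraud-analytics sketch: flag transactions whose amount deviates from
# the mean by more than `z_threshold` standard deviations. Real systems
# combine many such signals with labels, rules, and models.
def flag_outliers(amounts, z_threshold=2.0):
    mu, sigma = mean(amounts), stdev(amounts)
    return [a for a in amounts if sigma and abs(a - mu) / sigma > z_threshold]

txns = [25.0, 40.0, 32.0, 28.0, 35.0, 30.0, 990.0]
print(flag_outliers(txns))  # [990.0] -- the unusually large transaction
```

A simple z-score is brittle on skewed transaction data; in practice analysts would also look at robust statistics (median/MAD) or per-customer baselines.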

Posted 19 hours ago

Apply

14.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Department: Technology
Location: Pune

Description
Are you passionate about building test automation that accelerates product excellence? Do you believe that smart QA practices empower developers and elevate user experiences? Join Scan-IT as a Software Testing Manager!

We’re seeking a detail-oriented and forward-thinking Software Testing Manager to lead our QA efforts with a strong focus on test automation, especially using tools like Testim.io. This is a unique opportunity to scale a robust quality engineering culture across our global software teams.

We’re a technology company with global reach – active in 35+ countries across 3 continents. From Barcelona to Singapore, our digital solutions support the logistics networks that keep the world moving. Backed by a strong financial foundation and a culture built on trust, innovation, and opportunity, we offer the stability of a well-established business with the energy of a growing international tech team. Bring your leadership, strategy, and hands-on experience – and help us raise the bar for quality across all touchpoints.

What You'll Do…
- Own QA Strategy: Define and evolve the company-wide testing and QA automation strategy.
- Lead Automation Implementation: Drive the adoption and optimization of automation tools, especially Testim.io, across web and interface testing pipelines.
- Build and Mentor QA Teams: Grow and mentor a global team of 25+ QA engineers, instilling strong testing practices and a quality-first mindset.
- Ensure High Coverage: Define test plans and manage execution across integration, regression, and performance testing.
- Collaborate Cross-Functionally: Partner with DevOps, Engineering, and Product teams to ensure test coverage and quality gates are built into the CI/CD pipeline.
- Champion Tools & Standards: Promote scalable test frameworks, reusable components, and automated scripts.
- Monitor and Report: Analyze test metrics, identify gaps, and continuously improve QA processes.
- Documentation & Training: Maintain comprehensive documentation using tools like Document360 and deliver internal training on test methodologies and tooling.

What You’ll Need…
- Bachelor’s degree in Computer Science, Engineering, or a related field.
- 14+ years of professional experience in software quality assurance or engineering.
- 8+ years of experience leading QA teams or managing automation initiatives.
- Deep knowledge of automation tools; hands-on experience with Testim.io is required.
- Familiarity with scripting languages like JavaScript or Python for custom test scenarios.
- Understanding of testing strategies across APIs, microservices, and UI.
- Experience with CI/CD tools like Jenkins, GitHub Actions, or GitLab CI.
- Familiarity with Agile development and project management tools (e.g., JIRA, Confluence).
- Strong analytical mindset, problem-solving skills, and effective communication abilities.
- Experience with cloud platforms (AWS, Azure, or GCP) is a plus.

Here’s What We Offer…
At Scan-IT, we pride ourselves on our vibrant and supportive culture. Join our dynamic, international team and take on meaningful responsibilities from day one.
- Innovative Environment: Explore new technologies in the transportation and logistics industry.
- Collaborative Culture: Work with some of the industry’s best in an open and creative environment.
- Professional Growth: Benefit from continuous learning, mentorship, and career advancement.
- Impactful Work: Enhance efficiency and drive global success.
- Inclusive Workplace: Enjoy hybrid work opportunities and a supportive, diverse culture.
- Competitive Compensation: Receive a salary that reflects your expertise.
- Growth Opportunities: Achieve your full potential with ample professional and personal development opportunities.

Join Scan-IT and be part of a team that’s shaping the future of the transportation and logistics industry. Visit www.scan-it.com.sg and follow us on LinkedIn, Facebook and X.
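The "custom test scenarios" scripting this role mentions can be sketched with Python's standard `unittest` framework. The function under test (a tracking-number validator) and its format rule are invented for illustration, not taken from Scan-IT's systems.

```python
import unittest

# Hypothetical unit under test: a logistics tracking-number validator.
# Rule (invented): two letters followed by eight digits.
def is_valid_tracking_number(code: str) -> bool:
    return len(code) == 10 and code[:2].isalpha() and code[2:].isdigit()

class TrackingNumberTests(unittest.TestCase):
    def test_accepts_well_formed_code(self):
        self.assertTrue(is_valid_tracking_number("SG12345678"))

    def test_rejects_wrong_length_or_characters(self):
        self.assertFalse(is_valid_tracking_number("SG123"))
        self.assertFalse(is_valid_tracking_number("1212345678"))

# Run the suite programmatically (a CI job would use `python -m unittest`).
suite = unittest.TestLoader().loadTestsFromTestCase(TrackingNumberTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The same structure drops straight into Jenkins, GitHub Actions, or GitLab CI as a quality gate: a non-zero exit from the test runner fails the pipeline stage.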

Posted 19 hours ago

Apply

6.0 years

60 - 65 Lacs

Cuttack, Odisha, India

Remote

Source: LinkedIn

Experience: 6.00+ years
Salary: INR 6000000-6500000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full Time, Permanent position (Payroll and Compliance to be managed by: Crop.Photo)
(*Note: This is a requirement for one of Uplers' clients - Crop.Photo)

What do you need for this opportunity?
Must-have skills required: MAM, App integration

Crop.Photo is Looking for: Technical Lead for Evolphin AI-Driven MAM

At Evolphin, we build powerful media asset management solutions used by some of the world’s largest broadcasters, creative agencies, and global brands. Our flagship platform, Zoom, helps teams manage high-volume media workflows—from ingest to archive—with precision, performance, and AI-powered search. We’re now entering a major modernization phase, and we’re looking for an exceptional Technical Lead to own and drive the next-generation database layer powering Evolphin Zoom. This is a rare opportunity to take a critical backend system that serves high-throughput media operations and evolve it to meet the scale, speed, and intelligence today’s content teams demand.

What you’ll own
- Leading the re-architecture of Zoom’s database foundation with a focus on scalability, query performance, and vector-based search support
- Replacing or refactoring our current in-house object store and metadata database with a modern, high-performance elastic solution
- Collaborating closely with our core platform engineers and AI/search teams to ensure seamless integration and zero disruption to existing media workflows
- Designing an extensible system that supports object-style relationships across millions of assets, including LLM-generated digital asset summaries, time-coded video metadata, AI-generated tags, and semantic vectors
- Driving end-to-end implementation: schema design, migration tooling, performance benchmarking, and production rollout—all with aggressive timelines

Skills & Experience We Expect
We’re looking for candidates with 7–10 years of hands-on engineering experience, including 3+ years in a technical leadership role. Your experience should span the following core areas:

System Design & Architecture (3–4 yrs)
- Strong hands-on experience with the Java/JVM stack (GC tuning) and Python in production environments
- Led system-level design for scalable, modular AWS microservices architectures
- Designed high-throughput, low-latency media pipelines capable of scaling to billions of media records
- Familiar with multitenant SaaS patterns, service decomposition, and elastic scale-out/in models
- Deep understanding of infrastructure observability, failure handling, and graceful degradation

Database & Metadata Layer Design (3–5 yrs)
- Experience redesigning or implementing object-style metadata stores used in MAM/DAM systems
- Strong grasp of schema-less models for asset relationships, time-coded metadata, and versioned updates
- Practical experience with DynamoDB, Aurora, PostgreSQL, or similar high-scale databases
- Comfortable evaluating trade-offs between memory, query latency, and write throughput

Semantic Search & Vectors (1–3 yrs)
- Implemented vector search using systems like Weaviate, Pinecone, Qdrant, or Faiss
- Able to design hybrid (structured + semantic) search pipelines for similarity and natural language use cases
- Experience tuning vector indexers for performance, memory footprint, and recall
- Familiar with the basics of embedding generation pipelines and how they are used for semantic search and similarity-based retrieval
- Worked with MLOps teams to deploy ML inference services (e.g., FastAPI/Docker + GPU-based EC2 or SageMaker endpoints)
- Understands the limitations of recognition models (e.g., OCR, face/object detection, logo recognition), even if not directly building them

Media Asset Workflow (2–4 yrs)
- Deep familiarity with broadcast and OTT formats: MXF, IMF, DNxHD, ProRes, H.264, HEVC
- Understanding of proxy workflows in video post-production
- Experience with the digital asset lifecycle: ingest, AI metadata enrichment, media transformation, S3 cloud archiving
- Hands-on experience with time-coded metadata (e.g., subtitles, AI tags, shot changes) management in media archives

Cloud-Native Architecture (AWS) (3–5 yrs)
- Strong hands-on experience with ECS, Fargate, Lambda, S3, DynamoDB, Aurora, SQS, EventBridge
- Experience building serverless or service-based compute models for elastic scaling
- Familiarity with managing multi-region deployments, failover, and IAM configuration
- Built cloud-native CI/CD deployment pipelines with event-driven microservices and queue-based workflows

Frontend Collaboration & React App Integration (2–3 yrs)
- Worked closely with React-based frontend teams, especially on desktop-style web applications
- Familiar with component-based design systems, REST/GraphQL API integration, and optimizing media-heavy UI workflows
- Able to guide frontend teams on data modeling, caching, and efficient rendering of large asset libraries
- Experience with Electron for desktop apps

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
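The hybrid (structured + semantic) search pipeline this listing describes can be sketched in a few lines: filter assets on structured metadata first, then rank the survivors by embedding similarity. This is a toy illustration, not Evolphin's implementation; a production system would use an engine like Faiss or Qdrant, and the assets and vectors below are made up.

```python
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Tiny invented asset catalog: structured metadata plus a semantic vector.
assets = [
    {"id": "clip-1", "format": "ProRes", "vec": [0.9, 0.1, 0.0]},
    {"id": "clip-2", "format": "H.264",  "vec": [0.8, 0.2, 0.1]},
    {"id": "clip-3", "format": "ProRes", "vec": [0.0, 0.9, 0.4]},
]

def hybrid_search(query_vec, fmt, top_k=2):
    # Structured filter first, then semantic ranking of the survivors.
    candidates = [a for a in assets if a["format"] == fmt]
    candidates.sort(key=lambda a: cosine(a["vec"], query_vec), reverse=True)
    return [a["id"] for a in candidates[:top_k]]

print(hybrid_search([1.0, 0.0, 0.0], "ProRes"))  # ['clip-1', 'clip-3']
```

Filtering before ranking keeps the vector comparison set small; at the scale the listing mentions (millions of assets), the ranking step is what a dedicated vector index replaces.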

Posted 19 hours ago

Apply

6.0 years

60 - 65 Lacs

Bhubaneswar, Odisha, India

Remote

Source: LinkedIn

Experience: 6.00+ years
Salary: INR 6000000-6500000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full Time, Permanent position (Payroll and Compliance to be managed by: Crop.Photo)
(*Note: This is a requirement for one of Uplers' clients - Crop.Photo)

What do you need for this opportunity?
Must-have skills required: MAM, App integration

Crop.Photo is Looking for: Technical Lead for Evolphin AI-Driven MAM

At Evolphin, we build powerful media asset management solutions used by some of the world’s largest broadcasters, creative agencies, and global brands. Our flagship platform, Zoom, helps teams manage high-volume media workflows—from ingest to archive—with precision, performance, and AI-powered search. We’re now entering a major modernization phase, and we’re looking for an exceptional Technical Lead to own and drive the next-generation database layer powering Evolphin Zoom. This is a rare opportunity to take a critical backend system that serves high-throughput media operations and evolve it to meet the scale, speed, and intelligence today’s content teams demand.

What you’ll own
- Leading the re-architecture of Zoom’s database foundation with a focus on scalability, query performance, and vector-based search support
- Replacing or refactoring our current in-house object store and metadata database with a modern, high-performance elastic solution
- Collaborating closely with our core platform engineers and AI/search teams to ensure seamless integration and zero disruption to existing media workflows
- Designing an extensible system that supports object-style relationships across millions of assets, including LLM-generated digital asset summaries, time-coded video metadata, AI-generated tags, and semantic vectors
- Driving end-to-end implementation: schema design, migration tooling, performance benchmarking, and production rollout—all with aggressive timelines

Skills & Experience We Expect
We’re looking for candidates with 7–10 years of hands-on engineering experience, including 3+ years in a technical leadership role. Your experience should span the following core areas:

System Design & Architecture (3–4 yrs)
- Strong hands-on experience with the Java/JVM stack (GC tuning) and Python in production environments
- Led system-level design for scalable, modular AWS microservices architectures
- Designed high-throughput, low-latency media pipelines capable of scaling to billions of media records
- Familiar with multitenant SaaS patterns, service decomposition, and elastic scale-out/in models
- Deep understanding of infrastructure observability, failure handling, and graceful degradation

Database & Metadata Layer Design (3–5 yrs)
- Experience redesigning or implementing object-style metadata stores used in MAM/DAM systems
- Strong grasp of schema-less models for asset relationships, time-coded metadata, and versioned updates
- Practical experience with DynamoDB, Aurora, PostgreSQL, or similar high-scale databases
- Comfortable evaluating trade-offs between memory, query latency, and write throughput

Semantic Search & Vectors (1–3 yrs)
- Implemented vector search using systems like Weaviate, Pinecone, Qdrant, or Faiss
- Able to design hybrid (structured + semantic) search pipelines for similarity and natural language use cases
- Experience tuning vector indexers for performance, memory footprint, and recall
- Familiar with the basics of embedding generation pipelines and how they are used for semantic search and similarity-based retrieval
- Worked with MLOps teams to deploy ML inference services (e.g., FastAPI/Docker + GPU-based EC2 or SageMaker endpoints)
- Understands the limitations of recognition models (e.g., OCR, face/object detection, logo recognition), even if not directly building them

Media Asset Workflow (2–4 yrs)
- Deep familiarity with broadcast and OTT formats: MXF, IMF, DNxHD, ProRes, H.264, HEVC
- Understanding of proxy workflows in video post-production
- Experience with the digital asset lifecycle: ingest, AI metadata enrichment, media transformation, S3 cloud archiving
- Hands-on experience with time-coded metadata (e.g., subtitles, AI tags, shot changes) management in media archives

Cloud-Native Architecture (AWS) (3–5 yrs)
- Strong hands-on experience with ECS, Fargate, Lambda, S3, DynamoDB, Aurora, SQS, EventBridge
- Experience building serverless or service-based compute models for elastic scaling
- Familiarity with managing multi-region deployments, failover, and IAM configuration
- Built cloud-native CI/CD deployment pipelines with event-driven microservices and queue-based workflows

Frontend Collaboration & React App Integration (2–3 yrs)
- Worked closely with React-based frontend teams, especially on desktop-style web applications
- Familiar with component-based design systems, REST/GraphQL API integration, and optimizing media-heavy UI workflows
- Able to guide frontend teams on data modeling, caching, and efficient rendering of large asset libraries
- Experience with Electron for desktop apps

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 19 hours ago

Apply

0.0 years

0 Lacs

Mohali, Punjab

On-site

Source: Indeed

Job Title: Full Stack Developer
Location: Mohali, Punjab

Overview
We are seeking a skilled Full Stack Developer to join our dynamic team. The ideal candidate will have a strong foundation in both front-end and back-end technologies, with a passion for building scalable and efficient web applications. This role offers the opportunity to work on diverse projects and contribute to the development of innovative solutions.

Key Responsibilities
- Front-End Development: Design and implement user-friendly interfaces using HTML, CSS, JavaScript, and modern frameworks like React, Angular, or Vue.js.
- Back-End Development: Develop and maintain server-side logic, databases, and APIs using languages such as Node.js, Python, Java, or Ruby.
- Database Management: Design and manage relational and NoSQL databases like MySQL, PostgreSQL, or MongoDB, ensuring data integrity and performance.
- API Integration: Build and integrate RESTful APIs to enable seamless communication between front-end and back-end systems.
- Version Control: Utilize Git for version control, ensuring collaborative and efficient code management.
- Testing and Debugging: Conduct thorough testing and debugging to ensure application functionality and performance.
- Deployment and Maintenance: Oversee the deployment process and provide ongoing maintenance and updates to applications.
- Collaboration: Work closely with cross-functional teams, including designers, product managers, and other developers, to deliver high-quality software solutions.

Qualifications
- Education: Bachelor's degree in Computer Science, Information Technology, or a related field.
- Experience: Proven experience as a Full Stack Developer or in a similar role, with a strong portfolio of web applications.
- Technical Skills:
  - Proficiency in front-end technologies: HTML, CSS, JavaScript, and frameworks like React, Angular, or Vue.js.
  - Strong back-end development skills with Node.js, Python, Java, or Ruby.
  - Experience with database management systems: MySQL, PostgreSQL, MongoDB.
  - Familiarity with version control systems, particularly Git.
  - Understanding of RESTful API design and integration.
- Soft Skills: Excellent problem-solving abilities; strong communication and collaboration skills; ability to work independently and as part of a team; attention to detail and a commitment to delivering high-quality code.

Preferred Qualifications
- Experience with cloud platforms such as AWS, Azure, or Google Cloud.
- Familiarity with CI/CD pipelines and DevOps practices.
- Knowledge of containerization technologies like Docker and Kubernetes.
- Understanding of Agile development methodologies.

Work from Office.
Job Type: Full-time
Pay: ₹20,000.00 - ₹40,000.00 per month
Benefits: Paid sick time
Schedule: Day shift
Supplemental Pay: Performance bonus
Work Location: In person
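The RESTful API design skill this role lists can be sketched framework-free: a method + path is dispatched to a handler that returns a status code and a JSON body. In practice a framework like Flask or Express handles the routing; the resource, field names, and single route here are invented for illustration.

```python
import json

# Invented in-memory resource for the sketch.
USERS = {1: {"id": 1, "name": "Asha"}}

def get_user(user_id):
    # REST convention: 200 with the resource, or 404 if it doesn't exist.
    user = USERS.get(user_id)
    status = 200 if user else 404
    return status, json.dumps(user if user else {"error": "not found"})

def handle(method, path):
    # Minimal dispatch for GET /users/<id>; one pattern only, for brevity.
    parts = path.strip("/").split("/")
    if method == "GET" and len(parts) == 2 and parts[0] == "users" and parts[1].isdigit():
        return get_user(int(parts[1]))
    return 405, json.dumps({"error": "unsupported route"})

print(handle("GET", "/users/1"))  # (200, '{"id": 1, "name": "Asha"}')
```

The status-code and resource-path conventions are the portable part; swapping this dispatcher for a framework's router changes none of the handler logic.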

Posted 19 hours ago

Apply

1.0 - 3.0 years

2 - 4 Lacs

Coimbatore

Work from Office

Naukri logo

We are hiring a Quality Engineer with experience in SDLC, STLC, and Manual Testing. The role requires flexibility to work as both a QE and an SE.
Participate actively in an Agile and Scrum team.
Collaborate with developers to identify and resolve issues early.

Required Candidate Profile
Minimum 1 to 3 years of experience in Manual Testing.
Exposure to Agile methodology and Scrum.
Exposure to Java and Python is a plus.
Familiarity with file formats such as XML, CSV, and X12.
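The file-format familiarity mentioned above often comes down to cross-checking the same data across exports. A minimal sketch using only the Python standard library, assuming invented sample records and field names (X12/EDI parsing typically needs a dedicated library and is not shown):

```python
# Cross-check a CSV export against an XML export of the same test results.
# Sample data and field names are invented for illustration.
import csv
import io
import xml.etree.ElementTree as ET

csv_text = "id,status\n1,PASS\n2,FAIL\n"
rows = list(csv.DictReader(io.StringIO(csv_text)))

xml_text = "<results><case id='1'>PASS</case><case id='2'>FAIL</case></results>"
root = ET.fromstring(xml_text)
statuses = {c.get("id"): c.text for c in root.iter("case")}

# Flag any case where the two exports disagree, as a tester might.
mismatches = [r["id"] for r in rows if statuses.get(r["id"]) != r["status"]]
print(mismatches)  # empty list: the exports agree
```

The same pattern extends to comparing a vendor feed against a database extract during regression testing.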

Posted 19 hours ago

Apply

6.0 years

60 - 65 Lacs

Raipur, Chhattisgarh, India

Remote

Linkedin logo

Experience: 6.00+ years
Salary: INR 6,000,000 - 6,500,000 per year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by Crop.Photo)
(Note: This is a requirement for one of Uplers' clients - Crop.Photo)

What do you need for this opportunity?
Must-have skills: MAM, app integration

Crop.Photo is looking for: Technical Lead for Evolphin AI-Driven MAM

At Evolphin, we build powerful media asset management solutions used by some of the world's largest broadcasters, creative agencies, and global brands. Our flagship platform, Zoom, helps teams manage high-volume media workflows, from ingest to archive, with precision, performance, and AI-powered search. We are now entering a major modernization phase and are looking for an exceptional Technical Lead to own and drive the next-generation database layer powering Evolphin Zoom. This is a rare opportunity to take a critical backend system that serves high-throughput media operations and evolve it to meet the scale, speed, and intelligence today's content teams demand.

What you'll own
Leading the re-architecture of Zoom's database foundation with a focus on scalability, query performance, and vector-based search support
Replacing or refactoring the current in-house object store and metadata database with a modern, high-performance elastic solution
Collaborating closely with core platform engineers and AI/search teams to ensure seamless integration and zero disruption to existing media workflows
Designing an extensible system that supports object-style relationships across millions of assets, including LLM-generated digital asset summaries, time-coded video metadata, AI-generated tags, and semantic vectors
Driving end-to-end implementation: schema design, migration tooling, performance benchmarking, and production rollout, all on aggressive timelines

Skills & Experience We Expect
We are looking for candidates with 7-10 years of hands-on engineering experience, including 3+ years in a technical leadership role. Your experience should span the following core areas:

System Design & Architecture (3-4 yrs)
Strong hands-on experience with the Java/JVM stack (including GC tuning) and with Python in production environments
Led system-level design for scalable, modular AWS microservices architectures
Designed high-throughput, low-latency media pipelines capable of scaling to billions of media records
Familiar with multitenant SaaS patterns, service decomposition, and elastic scale-out/in models
Deep understanding of infrastructure observability, failure handling, and graceful degradation

Database & Metadata Layer Design (3-5 yrs)
Experience redesigning or implementing object-style metadata stores used in MAM/DAM systems
Strong grasp of schema-less models for asset relationships, time-coded metadata, and versioned updates
Practical experience with DynamoDB, Aurora, PostgreSQL, or similar high-scale databases
Comfortable evaluating trade-offs between memory, query latency, and write throughput

Semantic Search & Vectors (1-3 yrs)
Implemented vector search using systems such as Weaviate, Pinecone, Qdrant, or Faiss
Able to design hybrid (structured + semantic) search pipelines for similarity and natural-language use cases
Experience tuning vector indexes for performance, memory footprint, and recall
Familiar with the basics of embedding-generation pipelines and how they are used for semantic search and similarity-based retrieval
Worked with MLOps teams to deploy ML inference services (e.g., FastAPI/Docker on GPU-based EC2 or SageMaker endpoints)
Understands the limitations of recognition models (e.g., OCR, face/object detection, logo recognition), even if not directly building them

Media Asset Workflow (2-4 yrs)
Deep familiarity with broadcast and OTT formats: MXF, IMF, DNxHD, ProRes, H.264, HEVC
Understanding of proxy workflows in video post-production
Experience with the digital asset lifecycle: ingest, AI metadata enrichment, media transformation, S3 cloud archiving
Hands-on experience managing time-coded metadata (e.g., subtitles, AI tags, shot changes) in media archives

Cloud-Native Architecture (AWS) (3-5 yrs)
Strong hands-on experience with ECS, Fargate, Lambda, S3, DynamoDB, Aurora, SQS, and EventBridge
Experience building serverless or service-based compute models for elastic scaling
Familiarity with managing multi-region deployments, failover, and IAM configuration
Built cloud-native CI/CD deployment pipelines with event-driven microservices and queue-based workflows

Frontend Collaboration & React App Integration (2-3 yrs)
Worked closely with React-based frontend teams, especially on desktop-style web applications
Familiar with component-based design systems, REST/GraphQL API integration, and optimizing media-heavy UI workflows
Able to guide frontend teams on data modeling, caching, and efficient rendering of large asset libraries
Experience with Electron for desktop apps

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of getting shortlisted and of meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
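The hybrid (structured + semantic) search pipeline the posting describes can be sketched in a few lines: filter assets on structured metadata first, then rank only the survivors by vector similarity. A minimal pure-Python sketch; the asset records, embedding vectors, and "format" field are invented for illustration, and a real system would use a vector index such as Faiss rather than a linear scan:

```python
# Hybrid search sketch: cheap structured filter, then semantic ranking.
# All data below is invented; similarity here is a plain inner product.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

assets = [
    {"id": "a1", "format": "ProRes", "vec": [0.9, 0.1]},
    {"id": "a2", "format": "H.264", "vec": [0.2, 0.9]},
    {"id": "a3", "format": "ProRes", "vec": [0.3, 0.7]},
]
query_vec = [0.25, 0.8]  # stand-in for an embedded search phrase

# Structured stage: narrow by exact metadata match (cheap, indexable).
candidates = [a for a in assets if a["format"] == "ProRes"]
# Semantic stage: rank only the candidates by vector similarity.
best = max(candidates, key=lambda a: dot(a["vec"], query_vec))
print(best["id"])  # a3: the ProRes asset closest to the query vector
```

Filtering before ranking keeps the expensive similarity computation confined to a small candidate set, which is the usual trade-off in such pipelines.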

Posted 19 hours ago

Apply

7.0 - 12.0 years

9 - 14 Lacs

Mumbai

Work from Office

Naukri logo

We are seeking a highly skilled Senior Snowflake Developer with expertise in Python, SQL, and ETL tools to join our dynamic team. The ideal candidate will have a proven track record of designing and implementing robust data solutions on the Snowflake platform, along with strong programming skills and experience with ETL processes.

Key Responsibilities:
Designing and developing scalable data solutions on the Snowflake platform to support business needs and analytics requirements.
Leading the end-to-end development lifecycle of data pipelines, including data ingestion, transformation, and loading processes.
Writing efficient SQL queries and stored procedures to perform complex data manipulations and transformations within Snowflake.
Implementing automation scripts and tools in Python to streamline data workflows and improve efficiency.
Collaborating with cross-functional teams to gather requirements, design data models, and deliver high-quality solutions.
Tuning and optimizing Snowflake databases and queries to ensure optimal performance and scalability.
Implementing best practices for data governance, security, and compliance within Snowflake environments.
Mentoring junior team members and providing technical guidance and support as needed.

Qualifications:
Bachelor's degree in Computer Science, Engineering, or a related field.
7+ years of experience working with the Snowflake data warehouse.
Strong proficiency in SQL, with the ability to write complex queries and optimize performance.
Extensive experience developing data pipelines and ETL processes using Python and ETL tools such as Apache Airflow, Informatica, or Talend.
At least 2 years of strong Python coding experience.
Solid understanding of data warehousing concepts, data modeling, and schema design.
Experience working with cloud platforms such as AWS, Azure, or GCP.
Excellent problem-solving and analytical skills with keen attention to detail.
Strong communication and collaboration skills, with the ability to work effectively in a team environment.
Any relevant certifications in Snowflake or related technologies would be a plus.
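The ingestion-transformation-loading lifecycle described above has the same shape regardless of warehouse. A minimal sketch using SQLite as a stand-in, since a live Snowflake connection (via snowflake-connector-python) is assumed unavailable here; the staging table, column names, and dedup rule are invented:

```python
# Ingest raw rows into a staging table, transform (deduplicate),
# and load into a target table. SQLite stands in for Snowflake;
# the data and schema are invented for this sketch.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Ingest: land the raw feed, duplicates and all, in staging.
cur.execute("CREATE TABLE staging_orders (order_id INTEGER, amount REAL)")
cur.executemany(
    "INSERT INTO staging_orders VALUES (?, ?)",
    [(1, 100.0), (1, 100.0), (2, 250.5)],  # note the duplicate row
)

# Transform and load: deduplicate while writing the target table,
# mirroring the kind of SQL a warehouse pipeline would run.
cur.execute(
    "CREATE TABLE orders AS "
    "SELECT order_id, MAX(amount) AS amount "
    "FROM staging_orders GROUP BY order_id"
)
total = cur.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
print(total)  # 350.5
```

In Snowflake itself the transform step would more likely be a MERGE or a stored procedure, but the staging-then-dedupe pattern carries over directly.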

Posted 19 hours ago

Apply

6.0 years

60 - 65 Lacs

Guwahati, Assam, India

Remote

Linkedin logo

Experience: 6.00+ years
Salary: INR 6,000,000 - 6,500,000 per year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by Crop.Photo)
(Note: This is a requirement for one of Uplers' clients - Crop.Photo)

What do you need for this opportunity?
Must-have skills: MAM, app integration

Crop.Photo is looking for: Technical Lead for Evolphin AI-Driven MAM

At Evolphin, we build powerful media asset management solutions used by some of the world's largest broadcasters, creative agencies, and global brands. Our flagship platform, Zoom, helps teams manage high-volume media workflows, from ingest to archive, with precision, performance, and AI-powered search. We are now entering a major modernization phase and are looking for an exceptional Technical Lead to own and drive the next-generation database layer powering Evolphin Zoom. This is a rare opportunity to take a critical backend system that serves high-throughput media operations and evolve it to meet the scale, speed, and intelligence today's content teams demand.

What you'll own
Leading the re-architecture of Zoom's database foundation with a focus on scalability, query performance, and vector-based search support
Replacing or refactoring the current in-house object store and metadata database with a modern, high-performance elastic solution
Collaborating closely with core platform engineers and AI/search teams to ensure seamless integration and zero disruption to existing media workflows
Designing an extensible system that supports object-style relationships across millions of assets, including LLM-generated digital asset summaries, time-coded video metadata, AI-generated tags, and semantic vectors
Driving end-to-end implementation: schema design, migration tooling, performance benchmarking, and production rollout, all on aggressive timelines

Skills & Experience We Expect
We are looking for candidates with 7-10 years of hands-on engineering experience, including 3+ years in a technical leadership role. Your experience should span the following core areas:

System Design & Architecture (3-4 yrs)
Strong hands-on experience with the Java/JVM stack (including GC tuning) and with Python in production environments
Led system-level design for scalable, modular AWS microservices architectures
Designed high-throughput, low-latency media pipelines capable of scaling to billions of media records
Familiar with multitenant SaaS patterns, service decomposition, and elastic scale-out/in models
Deep understanding of infrastructure observability, failure handling, and graceful degradation

Database & Metadata Layer Design (3-5 yrs)
Experience redesigning or implementing object-style metadata stores used in MAM/DAM systems
Strong grasp of schema-less models for asset relationships, time-coded metadata, and versioned updates
Practical experience with DynamoDB, Aurora, PostgreSQL, or similar high-scale databases
Comfortable evaluating trade-offs between memory, query latency, and write throughput

Semantic Search & Vectors (1-3 yrs)
Implemented vector search using systems such as Weaviate, Pinecone, Qdrant, or Faiss
Able to design hybrid (structured + semantic) search pipelines for similarity and natural-language use cases
Experience tuning vector indexes for performance, memory footprint, and recall
Familiar with the basics of embedding-generation pipelines and how they are used for semantic search and similarity-based retrieval
Worked with MLOps teams to deploy ML inference services (e.g., FastAPI/Docker on GPU-based EC2 or SageMaker endpoints)
Understands the limitations of recognition models (e.g., OCR, face/object detection, logo recognition), even if not directly building them

Media Asset Workflow (2-4 yrs)
Deep familiarity with broadcast and OTT formats: MXF, IMF, DNxHD, ProRes, H.264, HEVC
Understanding of proxy workflows in video post-production
Experience with the digital asset lifecycle: ingest, AI metadata enrichment, media transformation, S3 cloud archiving
Hands-on experience managing time-coded metadata (e.g., subtitles, AI tags, shot changes) in media archives

Cloud-Native Architecture (AWS) (3-5 yrs)
Strong hands-on experience with ECS, Fargate, Lambda, S3, DynamoDB, Aurora, SQS, and EventBridge
Experience building serverless or service-based compute models for elastic scaling
Familiarity with managing multi-region deployments, failover, and IAM configuration
Built cloud-native CI/CD deployment pipelines with event-driven microservices and queue-based workflows

Frontend Collaboration & React App Integration (2-3 yrs)
Worked closely with React-based frontend teams, especially on desktop-style web applications
Familiar with component-based design systems, REST/GraphQL API integration, and optimizing media-heavy UI workflows
Able to guide frontend teams on data modeling, caching, and efficient rendering of large asset libraries
Experience with Electron for desktop apps

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of getting shortlisted and of meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 19 hours ago

Apply

6.0 years

60 - 65 Lacs

Jamshedpur, Jharkhand, India

Remote

Linkedin logo

Experience: 6.00+ years
Salary: INR 6,000,000 - 6,500,000 per year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by Crop.Photo)
(Note: This is a requirement for one of Uplers' clients - Crop.Photo)

What do you need for this opportunity?
Must-have skills: MAM, app integration

Crop.Photo is looking for: Technical Lead for Evolphin AI-Driven MAM

At Evolphin, we build powerful media asset management solutions used by some of the world's largest broadcasters, creative agencies, and global brands. Our flagship platform, Zoom, helps teams manage high-volume media workflows, from ingest to archive, with precision, performance, and AI-powered search. We are now entering a major modernization phase and are looking for an exceptional Technical Lead to own and drive the next-generation database layer powering Evolphin Zoom. This is a rare opportunity to take a critical backend system that serves high-throughput media operations and evolve it to meet the scale, speed, and intelligence today's content teams demand.

What you'll own
Leading the re-architecture of Zoom's database foundation with a focus on scalability, query performance, and vector-based search support
Replacing or refactoring the current in-house object store and metadata database with a modern, high-performance elastic solution
Collaborating closely with core platform engineers and AI/search teams to ensure seamless integration and zero disruption to existing media workflows
Designing an extensible system that supports object-style relationships across millions of assets, including LLM-generated digital asset summaries, time-coded video metadata, AI-generated tags, and semantic vectors
Driving end-to-end implementation: schema design, migration tooling, performance benchmarking, and production rollout, all on aggressive timelines

Skills & Experience We Expect
We are looking for candidates with 7-10 years of hands-on engineering experience, including 3+ years in a technical leadership role. Your experience should span the following core areas:

System Design & Architecture (3-4 yrs)
Strong hands-on experience with the Java/JVM stack (including GC tuning) and with Python in production environments
Led system-level design for scalable, modular AWS microservices architectures
Designed high-throughput, low-latency media pipelines capable of scaling to billions of media records
Familiar with multitenant SaaS patterns, service decomposition, and elastic scale-out/in models
Deep understanding of infrastructure observability, failure handling, and graceful degradation

Database & Metadata Layer Design (3-5 yrs)
Experience redesigning or implementing object-style metadata stores used in MAM/DAM systems
Strong grasp of schema-less models for asset relationships, time-coded metadata, and versioned updates
Practical experience with DynamoDB, Aurora, PostgreSQL, or similar high-scale databases
Comfortable evaluating trade-offs between memory, query latency, and write throughput

Semantic Search & Vectors (1-3 yrs)
Implemented vector search using systems such as Weaviate, Pinecone, Qdrant, or Faiss
Able to design hybrid (structured + semantic) search pipelines for similarity and natural-language use cases
Experience tuning vector indexes for performance, memory footprint, and recall
Familiar with the basics of embedding-generation pipelines and how they are used for semantic search and similarity-based retrieval
Worked with MLOps teams to deploy ML inference services (e.g., FastAPI/Docker on GPU-based EC2 or SageMaker endpoints)
Understands the limitations of recognition models (e.g., OCR, face/object detection, logo recognition), even if not directly building them

Media Asset Workflow (2-4 yrs)
Deep familiarity with broadcast and OTT formats: MXF, IMF, DNxHD, ProRes, H.264, HEVC
Understanding of proxy workflows in video post-production
Experience with the digital asset lifecycle: ingest, AI metadata enrichment, media transformation, S3 cloud archiving
Hands-on experience managing time-coded metadata (e.g., subtitles, AI tags, shot changes) in media archives

Cloud-Native Architecture (AWS) (3-5 yrs)
Strong hands-on experience with ECS, Fargate, Lambda, S3, DynamoDB, Aurora, SQS, and EventBridge
Experience building serverless or service-based compute models for elastic scaling
Familiarity with managing multi-region deployments, failover, and IAM configuration
Built cloud-native CI/CD deployment pipelines with event-driven microservices and queue-based workflows

Frontend Collaboration & React App Integration (2-3 yrs)
Worked closely with React-based frontend teams, especially on desktop-style web applications
Familiar with component-based design systems, REST/GraphQL API integration, and optimizing media-heavy UI workflows
Able to guide frontend teams on data modeling, caching, and efficient rendering of large asset libraries
Experience with Electron for desktop apps

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of getting shortlisted and of meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 19 hours ago

Apply

6.0 years

60 - 65 Lacs

Ranchi, Jharkhand, India

Remote

Linkedin logo

Experience: 6.00+ years
Salary: INR 6,000,000 - 6,500,000 per year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by Crop.Photo)
(Note: This is a requirement for one of Uplers' clients - Crop.Photo)

What do you need for this opportunity?
Must-have skills: MAM, app integration

Crop.Photo is looking for: Technical Lead for Evolphin AI-Driven MAM

At Evolphin, we build powerful media asset management solutions used by some of the world's largest broadcasters, creative agencies, and global brands. Our flagship platform, Zoom, helps teams manage high-volume media workflows, from ingest to archive, with precision, performance, and AI-powered search. We are now entering a major modernization phase and are looking for an exceptional Technical Lead to own and drive the next-generation database layer powering Evolphin Zoom. This is a rare opportunity to take a critical backend system that serves high-throughput media operations and evolve it to meet the scale, speed, and intelligence today's content teams demand.

What you'll own
Leading the re-architecture of Zoom's database foundation with a focus on scalability, query performance, and vector-based search support
Replacing or refactoring the current in-house object store and metadata database with a modern, high-performance elastic solution
Collaborating closely with core platform engineers and AI/search teams to ensure seamless integration and zero disruption to existing media workflows
Designing an extensible system that supports object-style relationships across millions of assets, including LLM-generated digital asset summaries, time-coded video metadata, AI-generated tags, and semantic vectors
Driving end-to-end implementation: schema design, migration tooling, performance benchmarking, and production rollout, all on aggressive timelines

Skills & Experience We Expect
We are looking for candidates with 7-10 years of hands-on engineering experience, including 3+ years in a technical leadership role. Your experience should span the following core areas:

System Design & Architecture (3-4 yrs)
Strong hands-on experience with the Java/JVM stack (including GC tuning) and with Python in production environments
Led system-level design for scalable, modular AWS microservices architectures
Designed high-throughput, low-latency media pipelines capable of scaling to billions of media records
Familiar with multitenant SaaS patterns, service decomposition, and elastic scale-out/in models
Deep understanding of infrastructure observability, failure handling, and graceful degradation

Database & Metadata Layer Design (3-5 yrs)
Experience redesigning or implementing object-style metadata stores used in MAM/DAM systems
Strong grasp of schema-less models for asset relationships, time-coded metadata, and versioned updates
Practical experience with DynamoDB, Aurora, PostgreSQL, or similar high-scale databases
Comfortable evaluating trade-offs between memory, query latency, and write throughput

Semantic Search & Vectors (1-3 yrs)
Implemented vector search using systems such as Weaviate, Pinecone, Qdrant, or Faiss
Able to design hybrid (structured + semantic) search pipelines for similarity and natural-language use cases
Experience tuning vector indexes for performance, memory footprint, and recall
Familiar with the basics of embedding-generation pipelines and how they are used for semantic search and similarity-based retrieval
Worked with MLOps teams to deploy ML inference services (e.g., FastAPI/Docker on GPU-based EC2 or SageMaker endpoints)
Understands the limitations of recognition models (e.g., OCR, face/object detection, logo recognition), even if not directly building them

Media Asset Workflow (2-4 yrs)
Deep familiarity with broadcast and OTT formats: MXF, IMF, DNxHD, ProRes, H.264, HEVC
Understanding of proxy workflows in video post-production
Experience with the digital asset lifecycle: ingest, AI metadata enrichment, media transformation, S3 cloud archiving
Hands-on experience managing time-coded metadata (e.g., subtitles, AI tags, shot changes) in media archives

Cloud-Native Architecture (AWS) (3-5 yrs)
Strong hands-on experience with ECS, Fargate, Lambda, S3, DynamoDB, Aurora, SQS, and EventBridge
Experience building serverless or service-based compute models for elastic scaling
Familiarity with managing multi-region deployments, failover, and IAM configuration
Built cloud-native CI/CD deployment pipelines with event-driven microservices and queue-based workflows

Frontend Collaboration & React App Integration (2-3 yrs)
Worked closely with React-based frontend teams, especially on desktop-style web applications
Familiar with component-based design systems, REST/GraphQL API integration, and optimizing media-heavy UI workflows
Able to guide frontend teams on data modeling, caching, and efficient rendering of large asset libraries
Experience with Electron for desktop apps

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of getting shortlisted and of meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 19 hours ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

Remote

Linkedin logo

About The Opportunity
We operate at the forefront of India's Artificial Intelligence & Enterprise Software Solutions sector, building production-grade, large-language-model (LLM) applications that power real-time search, recommendation, and decision-support systems for Fortune 500 clients. Our fully remote engineering pods in Mumbai and Pune transform cutting-edge GenAI research into scalable business value while nurturing a culture of ownership, learning, and rapid iteration.

Role & Responsibilities
Design and ship GenAI products that fuse Retrieval-Augmented Generation (RAG) with LangChain/LangGraph pipelines for chatbots, semantic search, and agentic workflows.
Implement vector-based retrieval by orchestrating FAISS-backed indexes, chunking strategies, and prompt-engineering playbooks that boost LLM precision and recall.
Prototype and harden ML models (classification, regression, clustering) in Scikit-learn or PyTorch, then productionise via micro-checkpointing (MCP) and CI/CD.
Instrument agentic behaviours that call external tools/APIs, manage memory, and evaluate reasoning traces for safety and ROI.
Collaborate cross-functionally with product, design, and MLOps to translate business stories into measurable AI metrics and A/B experiments.
Author technical docs and knowledge shares to uplevel team expertise in GenAI best practices and responsible-AI compliance.

Skills & Qualifications
Must-Have
3-7 years of hands-on experience building LLM-powered applications with LangChain and/or LangGraph.
Proven mastery of FAISS (or Pinecone/Weaviate) for vector search, plus a solid understanding of embeddings and cosine-similarity maths.
Strong foundation in machine-learning algorithms (classification, regression, and model evaluation), with production code in Scikit-learn or equivalent.
Ability to craft, debug, and optimise prompt-engineering and chunking strategies that minimise token cost while maximising answer quality.
Fluency in Python; familiarity with software-engineering best practices (Git, unit tests, Docker, MCP-style model checkpoints).
Excellent written and verbal communication skills to explain complex GenAI concepts to technical and non-technical stakeholders.

Preferred
Experience designing agentic frameworks (tool-calling, planning-and-execution loops, reflection) for autonomous task chains.
Prior contributions to open-source GenAI libraries or research publications.
Exposure to data-pipeline tooling such as Airflow, Spark, or cloud-agnostic serverless runtimes.

Skills: GenAI, LangChain, LLM, LangGraph, FAISS, MCP, Agentic, Machine Learning, Classification, Regression, Scikit-learn
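The cosine-similarity maths behind FAISS-style retrieval is compact enough to sketch in pure Python. The toy "embeddings" below are invented; a real pipeline would produce them with an embedding model and serve them from a FAISS index (e.g. an inner-product index over normalised vectors) rather than a linear scan:

```python
# Rank documents by cosine similarity to a query vector: the core
# retrieval step of a RAG pipeline. All vectors here are invented.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

docs = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.2],
    "api limits": [0.0, 0.2, 0.9],
}
query = [0.85, 0.15, 0.05]  # stand-in for an embedded user question

# The most similar document is the one fed to the LLM as context.
ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
print(ranked[0])  # "refund policy": closest to the query vector
```

On unit-normalised vectors cosine similarity reduces to a plain dot product, which is why inner-product indexes are the common choice for this workload.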

Posted 19 hours ago

Apply

6.0 years

60 - 65 Lacs

Amritsar, Punjab, India

Remote

Linkedin logo

Experience : 6.00 + years Salary : INR 6000000-6500000 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: Crop.Photo) (*Note: This is a requirement for one of Uplers' client - Crop.Photo) What do you need for this opportunity? Must have skills required: MAM, App integration Crop.Photo is Looking for: Technical Lead for Evolphin AI-Driven MAM At Evolphin, we build powerful media asset management solutions used by some of the world’s largest broadcasters, creative agencies, and global brands. Our flagship platform, Zoom, helps teams manage high-volume media workflows—from ingest to archive—with precision, performance, and AI-powered search. We’re now entering a major modernization phase, and we’re looking for an exceptional Technical Lead to own and drive the next-generation database layer powering Evolphin Zoom. This is a rare opportunity to take a critical backend system that serves high-throughput media operations and evolve it to meet the scale, speed, and intelligence today’s content teams demand. 
What you’ll own Leading the re-architecture of Zoom’s database foundation with a focus on scalability, query performance, and vector-based search support Replacing or refactoring our current in-house object store and metadata database to a modern, high-performance elastic solution Collaborating closely with our core platform engineers and AI/search teams to ensure seamless integration and zero disruption to existing media workflows Designing an extensible system that supports object-style relationships across millions of assets, including LLM-generated digital asset summaries, time-coded video metadata, AI generated tags, and semantic vectors Driving end-to-end implementation: schema design, migration tooling, performance benchmarking, and production rollout—all with aggressive timelines Skills & Experience We Expect We’re looking for candidates with 7–10 years of hands-on engineering experience, including 3+ years in a technical leadership role. Your experience should span the following core areas: System Design & Architecture (3–4 yrs) Strong hands-on experience with the Java/JVM stack (GC tuning), Python in production environments Led system-level design for scalable, modular AWS microservices architectures Designed high-throughput, low-latency media pipelines capable of scaling to billions of media records Familiar with multitenant SaaS patterns, service decomposition, and elastic scale-out/in models Deep understanding of infrastructure observability, failure handling, and graceful degradation Database & Metadata Layer Design (3–5 yrs) Experience redesigning or implementing object-style metadata stores used in MAM/DAM systems Strong grasp of schema-less models for asset relationships, time-coded metadata, and versioned updates Practical experience with DynamoDB, Aurora, PostgreSQL, or similar high-scale databases Comfortable evaluating trade-offs between memory, query latency, and write throughput Semantic Search & Vectors (1–3 yrs) Implemented vector search 
using systems like Weaviate, Pinecone, Qdrant, or Faiss Able to design hybrid (structured + semantic) search pipelines for similarity and natural language use cases Experience tuning vector indexers for performance, memory footprint, and recall Familiar with the basics of embedding generation pipelines and how they are used for semantic search and similarity-based retrieval Worked with MLOps teams to deploy ML inference services (e.g., FastAPI/Docker + GPU-based EC2 or SageMaker endpoints) Understands the limitations of recognition models (e.g., OCR, face/object detection, logo recognition), even if not directly building them Media Asset Workflow (2–4 yrs) Deep familiarity with broadcast and OTT formats: MXF, IMF, DNxHD, ProRes, H.264, HEVC Understanding of proxy workflows in video post-production Experience with digital asset lifecycle: ingest, AI metadata enrichment, media transformation, S3 cloud archiving Hands-on experience working with time-coded metadata (e.g., subtitles, AI tags, shot changes) management in media archives Cloud-Native Architecture (AWS) (3–5 yrs) Strong hands-on experience with ECS, Fargate, Lambda, S3, DynamoDB, Aurora, SQS, EventBridge Experience building serverless or service-based compute models for elastic scaling Familiarity with managing multi-region deployments, failover, and IAM configuration Built cloud-native CI/CD deployment pipelines with event-driven microservices and queue-based workflows Frontend Collaboration & React App Integration (2–3 yrs) Worked closely with React-based frontend teams, especially on desktop-style web applications Familiar with component-based design systems, REST/GraphQL API integration, and optimizing media-heavy UI workflows Able to guide frontend teams on data modeling, caching, and efficient rendering of large asset libraries Experience with Electron for desktop apps How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. 
Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
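The hybrid (structured + semantic) search pipeline this listing describes can be sketched in a few lines: filter assets on structured metadata first, then rank the survivors by embedding similarity. The sketch below is a minimal illustration with hypothetical asset records and a brute-force cosine ranking (the step a vector index such as Faiss or Qdrant would accelerate); it is not Evolphin Zoom's actual implementation.

```python
import math

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy asset catalogue; "embedding" stands in for an LLM-generated vector.
assets = [
    {"id": "clip-1", "format": "ProRes", "embedding": [0.9, 0.1, 0.0]},
    {"id": "clip-2", "format": "H.264",  "embedding": [0.1, 0.9, 0.0]},
    {"id": "clip-3", "format": "ProRes", "embedding": [0.7, 0.6, 0.1]},
]

def hybrid_search(query_vec, metadata_filter, top_k=2):
    # Structured pass: keep only assets matching every metadata predicate.
    candidates = [a for a in assets
                  if all(a.get(k) == v for k, v in metadata_filter.items())]
    # Semantic pass: rank the survivors by embedding similarity.
    ranked = sorted(candidates,
                    key=lambda a: cosine(query_vec, a["embedding"]),
                    reverse=True)
    return [a["id"] for a in ranked[:top_k]]

print(hybrid_search([1.0, 0.0, 0.0], {"format": "ProRes"}))  # → ['clip-1', 'clip-3']
```

At production scale the ranking step would be delegated to an approximate-nearest-neighbour index rather than a linear scan over all assets.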

Posted 19 hours ago

Apply

3.0 - 5.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site


This is a key position supporting a client organization with strong analytics and data science capabilities. There are significant revenue and future opportunities associated with this role. Job Description: Develop and maintain data tables (management, extraction, harmonization, etc.) using GCP/SQL/Snowflake. This involves designing, implementing, and writing optimized code, maintaining complex SQL queries to extract, transform, and load (ETL) data from various tables/sources, and ensuring data integrity and accuracy throughout the data pipeline process. Create and manage data visualizations using Tableau/Power BI. This involves designing and developing interactive dashboards and reports, ensuring visualizations are user-friendly, insightful, and aligned with business requirements, and regularly updating and maintaining dashboards to reflect the latest data and insights. Generate insights and reports to support business decision-making. This includes analyzing data trends and patterns to provide actionable insights, preparing comprehensive reports that summarize key findings and recommendations, and presenting data-driven insights to stakeholders to inform strategic decisions. Handle ad-hoc data requests and provide timely solutions. This involves responding to urgent data requests from various departments; quickly gathering, analyzing, and delivering accurate data to meet immediate business needs; and ensuring ad-hoc solutions are scalable and reusable for future requests. Collaborate with stakeholders to understand and solve open-ended questions. This includes engaging with business users to identify their data needs and challenges, working closely with cross-functional teams to develop solutions for complex, open-ended problems, and translating business questions into analytical tasks to deliver meaningful results. Design and create wireframes and mockups for data visualization projects.
This involves developing wireframes and mockups to plan and communicate visualization ideas, collaborating with stakeholders to refine and finalize visualization designs, and ensuring that wireframes and mockups align with user requirements and best practices. Communicate findings and insights effectively to both technical and non-technical audiences. This includes preparing clear and concise presentations to share insights with diverse audiences, tailoring communication styles to suit the technical proficiency of the audience, and using storytelling techniques to make data insights more engaging and understandable. Perform data manipulation and analysis using Python. This includes utilizing Python libraries such as Pandas, NumPy, and SciPy for data cleaning, transformation, and analysis, developing scripts and automation tools to streamline data processing tasks, and conducting statistical analysis to generate insights from large datasets. Implement basic machine learning models using Python. This involves developing and applying basic machine learning models to enhance data analysis, using libraries such as scikit-learn and TensorFlow for model development and evaluation, and interpreting and communicating the results of machine learning models to stakeholders. Automate data processes using Python. This includes creating automation scripts to streamline repetitive data tasks, implementing scheduling and monitoring of automated processes to ensure reliability, and continuously improving automation workflows to increase efficiency. Requirements: 3 to 5 years of experience in data analysis, reporting, and visualization. This includes a proven track record of working on data projects and delivering impactful results and experience in a similar role within a fast-paced environment. Proficiency in GCP/ SQL/ Snowflake/ Python for data manipulation. 
This includes strong knowledge of GCP/SQL/Snowflake services and tools, advanced SQL skills for complex query writing and optimization, and expertise in Python for data analysis and automation. Strong experience with Tableau/Power BI/Looker Studio for data visualization. This includes demonstrated ability to create compelling and informative dashboards, and familiarity with best practices in data visualization and user experience design. Excellent communication skills, with the ability to articulate complex information clearly. This includes strong written and verbal communication skills, and the ability to explain technical concepts to non-technical stakeholders. Proven ability to solve open-ended questions and handle ad-hoc requests. This includes creative problem-solving skills, a proactive approach to challenges, and flexibility to adapt to changing priorities and urgent requests. Strong problem-solving skills and attention to detail. This includes a keen eye for detail and accuracy in data analysis and reporting, and the ability to identify and resolve data quality issues. Experience in creating wireframes and mockups. This includes proficiency in design tools and effectively translating ideas into visual representations. Ability to work independently and as part of a team. This includes being self-motivated, able to manage multiple tasks simultaneously, and having a collaborative mindset and willingness to support team members. Location: Bangalore Brand: Merkle Time Type: Full time Contract Type: Permanent
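The extract-transform-load work this role describes can be illustrated with a small, self-contained sketch using Python's standard-library sqlite3 module. The table and column names (raw_sales, clean_sales) are hypothetical stand-ins for the GCP/Snowflake tables the role would actually work with.

```python
import sqlite3

# Minimal ETL sketch against an in-memory SQLite database.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Extract: a raw staging table with inconsistent casing and a NULL amount.
cur.execute("CREATE TABLE raw_sales (region TEXT, amount TEXT)")
cur.executemany("INSERT INTO raw_sales VALUES (?, ?)",
                [("north", "100"), ("NORTH", "250"), ("south", None)])

# Transform + Load: harmonize case, drop NULL amounts, aggregate per region.
cur.execute("""
    CREATE TABLE clean_sales AS
    SELECT LOWER(region) AS region, SUM(CAST(amount AS INTEGER)) AS total
    FROM raw_sales
    WHERE amount IS NOT NULL
    GROUP BY LOWER(region)
""")

# The 'south' row is dropped because its amount was NULL.
print(dict(cur.execute("SELECT region, total FROM clean_sales")))  # → {'north': 350}
```

The same shape (stage raw data, transform with SQL, load into a clean table) carries over to Snowflake or BigQuery; only the connector and dialect change.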

Posted 19 hours ago

Apply

0 years

0 Lacs

Mumbai Metropolitan Region

Remote


About The Opportunity We operate at the forefront of India’s Artificial Intelligence & Enterprise Software Solutions sector, building production-grade, large-language-model (LLM) applications that power real-time search, recommendation, and decision-support systems for Fortune-500 clients. Our fully remote engineering pods in Mumbai and Pune transform cutting-edge GenAI research into scalable business value while nurturing a culture of ownership, learning, and rapid iteration. Role & Responsibilities Design and ship GenAI products that fuse Retrieval-Augmented Generation (RAG) with LangChain/LangGraph pipelines for chatbots, semantic search, and agentic workflows. Implement vector-based retrieval by orchestrating FAISS-backed indexes, chunking strategies, and prompt-engineering playbooks that boost LLM precision and recall. Prototype and harden ML models (classification, regression, clustering) in Scikit-learn or PyTorch, then productionise via micro-checkpointing (MCP) and CI/CD. Instrument agentic behaviours that call external tools/APIs, manage memory, and evaluate reasoning traces for safety and ROI. Collaborate cross-functionally with product, design, and MLOps to translate business stories into measurable AI metrics and A/B experiments. Author technical docs & knowledge share to uplevel team expertise in GenAI best practices and responsible-AI compliance. Skills & Qualifications Must-Have 3–7 yrs hands-on experience building LLM-powered applications with LangChain and/or LangGraph. Proven mastery of FAISS (or Pinecone/Weaviate) for vector search, plus solid understanding of embeddings and cosine-similarity maths. Strong foundation in machine-learning algorithms—classification, regression, and model evaluation—with production code in Scikit-learn or equivalent. Ability to craft, debug, and optimise prompt engineering & chunking strategies that minimise token cost while maximising answer quality. 
Fluency in Python; familiarity with software-engineering best practices (Git, unit tests, Docker, MCP-style model checkpoints). Excellent written and verbal communication skills to explain complex GenAI concepts to technical and non-technical stakeholders. Preferred Experience designing agentic frameworks (tool-calling, planning-&-execution loops, reflection) for autonomous task chains. Prior contribution to open-source GenAI libraries or research publications. Exposure to data-pipeline tooling such as Airflow, Spark, or cloud-agnostic serverless runtimes. Skills: GenAI, LangChain, LLM, LangGraph, FAISS, MCP, Agentic, Machine Learning, Classification, Regression, Scikit-learn
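One of the chunking strategies this role would tune can be sketched as a fixed-size sliding window with overlap: overlap keeps sentences that straddle a boundary retrievable from at least one chunk, while chunk size trades recall against token cost. This is a generic illustration (the function name and parameters are hypothetical), not a prescribed LangChain API.

```python
def chunk_text(text, chunk_size=20, overlap=5):
    """Split text into overlapping word-window chunks for retrieval."""
    words = text.split()
    step = chunk_size - overlap  # how far the window advances each iteration
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break  # the last window already covers the tail of the text
    return chunks

# Nine words, windows of four with one word of overlap → three chunks.
print(chunk_text("the quick brown fox jumps over the lazy dog",
                 chunk_size=4, overlap=1))
```

In a real RAG pipeline each chunk would then be embedded and indexed (e.g., in FAISS); character- or token-based windows and semantic splitters are common refinements of the same idea.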

Posted 19 hours ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Role Overview As a Manager-Delivery , you will be at the forefront of managing end-to-end project execution. You will collaborate with Engagement Managers, Account Delivery Managers, and client stakeholders to design, develop, and implement data-driven solutions. Your leadership will be pivotal in ensuring high-quality project delivery, building strong client relationships, and guiding a high-performance team. Key Responsibilities Project Leadership & Execution : Collaborate with internal and client teams to define business requirements and create comprehensive project plans aligned with project scope and objectives. Design effective solutions that enable clients to achieve their goals and optimize their operations. Allocate tasks to team members based on their skills and expertise, ensuring efficient resource utilization. Lead project execution, track milestones, monitor progress, and ensure the project stays within scope, timeline, and budget. Oversee and ensure the quality of deliverables across all project phases, including reports, codes, presentations, and documentation. Team Leadership & Development : Provide both technical and business guidance to team members, fostering a culture of learning and growth. Lead scrum meetings, daily stand-ups, and Weekly Business Reviews (WBR) with clients to ensure alignment on progress and deliverables. Build an environment of mutual trust and respect, encouraging experimentation and the adoption of innovative delivery approaches. Mentor team members to build a high-performance workplace, focusing on skills development and career growth. Quality & Compliance : Ensure compliance with best practices and established processes for quality assurance, including the use of checklists, coding standards, and peer reviews. Develop action plans to improve delivery scores and ensure client satisfaction with project execution. 
Client Engagement & Communication: Work closely with mid-management-level clients, providing clarity on the project’s progress, outcomes, and business impact. Craft and deliver compelling presentations to communicate complex data insights in an understandable way. Balance pragmatic alternatives with ideal solutions, ensuring that business priorities, deadlines, and budgets are managed effectively. Required Skills Technical Skills: Advanced knowledge of probability and statistics. Expertise in Practical Machine Learning, including awareness of key pitfalls and solutions. Intermediate proficiency in SQL and Python. Intermediate knowledge of project management methodologies and tools. Proficiency in MS Office applications: Excel, PowerPoint, and Word. Non-Technical Skills: Strong business acumen with the ability to evaluate the financial impact of decisions. Ability to storyboard presentations effectively and hold productive conversations with mid-management-level clients. Leadership: Proven ability to lead teams, balance priorities, and make data-driven decisions. People Skills: Strong capabilities in conflict resolution, empathy, communication, listening, and negotiation. Self-driven with a strong sense of ownership and accountability. Good to Have Skills Technical Skills: Advanced knowledge of project management methodologies and tools. Advanced proficiency in SQL and Python. Knowledge of advanced data science areas like time series forecasting, Bayesian data analysis, Operations Research, and domain-specific analytics such as Pricing Analytics, Media Mix Modeling, and B2B/B2C Customer Analytics. Non-Technical Skills: Experience in solution proposals, collaborating with growth, customer success, and central solutioning functions to drive business opportunities.

Posted 19 hours ago

Apply

8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Company Overview: Games24x7 is India’s leading and most valuable multi-gaming unicorn. We’re a full-stack gaming company, offering awesome game-playing experiences to over 100 million players through our products: Rummy Circle, India’s first and largest online rummy platform, and My11Circle, the country’s fastest-growing fantasy sports platform. A pioneer in the online skill gaming industry in India, Games24x7 was founded in 2006 when two New York University-trained economists, Bhavin Pandya and Trivikraman Thampy, met at the computer lab and discovered their shared passion for online games. We’ve always been a technology company at heart, and over the last decade and a half, we’ve built the organisation on a strong foundation of ‘the science of gaming’, leveraging behavioural science, artificial intelligence, and machine learning to provide immersive and hyper-personalised gaming experiences to each of our players. Backed by marquee investors including Tiger Global Management, The Raine Group, and Malabar Investment Advisors, Games24x7 is leading the charge in India’s gaming revolution, constantly innovating and offering novel entertainment to players! Our 800+ passionate teammates create their magic from our offices in Mumbai, Bengaluru, New Delhi, and Miami. For more information and career opportunities, you may visit www.games24x7.com. Role Overview: Games24x7 is seeking a highly experienced and results-oriented Associate Director - Analytics to lead the analytics function specifically for our flagship fantasy sports product, My11Circle. This critical leadership role will be responsible for developing and executing the data strategy, building and mentoring a high-performing analytics team, and driving data-informed decisions across all aspects of the My11Circle product lifecycle, from user acquisition and engagement to monetization and retention.
The ideal candidate will possess a strong analytical background, deep understanding of product analytics principles, proven experience in leading analytics teams, and a passion for the online gaming or fantasy sports domain. You will be a strategic thinker with a hands-on approach, capable of translating complex data into actionable insights that directly impact the success of My11Circle. Responsibilities: Strategic Leadership: Develop and champion the overall data and analytics strategy for the My11Circle product, aligning with business objectives and product roadmap. Define key performance indicators (KPIs) and establish robust reporting frameworks to track product performance and user behavior. Proactively identify opportunities for leveraging data to drive product innovation, user growth, and revenue optimization. Collaborate with product management, engineering, marketing, and other stakeholders to understand their data needs and provide actionable insights. Stay abreast of the latest trends and technologies in data analytics, GenAI, and the gaming industry. Team Leadership & Development: Build, mentor, and lead a team of talented data analysts and scientists dedicated to supporting the My11Circle product. Foster a data-driven culture within the team and across the broader organization. Define team roles and responsibilities, set clear performance expectations, and provide regular feedback and coaching. Promote professional development and continuous learning within the analytics team. Product Analytics & Insights Generation: Oversee the design, development, and execution of in-depth analysis on user acquisition, engagement, retention, monetization, and gameplay patterns within My11Circle. Utilize various analytical techniques (e.g., cohort analysis, segmentation, regression, A/B testing analysis) to uncover key insights and trends. Develop and maintain dashboards and reports that provide clear and actionable insights to stakeholders. 
Proactively identify areas of friction or opportunity within the user journey and provide data-backed recommendations for improvement. Drive the adoption of self-service analytics capabilities within the My11Circle team. Experimentation & Optimization: Partner with the product team to design and analyze A/B tests and other experiments to optimize product features, user flows, and marketing campaigns. Establish best practices for experimentation and ensure rigorous statistical analysis of results. Translate experiment findings into actionable recommendations and drive their implementation. Data Infrastructure & Governance: Collaborate with data engineering teams to ensure the availability, accuracy, and reliability of data required for My11Circle analytics. Advocate for and contribute to the development of a scalable and efficient data infrastructure. Ensure compliance with data governance policies and best practices. Explore and evaluate new data analytics tools and technologies to enhance the team's capabilities, including potential applications of GenAI for insights generation and automation. Qualifications: Bachelor's or Master's degree in a quantitative field such as Statistics, Mathematics, Computer Science, Economics, or a related discipline. 8+ years of progressive experience in data analytics, with a significant focus on product analytics. 4+ years of experience leading and managing analytics teams. Deep understanding of the online gaming or fantasy sports industry is highly preferred. Strong proficiency in SQL and experience working with large datasets. Expertise in at least one data visualization tool (e.g., Tableau, Power BI, Looker). Solid understanding of statistical analysis, experimental design, and causal inference. Experience with programming languages for data analysis (e.g., Python, R) is mandatory.
Excellent communication, presentation, and storytelling skills with the ability to translate complex data into clear and actionable insights for both technical and non-technical audiences. Proven ability to collaborate effectively with cross-functional teams. Strong problem-solving skills and a data-driven mindset. Experience with cloud-based data platforms (e.g., AWS, GCP, Azure) is a plus. Familiarity with GenAI tools and their potential applications in data analysis is a plus. Personal Attributes: Passion for data and its ability to drive business decisions. Strong leadership qualities with the ability to inspire and motivate a team. Strategic thinker with a hands-on approach. Excellent analytical and problem-solving skills. Strong communication and interpersonal skills. Ability to thrive in a fast-paced and dynamic environment. Proactive and results-oriented. If you are a passionate and experienced analytics leader with a deep understanding of product analytics and a desire to make a significant impact on a leading fantasy sports platform, we encourage you to apply!
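The rigorous statistical analysis of A/B results this role calls for often comes down to a two-proportion z-test on conversion rates. Below is a minimal standard-library sketch of that textbook test (the numbers are purely illustrative, not any in-house framework):

```python
import math

def ab_test_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for an A/B conversion experiment.

    Returns (z, two-sided p-value) using the pooled standard error.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF, via erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Variant B converts at 13% vs. 10% for A, 2000 users per arm.
z, p = ab_test_z(200, 2000, 260, 2000)
print(round(z, 2), round(p, 4))
```

For small samples or many simultaneous metrics, practitioners would reach for exact tests or multiple-comparison corrections; the z-test is the baseline the listing's "rigorous statistical analysis" starts from.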

Posted 19 hours ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Research Engineer, Applied Research (Biotech AI – Drug Discovery) About the Company Quantiphi is an award-winning AI-first digital engineering company, driven by a deep desire to solve transformational problems at the heart of businesses. Our signature approach combines groundbreaking machine-learning research with disciplined cloud and data-engineering practices to create breakthrough impact at unprecedented speed. Quantiphi has seen 2.5x growth YoY since its inception in 2013 to 3500+ team members globally. For more details, please visit our website or LinkedIn page. About the Applied Research Unit Applied Research is an R&D practice at Quantiphi focused on advancing the frontiers of AI technologies with Applied Machine Learning at its core. We ideate and build novel solutions to high-impact, cutting-edge challenges, with a focus on advanced prototyping and scalable proof of concepts. Within this unit, the AI-Accelerated Drug Discovery practice is a key pillar that aims to apply state-of-the-art AI methodologies to revolutionize the way new therapeutics are discovered and developed. We are committed to driving meaningful scientific breakthroughs by combining strong AI research with deep cross-disciplinary collaboration. Job Description Role Level: Research Engineer Work Location: India Resource Count: 2 The Role This is a unique opportunity to work on scientifically impactful problems at the intersection of AI and biotechnology within Quantiphi Applied Research team. In this role, you will work on the development of core AI models and algorithms aimed at accelerating the drug discovery process. The position focuses on advancing foundational AI techniques such as generative modeling, optimization, and reinforcement learning, applied to molecular and bio-pharmaceutical data. 
The position involves working with a diverse, lively, and proactive group of nerds who are constantly raising the bar on translating the latest AI research in Healthcare and Life Sciences into tangible reusable assets for the community. Hence this would require a high level of conceptual understanding, attention to detail and agility in terms of adaptation to new technologies. While prior experience in the biotech or life sciences domain is highly valued and will elevate the candidate profile, we are equally open to exceptional AI/ML researchers from other domains who are excited to explore and learn the nuances of this rapidly growing field . Please note: This is a core AI research role, not a software engineering or system integration position. We are particularly keen to engage with candidates focused on scientific AI innovation rather than application development or LLM/GenAI-centric workflows . Responsibilities Stay ahead of the AI research curve, focusing on foundational AI methodologies applicable to drug discovery and molecular design. Build rapid prototypes, conduct detailed experimental studies, and develop advanced AI models in areas such as generative modeling, reinforcement learning, graph-based learning, and molecular property prediction. Work closely with interdisciplinary teams including biologists, chemists, and life science domain experts to design scientifically sound AI approaches. Contribute to Quantiphi IP portfolio through the development of novel algorithms, proof of concepts, and potential publications. Drive thought leadership through documentation, knowledge dissemination, and participation in conferences, blogs, webinars, and publications. Publish Research papers in prestigious Conferences and Journals Requirements Must Have: Master’s degree, PhD, or equivalent experience in Computer Science, Artificial Intelligence, Machine Learning, Applied Mathematics, or related fields. 
Minimum work experience required: open to candidates ranging from new graduates to those with 3+ years of post-graduation ML research experience. Strong foundation in AI/ML concepts with hands-on experience in model development, experimental design, and large-scale data analysis. Excellent in-depth understanding of ML concepts and the underlying mathematical know-how. Working knowledge of using NLP with biological sequences. Solid research mindset with a track record of working on complex AI problems; experience with drug discovery datasets is a plus but not a prerequisite. Excellent programming skills in Python, with experience using AI/ML frameworks like PyTorch or TensorFlow. Hands-on experience in developing and deploying models with various deep learning architectures in multiple ML areas such as computer vision, NLP, and statistics. Ability to independently learn new scientific domains and apply AI techniques to novel bio-pharmaceutical problems. Strong communication skills with the ability to present complex ideas in an accessible format across audiences. Ability to translate abstract highlights into understandable insights in multiple knowledge-dissemination formats such as blogs, presentations, paper publications, tutorials, and webinars. Good to Have: Prior exposure to molecular datasets, cheminformatics, bioinformatics, or life sciences. Hands-on experience with in silico techniques in drug discovery. Hands-on experience with HPC workflows on genome datasets. Familiarity with generative chemistry models, graph neural networks, reinforcement learning, or multi-objective optimization. Demonstrated industry research experience will be considered an additional bonus. Research publications in AI/ML conferences such as NeurIPS, ICML, ICLR, or relevant bioinformatics journals. Experience with cloud environments like GCP or AWS and scalable model training. Strong classical education in math/physics/mechanics/CS/engineering concepts will also be an advantage. Why Join Us?
Opportunity to work at the cutting edge of AI and biotechnology, solving problems with real-world scientific impact. Exposure to interdisciplinary teams and a culture that encourages continuous learning and exploration. Contribute to an R&D environment that values curiosity, innovation, and the advancement of AI for good.

Posted 19 hours ago

Apply

4.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site


Job Description Skills & Qualifications 4+ years of experience as a Python developer with strong client communication skills and team leadership experience. In-depth knowledge of Python frameworks such as Django, Flask, or FastAPI. Strong expertise in cloud technologies (AWS, Azure, GCP). Deep understanding of microservices architecture, multi-tenant architecture, and best practices in Python development. Familiarity with serverless architecture and frameworks like AWS Lambda or Azure Functions. Experience with deployment using Docker, Nginx, Gunicorn, Uvicorn, and Supervisor. Hands-on experience with SQL and NoSQL databases such as PostgreSQL and AWS DynamoDB. Proficiency with Object Relational Mappers (ORMs) like SQLAlchemy and Django ORM. Demonstrated ability to handle multiple API integrations and write modular, reusable code. Experience with frontend technologies such as React, Vue, HTML, CSS, and JavaScript to enhance full-stack development capabilities. Strong knowledge of user authentication and authorization mechanisms across multiple systems and environments. Familiarity with scalable application design principles and event-driven programming in Python. Solid experience in unit testing, debugging, and code optimization. Hands-on experience with modern software development methodologies, including Agile and Scrum. Familiarity with container orchestration tools like Kubernetes. Understanding of data processing frameworks such as Apache Kafka and Spark (Good to have). Experience with CI/CD pipelines and automation tools like Jenkins, GitLab CI, or CircleCI. (ref:hirist.tech)

Posted 19 hours ago

Apply

6.0 years

60 - 65 Lacs

Ahmedabad, Gujarat, India

Remote

Linkedin logo

Experience : 6.00 + years Salary : INR 6000000-6500000 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: Crop.Photo) (*Note: This is a requirement for one of Uplers' client - Crop.Photo) What do you need for this opportunity? Must have skills required: MAM, App integration Crop.Photo is Looking for: Technical Lead for Evolphin AI-Driven MAM At Evolphin, we build powerful media asset management solutions used by some of the world’s largest broadcasters, creative agencies, and global brands. Our flagship platform, Zoom, helps teams manage high-volume media workflows—from ingest to archive—with precision, performance, and AI-powered search. We’re now entering a major modernization phase, and we’re looking for an exceptional Technical Lead to own and drive the next-generation database layer powering Evolphin Zoom. This is a rare opportunity to take a critical backend system that serves high-throughput media operations and evolve it to meet the scale, speed, and intelligence today’s content teams demand. 
What you'll own

  • Leading the re-architecture of Zoom's database foundation with a focus on scalability, query performance, and vector-based search support
  • Replacing or refactoring our current in-house object store and metadata database with a modern, high-performance elastic solution
  • Collaborating closely with our core platform engineers and AI/search teams to ensure seamless integration and zero disruption to existing media workflows
  • Designing an extensible system that supports object-style relationships across millions of assets, including LLM-generated digital asset summaries, time-coded video metadata, AI-generated tags, and semantic vectors
  • Driving end-to-end implementation: schema design, migration tooling, performance benchmarking, and production rollout, all on aggressive timelines

Skills & Experience We Expect

We're looking for candidates with 7-10 years of hands-on engineering experience, including 3+ years in a technical leadership role. Your experience should span the following core areas:

System Design & Architecture (3-4 yrs)

  • Strong hands-on experience with the Java/JVM stack (including GC tuning) and Python in production environments
  • Led system-level design for scalable, modular AWS microservices architectures
  • Designed high-throughput, low-latency media pipelines capable of scaling to billions of media records
  • Familiar with multitenant SaaS patterns, service decomposition, and elastic scale-out/in models
  • Deep understanding of infrastructure observability, failure handling, and graceful degradation

Database & Metadata Layer Design (3-5 yrs)

  • Experience redesigning or implementing object-style metadata stores used in MAM/DAM systems
  • Strong grasp of schema-less models for asset relationships, time-coded metadata, and versioned updates
  • Practical experience with DynamoDB, Aurora, PostgreSQL, or similar high-scale databases
  • Comfortable evaluating trade-offs between memory, query latency, and write throughput

Semantic Search & Vectors (1-3 yrs)

  • Implemented vector search using systems like Weaviate, Pinecone, Qdrant, or Faiss
  • Able to design hybrid (structured + semantic) search pipelines for similarity and natural-language use cases
  • Experience tuning vector indexes for performance, memory footprint, and recall
  • Familiar with the basics of embedding-generation pipelines and how they are used for semantic search and similarity-based retrieval
  • Worked with MLOps teams to deploy ML inference services (e.g., FastAPI/Docker + GPU-based EC2 or SageMaker endpoints)
  • Understands the limitations of recognition models (e.g., OCR, face/object detection, logo recognition), even if not directly building them

Media Asset Workflow (2-4 yrs)

  • Deep familiarity with broadcast and OTT formats: MXF, IMF, DNxHD, ProRes, H.264, HEVC
  • Understanding of proxy workflows in video post-production
  • Experience with the digital asset lifecycle: ingest, AI metadata enrichment, media transformation, S3 cloud archiving
  • Hands-on experience managing time-coded metadata (e.g., subtitles, AI tags, shot changes) in media archives

Cloud-Native Architecture (AWS) (3-5 yrs)

  • Strong hands-on experience with ECS, Fargate, Lambda, S3, DynamoDB, Aurora, SQS, EventBridge
  • Experience building serverless or service-based compute models for elastic scaling
  • Familiarity with managing multi-region deployments, failover, and IAM configuration
  • Built cloud-native CI/CD deployment pipelines with event-driven microservices and queue-based workflows

Frontend Collaboration & React App Integration (2-3 yrs)

  • Worked closely with React-based frontend teams, especially on desktop-style web applications
  • Familiar with component-based design systems, REST/GraphQL API integration, and optimizing media-heavy UI workflows
  • Able to guide frontend teams on data modeling, caching, and efficient rendering of large asset libraries
  • Experience with Electron for desktop apps

How to apply for this opportunity?

Step 1: Click on Apply! and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meet the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!



1.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site


Red & White Education Pvt Ltd, founded in 2008, is Gujarat's leading educational institute. Accredited by NSDC and ISO, we focus on Integrity, Student-Centricity, Innovation, and Unity. Our goal is to equip students with industry-relevant skills and ensure they are employable globally. Join us for a successful career path.

Salary: 30K CTC to 35K CTC

Job Description: Faculty members guide students, deliver course materials, conduct lectures, assess performance, and provide mentorship. Strong communication skills and a commitment to supporting students are essential.

Key Responsibilities

  • Deliver high-quality lectures on AI, Machine Learning, and Data Science.
  • Design and update course materials, assignments, and projects.
  • Guide students on hands-on projects, real-world applications, and research work.
  • Provide mentorship and support for student learning and career development.
  • Stay updated with the latest trends and advancements in AI/ML and Data Science.
  • Conduct assessments, evaluate student progress, and provide feedback.
  • Participate in curriculum development and improvements.

Skills & Tools

  • Core Skills: ML, Deep Learning, NLP, Computer Vision, Business Intelligence, AI Model Development, Business Analysis.
  • Programming: Python, SQL (must), Pandas, NumPy, Excel.
  • ML & AI Tools: Scikit-learn (must), XGBoost, LightGBM, TensorFlow, PyTorch (must), Keras, Hugging Face.
  • Data Visualization: Tableau, Power BI (must), Matplotlib, Seaborn, Plotly.
  • NLP & CV: Transformers, BERT, GPT, OpenCV, YOLO, Detectron2.
  • Advanced AI: Transfer Learning, Generative AI, Business Case Studies.

Education & Experience Requirements

  • Bachelor's/Master's/Ph.D. in Computer Science, AI, Data Science, or a related field.
  • Minimum 1+ years of teaching or industry experience in AI/ML and Data Science.
  • Hands-on experience with Python, SQL, TensorFlow, PyTorch, and other AI/ML tools.
  • Practical exposure to real-world AI applications, model deployment, and business analytics.
For further information, please feel free to contact us at 7862813693 or via email at career@rnwmultimedia.edu.in.



Exploring Python Jobs in India

Python has become one of the most popular programming languages in India, with a high demand for skilled professionals across various industries. Job seekers in India have a plethora of opportunities in the field of Python development. Let's delve into the key aspects of the Python job market in India:

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Chennai
  5. Mumbai

Average Salary Range

The average salary range for Python professionals in India varies based on experience levels. Entry-level positions can expect a salary between INR 3-6 lakhs per annum, while experienced professionals can earn between INR 8-20 lakhs per annum.

Career Path

In the field of Python development, a typical career path may include roles such as Junior Developer, Developer, Senior Developer, Team Lead, and eventually progressing to roles like Tech Lead or Architect.

Related Skills

In addition to Python proficiency, employers often expect professionals to have skills in areas such as:

  • Data Structures and Algorithms
  • Object-Oriented Programming
  • Web Development frameworks (e.g., Django, Flask)
  • Database management (e.g., SQL, NoSQL)
  • Version control systems (e.g., Git)

Interview Questions

  • What is the difference between list and tuple in Python? (basic)
  • Explain the concept of list comprehensions in Python. (basic)
  • What are decorators in Python? (medium)
  • How does memory management work in Python? (medium)
  • Differentiate between __str__ and __repr__ methods in Python. (medium)
  • Explain the Global Interpreter Lock (GIL) in Python. (advanced)
  • How can you handle exceptions in Python? (basic)
  • What is the purpose of the __init__ method in Python? (basic)
  • What is a lambda function in Python? (basic)
  • Explain the use of generators in Python. (medium)
  • What are the different data types available in Python? (basic)
  • Write a Python code to reverse a string. (basic)
  • How would you implement multithreading in Python? (medium)
  • Explain the concept of PEP 8 in Python. (basic)
  • What is the difference between append() and extend() methods in Python lists? (basic)
  • How do you handle circular references in Python? (medium)
  • Explain the use of virtual environments in Python. (basic)
  • Write a Python code to find the factorial of a number using recursion. (medium)
  • What is the purpose of __name__ variable in Python? (medium)
  • How can you create a virtual environment in Python? (basic)
  • Explain the concept of pickling and unpickling in Python. (medium)
  • What is the purpose of the pass statement in Python? (basic)
  • How do you debug a Python program? (medium)
  • Explain the concept of namespaces in Python. (medium)
  • What are the different ways to handle file input and output operations in Python? (medium)
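A few of the coding questions above can be answered in a handful of lines. The sketch below covers reversing a string, recursive factorial, generators, and the append()/extend() distinction; the function names are illustrative, not part of any prescribed interview answer:

```python
def reverse_string(s: str) -> str:
    # Slicing with a step of -1 walks the string backwards.
    return s[::-1]


def factorial(n: int) -> int:
    # Recursive definition: n! = n * (n - 1)!, with 0! = 1.
    if n < 0:
        raise ValueError("factorial is undefined for negative numbers")
    return 1 if n == 0 else n * factorial(n - 1)


def squares(limit: int):
    # A generator yields values lazily instead of building a full list in memory.
    for i in range(limit):
        yield i * i


def append_vs_extend():
    a = [1, 2]
    a.append([3, 4])   # append adds its argument as a single element
    b = [1, 2]
    b.extend([3, 4])   # extend adds each element of the iterable individually
    return a, b


if __name__ == "__main__":
    print(reverse_string("Python"))   # nohtyP
    print(factorial(5))               # 120
    print(list(squares(4)))           # [0, 1, 4, 9]
    print(append_vs_extend())         # ([1, 2, [3, 4]], [1, 2, 3, 4])
```

Being able to explain the trade-offs behind each answer (e.g., why a generator saves memory, or why append() nests a list) usually matters more in an interview than the code itself.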

Closing Remark

As you explore Python job opportunities in India, remember to brush up on your skills, prepare for interviews diligently, and apply confidently. The demand for Python professionals is on the rise, and this could be your stepping stone to a rewarding career in the tech industry. Good luck!
