6.0 years
60 - 65 Lacs
Greater Bhopal Area
Remote
Experience: 6.00+ years
Salary: INR 6,000,000-6,500,000 per year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by Crop.Photo)
(Note: This is a requirement for one of Uplers' clients - Crop.Photo)

What do you need for this opportunity?
Must-have skills: MAM, app integration

Crop.Photo is looking for: Technical Lead for Evolphin AI-Driven MAM

At Evolphin, we build powerful media asset management solutions used by some of the world's largest broadcasters, creative agencies, and global brands. Our flagship platform, Zoom, helps teams manage high-volume media workflows, from ingest to archive, with precision, performance, and AI-powered search. We're now entering a major modernization phase, and we're looking for an exceptional Technical Lead to own and drive the next-generation database layer powering Evolphin Zoom. This is a rare opportunity to take a critical backend system that serves high-throughput media operations and evolve it to meet the scale, speed, and intelligence today's content teams demand.

What you'll own
- Leading the re-architecture of Zoom's database foundation with a focus on scalability, query performance, and vector-based search support
- Replacing or refactoring our current in-house object store and metadata database with a modern, high-performance elastic solution
- Collaborating closely with our core platform engineers and AI/search teams to ensure seamless integration and zero disruption to existing media workflows
- Designing an extensible system that supports object-style relationships across millions of assets, including LLM-generated digital asset summaries, time-coded video metadata, AI-generated tags, and semantic vectors
- Driving end-to-end implementation: schema design, migration tooling, performance benchmarking, and production rollout, all on aggressive timelines

Skills & Experience We Expect
We're looking for candidates with 7–10 years of hands-on engineering experience, including 3+ years in a technical leadership role. Your experience should span the following core areas:

System Design & Architecture (3–4 yrs)
- Strong hands-on experience with the Java/JVM stack (including GC tuning) and Python in production environments
- Led system-level design for scalable, modular AWS microservices architectures
- Designed high-throughput, low-latency media pipelines capable of scaling to billions of media records
- Familiar with multitenant SaaS patterns, service decomposition, and elastic scale-out/in models
- Deep understanding of infrastructure observability, failure handling, and graceful degradation

Database & Metadata Layer Design (3–5 yrs)
- Experience redesigning or implementing object-style metadata stores used in MAM/DAM systems
- Strong grasp of schema-less models for asset relationships, time-coded metadata, and versioned updates
- Practical experience with DynamoDB, Aurora, PostgreSQL, or similar high-scale databases
- Comfortable evaluating trade-offs between memory, query latency, and write throughput

Semantic Search & Vectors (1–3 yrs) (a minimal vector-search sketch follows this listing)
- Implemented vector search using systems like Weaviate, Pinecone, Qdrant, or Faiss
- Able to design hybrid (structured + semantic) search pipelines for similarity and natural-language use cases
- Experience tuning vector indexes for performance, memory footprint, and recall
- Familiar with the basics of embedding-generation pipelines and how they are used for semantic search and similarity-based retrieval
- Worked with MLOps teams to deploy ML inference services (e.g., FastAPI/Docker + GPU-based EC2 or SageMaker endpoints)
- Understands the limitations of recognition models (e.g., OCR, face/object detection, logo recognition), even if not directly building them

Media Asset Workflow (2–4 yrs)
- Deep familiarity with broadcast and OTT formats: MXF, IMF, DNxHD, ProRes, H.264, HEVC
- Understanding of proxy workflows in video post-production
- Experience with the digital asset lifecycle: ingest, AI metadata enrichment, media transformation, S3 cloud archiving
- Hands-on experience managing time-coded metadata (e.g., subtitles, AI tags, shot changes) in media archives

Cloud-Native Architecture (AWS) (3–5 yrs)
- Strong hands-on experience with ECS, Fargate, Lambda, S3, DynamoDB, Aurora, SQS, EventBridge
- Experience building serverless or service-based compute models for elastic scaling
- Familiarity with managing multi-region deployments, failover, and IAM configuration
- Built cloud-native CI/CD deployment pipelines with event-driven microservices and queue-based workflows

Frontend Collaboration & React App Integration (2–3 yrs)
- Worked closely with React-based frontend teams, especially on desktop-style web applications
- Familiar with component-based design systems, REST/GraphQL API integration, and optimizing media-heavy UI workflows
- Able to guide frontend teams on data modeling, caching, and efficient rendering of large asset libraries
- Experience with Electron for desktop apps

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of being shortlisted and meeting the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
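The semantic-search requirement above names vector engines such as Faiss. Purely as an illustration (not part of the posting), here is a minimal sketch of indexing toy embeddings and running a nearest-neighbour query with Faiss; the dimensionality, array sizes, and data are arbitrary assumptions.

```python
# Minimal Faiss sketch: index synthetic embeddings and query nearest neighbours.
# Assumes faiss-cpu and numpy are installed; all data here is made up.
import numpy as np
import faiss

dim = 128                                                     # embedding size (arbitrary)
rng = np.random.default_rng(0)
asset_vectors = rng.random((10_000, dim), dtype=np.float32)   # stand-in for asset embeddings

index = faiss.IndexFlatL2(dim)      # exact L2 index; IVF/HNSW variants trade accuracy for scale
index.add(asset_vectors)            # add all vectors to the index

query = rng.random((1, dim), dtype=np.float32)
distances, ids = index.search(query, 5)   # top-5 most similar assets
print(ids[0], distances[0])
```

In a hybrid (structured + semantic) pipeline, the returned ids would typically be joined back against the metadata store to apply structured filters.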
Posted 23 hours ago
0.0 - 5.0 years
5 - 9 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Apply Link: https://forms.gle/VQWXfd2AjdZ9B25V8

Software Development - Instructor

About NxtWave
NxtWave is one of India's fastest-growing ed-tech startups, revolutionizing the 21st-century job market. NxtWave is transforming youth into highly skilled tech professionals through its CCBP 4.0 programs, regardless of their educational background. NxtWave was founded by Rahul Attuluri (ex-Amazon, IIIT Hyderabad), Sashank Reddy (IIT Bombay), and Anupam Pedarla (IIT Kharagpur). Supported by Orios Ventures, Better Capital, and Marquee Angels, NxtWave raised $33 million in 2023 from Greater Pacific Capital.

As an official partner of NSDC (under the Ministry of Skill Development & Entrepreneurship, Govt. of India) and recognized by NASSCOM, NxtWave has earned a reputation for excellence. Some of its prestigious recognitions include:
- Technology Pioneer 2024 by the World Economic Forum, one of only 100 startups chosen globally
- Startup Spotlight Award of the Year by T-Hub in 2023
- Best Tech Skilling EdTech Startup of the Year 2022 by Times Business Awards
- The Greatest Brand in Education in a research-based listing by URS Media

NxtWave founders Anupam Pedarla and Sashank Gujjula were honoured in the 2024 Forbes India 30 Under 30 for their contributions to tech education. NxtWave breaks learning barriers by offering vernacular content for better comprehension and retention. NxtWave now has paid subscribers from 650+ districts across India. Its learners are hired by over 2,000 companies, including Amazon, Accenture, IBM, Bank of America, TCS, Deloitte, and more.

Know more about NxtWave: https://www.ccbp.in
Read more about us in the news - Economic Times | CNBC | Yourstory | VCCircle

About NxtWave Institute of Advanced Technologies (NIAT)
NIAT is NxtWave's flagship four-year, on-campus program for computer science education, designed to offer one of India's most advanced industry-aligned curricula. Situated in the heart of Hyderabad's tech landscape, NIAT's new-age campus is surrounded by global giants like Google, Microsoft, Apple, Infosys, TCS, and many more, providing students with unparalleled exposure to the world of technology. At NIAT, world-class software engineers are the mentors who work hand-in-hand with students, ensuring they graduate as industry-ready professionals. With a curriculum that seamlessly integrates real-world tech requirements, NIAT prepares students to thrive in an ever-evolving tech world. NIAT's 2024-2028 admissions cycle was a massive success, with all seats filling up rapidly and a long waitlist for admissions, further solidifying NIAT's reputation as a premier destination for aspiring tech leaders.

Know more about NIAT: https://www.niatindia.com/
Read more about us in the news - Economic Times | CNBC | Yourstory | VCCircle

Job Description
At NxtWave, we believe in delivering practical, industry-relevant training that empowers students to become great developers. Our product developers are passionate about teaching, simplifying complex concepts, and creating inclusive learning environments for students. This is your chance to make a lasting impact on students who have just completed their 12th standard and are eager to excel as developers.

Key Responsibilities
- Deliver daily in-person classroom training on programming and/or full-stack development.
- Design, develop, and implement learning activities, materials, and resources that align with industry standards.
- Provide personalized learning experiences by understanding student needs and delivering tailored support throughout the program.
- Actively assist and resolve student queries and issues promptly, providing mentorship and guidance.
- Contribute to curriculum development and improvements based on student feedback and industry trends.
- Continuously develop and demonstrate a teaching philosophy that inspires student learning.
- Review student deliverables for accuracy and quality.
- Handle a class size of 70-100 students, ensuring engagement and effective learning outcomes.
- Stay current with professional development in both pedagogy and software development practices.

Requirements
- A Master's degree (M.Tech) in CSE, IT, or a related technical background is an added advantage.
- Teaching or training experience in computer science is an added advantage.
- Passion for teaching and mentoring, with a commitment to student success.
- Alignment with NxtWave's vision and culture.

Skills

Must-Have:
- Professional fluency in English, with excellent communication and presentation skills.
- Strong proficiency in Python, Java, and JavaScript; knowledge of additional programming languages is an added advantage.
- Strong proficiency in data structures and algorithms (a short illustrative example follows this listing).
- Strong knowledge of object-oriented programming.
- Proficiency in content development using tools like Google Sheets, Google Slides, etc. (knowledge of the Microsoft 365 stack is a plus).
- Ability to quickly learn and use technology platforms to interact with students.
- Empathy, ambition, and the ability to work closely with individuals from diverse backgrounds and cultures.

Good to Have:
- Familiarity with Git and version control systems.
- Strong knowledge of the subject matter, industry standards, and best practices in software development.
- Ability to adapt teaching methods to various learning styles and requirements.
- Strong problem-solving and solution-seeking mindset.
- Openness to constructive feedback and continuous improvement.
- A sense of ownership, initiative, and drive for delivering high-quality teaching outcomes.

Job Overview
- Working days: 6 days a week
- Type of employment: 2 months of training, followed by full-time employment
- CTC: Up to Rs 25,000 during training; 5.6-10 LPA after training, based on performance
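The role expects classroom-level fluency in Python plus data structures and algorithms. As an illustration of that level only (not taken from the posting), a typical classroom example might be an iterative binary search:

```python
# Classroom-style DSA example: iterative binary search over a sorted list, O(log n) time.
def binary_search(sorted_items, target):
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid            # index of the target
        if sorted_items[mid] < target:
            low = mid + 1         # discard the left half
        else:
            high = mid - 1        # discard the right half
    return -1                     # not found

print(binary_search([2, 5, 8, 12, 16, 23, 38], 23))  # -> 5
```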
Posted 23 hours ago
25.0 years
0 Lacs
Kochi, Kerala, India
On-site
Company Overview
Milestone Technologies is a global IT managed services firm that partners with organizations to scale their technology, infrastructure, and services to drive specific business outcomes such as digital transformation, innovation, and operational agility. Milestone is focused on building an employee-first, performance-based culture, and for over 25 years we have had a demonstrated history of supporting category-defining enterprise clients that are growing ahead of the market. The company specializes in providing solutions across Application Services and Consulting, Digital Product Engineering, Digital Workplace Services, Private Cloud Services, AI/Automation, and ServiceNow. Milestone's culture is built to provide a collaborative, inclusive environment that supports employees and empowers them to reach their full potential.

Our seasoned professionals deliver services based on Milestone's best practices and service delivery framework. By leveraging our vast knowledge base to execute initiatives, we deliver both short-term and long-term value to our clients and apply continuous service improvement to deliver transformational benefits to IT. With Intelligent Automation, Milestone helps businesses further accelerate their IT transformation. The result is a sharper focus on business objectives and a dramatic improvement in employee productivity. Through our key technology partnerships and our people-first approach, Milestone continues to deliver industry-leading innovation to our clients. With more than 3,000 employees serving over 200 companies worldwide, we are following our mission of revolutionizing the way IT is deployed.

Job Summary
We are looking for a skilled Power BI Analyst with at least 3 years of experience in Power BI visualizations and a deep understanding of SQL. The ideal candidate will be responsible for creating interactive and insightful dashboards, optimizing data models, and ensuring data accuracy for business decision-making. This role requires strong analytical skills, business acumen, and the ability to transform complex datasets into meaningful insights.

Key Responsibilities

Power BI Development & Visualization
- Design and develop interactive dashboards and reports in Power BI that provide actionable insights to business users.
- Optimize data models, measures, and DAX calculations for efficient performance and accurate reporting.
- Create visually compelling charts, graphs, and KPIs to enhance decision-making across various business functions.
- Ensure the accuracy and consistency of reports by implementing data validation and cleansing techniques.
- Work closely with stakeholders to understand business requirements and translate them into impactful data visualizations.

SQL & Data Management (a query sketch follows this listing)
- Write and optimize complex SQL queries to extract, manipulate, and analyse large datasets from multiple sources.
- Ensure data integrity by troubleshooting and resolving SQL-related issues.
- Assist in data modelling and ETL processes to improve the efficiency of data pipelines.
- Work with relational databases like SQL Server, PostgreSQL, MySQL, Snowflake, or Vertica.

Collaboration & Stakeholder Management
- Partner with business teams to gather reporting needs and translate them into data-driven insights.
- Provide training and support to business users on Power BI dashboard usage.
- Work closely with data engineers, analysts, and IT teams to enhance data availability and quality.

Required Qualifications & Experience
- 3+ years of experience in Power BI development with strong expertise in DAX and Power Query.
- Proficiency in SQL with the ability to write and optimize complex queries.
- Strong understanding of data visualization best practices and dashboard performance optimization.
- Hands-on experience working with large datasets and relational databases.
- Experience integrating Power BI with different data sources (SQL Server, APIs, Excel, cloud data warehouses, etc.).

Preferred
- Experience with ETL tools, data modelling, and data warehousing concepts.
- Knowledge of Python or R for advanced data analysis (nice to have).
- Exposure to cloud platforms like Azure, AWS, or Google Cloud for data processing.
- Understanding of business intelligence (BI) and reporting frameworks.

Skills & Competencies
- Power BI Mastery: expert in building interactive dashboards, reports, and data visualizations.
- SQL Expertise: ability to handle complex queries and optimize database performance.
- Problem Solving: strong analytical and critical thinking skills.
- Communication: ability to explain technical insights to non-technical stakeholders.
- Attention to Detail: ensuring accuracy and reliability in reporting.
- Business Acumen: understanding business needs and translating them into data-driven solutions.

Compensation
Estimated Pay Range: Exact compensation and offers of employment are dependent on the circumstances of each case and will be determined based on job-related knowledge, skills, experience, licenses or certifications, and location.

Our Commitment to Diversity & Inclusion
At Milestone, we strive to create a workplace that reflects the communities we serve and work with, where we all feel empowered to bring our full, authentic selves to work. We know that creating a diverse and inclusive culture that champions equity and belonging is not only the right thing to do for our employees but is also critical to our continued success. Milestone Technologies provides equal employment opportunity for all applicants and employees. All qualified applicants will receive consideration for employment and will not be discriminated against on the basis of race, color, religion, gender, gender identity, marital status, age, disability, veteran status, sexual orientation, national origin, or any other category protected by applicable federal and state law, or local ordinance. Milestone also makes reasonable accommodations for disabled applicants and employees. We welcome the unique background, culture, experiences, knowledge, innovation, self-expression, and perspectives you can bring to our global community. Our recruitment team is looking forward to meeting you.
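The SQL & Data Management duties above center on writing aggregation queries that feed reports. As an illustration only, the sketch below runs a typical revenue-by-region-and-month rollup against an in-memory SQLite database; the table, columns, and values are hypothetical and chosen just to show the query shape.

```python
# Illustrative only: the kind of aggregation query that might feed a report dataset.
# Uses the standard-library sqlite3 driver; table and column names are made up.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, order_date TEXT, amount REAL);
    INSERT INTO sales VALUES
        ('North', '2024-01-05', 120.0),
        ('North', '2024-02-11', 80.0),
        ('South', '2024-01-20', 200.0);
""")

# Aggregate revenue by region and month -- a common shape for a dashboard table.
query = """
    SELECT region,
           strftime('%Y-%m', order_date) AS month,
           SUM(amount)                   AS revenue
    FROM sales
    GROUP BY region, month
    ORDER BY region, month;
"""
for row in conn.execute(query):
    print(row)
```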
Posted 23 hours ago
6.0 years
60 - 65 Lacs
Indore, Madhya Pradesh, India
Remote
Posted 23 hours ago
3.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
We are looking for a Senior Software Engineer - AI with 3+ years of hands-on experience in Artificial Intelligence/ML and a passion for innovation. This role is ideal for someone who thrives in a startup environment: fast-paced, product-driven, and full of opportunities to make a real impact. You will contribute to building intelligent, scalable, and production-grade AI systems, with a strong focus on Generative AI and Agentic AI technologies.

Roles and Responsibilities
- Build and deploy AI-driven applications and services, focusing on Generative AI and Large Language Models (LLMs) (a minimal serving sketch follows this listing).
- Design and implement Agentic AI systems: autonomous agents capable of planning and executing multi-step tasks.
- Collaborate with cross-functional teams, including product, design, and engineering, to integrate AI capabilities into products.
- Write clean, scalable code and build robust APIs and services to support AI model deployment.
- Own feature delivery end-to-end, from research and experimentation to deployment and monitoring.
- Stay current with emerging AI frameworks, tools, and best practices and apply them in product development.
- Contribute to a high-performing team culture and mentor junior team members as needed.

Skill Set
- 3-6 years of overall software development experience, with 3+ years specifically in AI/ML engineering.
- Strong proficiency in Python, with hands-on experience in PyTorch, TensorFlow, and Transformers (Hugging Face).
- Proven experience working with LLMs (e.g., GPT, Claude, Mistral) and Generative AI models (text, image, or audio).
- Practical knowledge of Agentic AI frameworks (e.g., LangChain, AutoGPT, Semantic Kernel).
- Experience building and deploying ML models to production environments.
- Familiarity with vector databases (Pinecone, Weaviate, FAISS) and prompt engineering concepts.
- Comfortable working in a startup-like environment: self-motivated, adaptable, and willing to take ownership.
- Solid understanding of API development, version control, and modern DevOps/MLOps practices.
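The responsibilities above mention building APIs that serve generative models. As a rough, hedged sketch only (not the company's actual stack), the example below wraps a small Hugging Face text-generation pipeline in a FastAPI endpoint; the model name and endpoint path are placeholders.

```python
# Minimal sketch of serving a generative model behind an HTTP API.
# Requires fastapi, uvicorn, and transformers; "gpt2" is a small stand-in model.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()
generator = pipeline("text-generation", model="gpt2")   # placeholder model

class Prompt(BaseModel):
    text: str
    max_new_tokens: int = 50

@app.post("/generate")                                   # hypothetical endpoint path
def generate(prompt: Prompt):
    out = generator(prompt.text,
                    max_new_tokens=prompt.max_new_tokens,
                    num_return_sequences=1)
    return {"completion": out[0]["generated_text"]}

# Run locally (assuming this file is saved as app.py):
#   uvicorn app:app --reload
```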
Posted 23 hours ago
1.0 - 4.0 years
20 - 22 Lacs
Gurugram
Work from Office
Must Have
- Prior experience in spend analytics, profitability analysis, cost optimization, or cost reduction analytics (not sourcing/procurement analytics), plus a good academic record.
- Note: We do not need sourcing/procurement candidates; preference for candidates from vantage/transformation/Big 4 organizations.
- Out-of-location candidates should be open to relocating to Delhi/NCR and to working from the office 3 days a week.
- We prefer early associates (UG: 2-4 years of experience; PG: 0.5-2 years) to avoid expectations of the next level.

Preferred candidate profile

Must Have
- Proficiency in Python is a must; an intermediate level will also do (a small illustrative analysis sketch follows this listing).
- Strong problem solving (guesstimates, case studies, and mathematical questions).
- Strong communication.

Good to Have
- Visualization tools (Alteryx/Power BI).
- Good aptitude, logic, and reasoning.
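For illustration only (not from the posting), the kind of Python work a spend-analytics associate does often starts with a simple category rollup; the data and column names below are entirely hypothetical.

```python
# Toy spend-analytics example: rank categories by total spend and compute each share.
# Data and column names are hypothetical, purely to illustrate the workflow.
import pandas as pd

spend = pd.DataFrame({
    "category": ["IT", "Travel", "IT", "Facilities", "Travel"],
    "amount":   [12000, 3000, 8000, 5000, 7000],
})

by_category = (
    spend.groupby("category", as_index=False)["amount"].sum()
         .sort_values("amount", ascending=False)
)
by_category["share_pct"] = 100 * by_category["amount"] / by_category["amount"].sum()
print(by_category)   # largest cost drivers first
```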
Posted 23 hours ago
10.0 - 18.0 years
15 - 30 Lacs
Pune, Bengaluru
Work from Office
Role & responsibilities AWS with Databricks infra lead Experienced in setting up the Unity Catalog s Setting out how the group is to consume the model serving processes, Developing MLflow routines, Experienced ML models, Have used Gen AI features with guardrails, experimentation, and monitoring
Posted 23 hours ago
6.0 years
60 - 65 Lacs
Chandigarh, India
Remote
Posted 23 hours ago
6.0 years
60 - 65 Lacs
Mysore, Karnataka, India
Remote
Posted 23 hours ago
0 years
0 Lacs
Mysore, Karnataka, India
On-site
About The Opportunity
Operating at the cutting edge of Aerospace & Unmanned Aerial Systems (UAS), our Mobility Solutions division engineers next-generation ground-control hardware and software that connect autonomous aircraft to operators across complex environments. From mission-planning GUIs to secure telemetry links, we tackle real-time challenges where reliability, safety, and intuitive UX converge.

Role & Responsibilities
- Co-develop ground-control software and workstation hardware for mission planning, telemetry monitoring, and command-and-control of multi-rotor and fixed-wing UAV fleets.
- Integrate the GCS with avionics, nav systems, and SATCOM/RF links, collaborating closely with flight-control, payload, and networking teams to ensure seamless data flow.
- Write, debug, and unit-test code in C/C++, Python, or Java; contribute to modular architectures that scale from desktop to ruggedized field stations.
- Configure, calibrate, and troubleshoot ground stations for lab, field-test, and customer demos, documenting best-practice deployment playbooks.
- Author and execute verification plans (SIL/HIL, regression, environmental) to validate performance, safety, and airworthiness compliance under diverse conditions.
- Analyse flight-test data to uncover issues, drive root-cause analysis, and recommend design or process improvements.

Skills & Qualifications

Must-Have
- Bachelor's degree in Computer Science, Aerospace, Electronics, Robotics, or a related discipline.
- 3-6 yrs of experience building or testing ground-control stations, mission-planning software, or real-time operator consoles for UAVs or similar robotics.
- Proficiency in C/C++ or Python, plus familiarity with version control and CI/CD pipelines.
- Working knowledge of telemetry protocols (MAVLink, DDS, RTPS) and networking fundamentals (UDP/TCP, QoS) (a minimal telemetry sketch follows this listing).
- Hands-on experience with simulation tools (e.g., Gazebo, X-Plane, MATLAB/Simulink) and basic flight-dynamics principles.
- Strong troubleshooting skills across Linux/Windows OS, embedded hardware, and RF/antenna setups.

Preferred
- Exposure to airworthiness or safety standards (DO-178C, DO-330, DO-331).
- Experience integrating payload sensors (ISR, EO/IR, LIDAR) and autonomous mission workflows.
- Familiarity with Docker/Kubernetes for containerised GCS deployments.
- Prior participation in flight-test campaigns and post-mission data analytics.
- Knowledge of JavaFX, Qt, or React-based UIs for operator consoles.
- Certifications in drone pilot licensing or regulatory compliance (DGCA, FAA Part 107).

Skills: Simulation tools, Airworthiness Standards, Drone integration, Flight testing & Analysis, Ground Control System, Mission planning systems
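The must-have list names MAVLink as a telemetry protocol. As an illustration only (the connection string, message type, and message count are assumptions, e.g. for a SITL simulator broadcasting over UDP), a minimal pymavlink reader looks like this:

```python
# Minimal MAVLink telemetry sketch using pymavlink: wait for a heartbeat,
# then read a few ATTITUDE messages. Connection details are illustrative.
from pymavlink import mavutil

# Listen for a vehicle broadcasting MAVLink over UDP (e.g., a SITL simulator).
conn = mavutil.mavlink_connection("udpin:0.0.0.0:14550")
conn.wait_heartbeat()
print(f"Heartbeat from system {conn.target_system}, component {conn.target_component}")

# Print roll/pitch/yaw (radians) from a handful of attitude messages.
for _ in range(5):
    msg = conn.recv_match(type="ATTITUDE", blocking=True, timeout=10)
    if msg is None:
        break
    print(msg.roll, msg.pitch, msg.yaw)
```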
Posted 23 hours ago
6.0 years
60 - 65 Lacs
Dehradun, Uttarakhand, India
Remote
Posted 23 hours ago
6.0 years
60 - 65 Lacs
Thiruvananthapuram, Kerala, India
Remote
Posted 23 hours ago
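The Crop.Photo / Evolphin listing above repeatedly calls out vector search with engines such as Faiss for semantic retrieval over asset metadata. As a purely illustrative sketch (not part of the listing; the embedding dimension, the random stand-in vectors, and the exact index type are assumptions), building and querying a small cosine-similarity index in Python with Faiss might look roughly like this:

```python
# Minimal sketch: a small Faiss index queried for nearest neighbours.
# Dimensions, data, and paths are illustrative stand-ins only.
import numpy as np
import faiss  # pip install faiss-cpu

dim = 384                      # embedding size (model-dependent assumption)
rng = np.random.default_rng(0)

# Stand-in for real asset embeddings (e.g., LLM-generated summaries
# encoded by a sentence-embedding model).
asset_vectors = rng.random((10_000, dim), dtype=np.float32)

index = faiss.IndexFlatIP(dim)          # exact inner-product search
faiss.normalize_L2(asset_vectors)       # normalize so IP behaves like cosine similarity
index.add(asset_vectors)

query = rng.random((1, dim), dtype=np.float32)
faiss.normalize_L2(query)
scores, ids = index.search(query, 5)    # top-5 most similar assets
print(ids[0], scores[0])
```

A production system of the kind described would pair such an index with structured metadata filters to form the hybrid (structured + semantic) search pipelines the listing mentions.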
6.0 years
60 - 65 Lacs
Vijayawada, Andhra Pradesh, India
Remote
Experience : 6.00 + years Salary : INR 6000000-6500000 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: Crop.Photo) (*Note: This is a requirement for one of Uplers' client - Crop.Photo) What do you need for this opportunity? Must have skills required: MAM, App integration Crop.Photo is Looking for: Technical Lead for Evolphin AI-Driven MAM At Evolphin, we build powerful media asset management solutions used by some of the world’s largest broadcasters, creative agencies, and global brands. Our flagship platform, Zoom, helps teams manage high-volume media workflows—from ingest to archive—with precision, performance, and AI-powered search. We’re now entering a major modernization phase, and we’re looking for an exceptional Technical Lead to own and drive the next-generation database layer powering Evolphin Zoom. This is a rare opportunity to take a critical backend system that serves high-throughput media operations and evolve it to meet the scale, speed, and intelligence today’s content teams demand. What you’ll own Leading the re-architecture of Zoom’s database foundation with a focus on scalability, query performance, and vector-based search support Replacing or refactoring our current in-house object store and metadata database to a modern, high-performance elastic solution Collaborating closely with our core platform engineers and AI/search teams to ensure seamless integration and zero disruption to existing media workflows Designing an extensible system that supports object-style relationships across millions of assets, including LLM-generated digital asset summaries, time-coded video metadata, AI generated tags, and semantic vectors Driving end-to-end implementation: schema design, migration tooling, performance benchmarking, and production rollout—all with aggressive timelines Skills & Experience We Expect We’re looking for candidates with 7–10 years of hands-on engineering experience, including 3+ years in a technical leadership role. 
Your experience should span the following core areas: System Design & Architecture (3–4 yrs) Strong hands-on experience with the Java/JVM stack (GC tuning), Python in production environments Led system-level design for scalable, modular AWS microservices architectures Designed high-throughput, low-latency media pipelines capable of scaling to billions of media records Familiar with multitenant SaaS patterns, service decomposition, and elastic scale-out/in models Deep understanding of infrastructure observability, failure handling, and graceful degradation Database & Metadata Layer Design (3–5 yrs) Experience redesigning or implementing object-style metadata stores used in MAM/DAM systems Strong grasp of schema-less models for asset relationships, time-coded metadata, and versioned updates Practical experience with DynamoDB, Aurora, PostgreSQL, or similar high-scale databases Comfortable evaluating trade-offs between memory, query latency, and write throughput Semantic Search & Vectors (1–3 yrs) Implemented vector search using systems like Weaviate, Pinecone, Qdrant, or Faiss Able to design hybrid (structured + semantic) search pipelines for similarity and natural language use cases Experience tuning vector indexers for performance, memory footprint, and recall Familiar with the basics of embedding generation pipelines and how they are used for semantic search and similarity-based retrieval Worked with MLOps teams to deploy ML inference services (e.g., FastAPI/Docker + GPU-based EC2 or SageMaker endpoints) Understands the limitations of recognition models (e.g., OCR, face/object detection, logo recognition), even if not directly building them Media Asset Workflow (2–4 yrs) Deep familiarity with broadcast and OTT formats: MXF, IMF, DNxHD, ProRes, H.264, HEVC Understanding of proxy workflows in video post-production Experience with digital asset lifecycle: ingest, AI metadata enrichment, media transformation, S3 cloud archiving Hands-on experience working with time-coded metadata (e.g., subtitles, AI tags, shot changes) management in media archives Cloud-Native Architecture (AWS) (3–5 yrs) Strong hands-on experience with ECS, Fargate, Lambda, S3, DynamoDB, Aurora, SQS, EventBridge Experience building serverless or service-based compute models for elastic scaling Familiarity with managing multi-region deployments, failover, and IAM configuration Built cloud-native CI/CD deployment pipelines with event-driven microservices and queue-based workflows Frontend Collaboration & React App Integration (2–3 yrs) Worked closely with React-based frontend teams, especially on desktop-style web applications Familiar with component-based design systems, REST/GraphQL API integration, and optimizing media-heavy UI workflows Able to guide frontend teams on data modeling, caching, and efficient rendering of large asset libraries Experience with Electron for desktop apps How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. 
Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 23 hours ago
6.0 years
60 - 65 Lacs
Patna, Bihar, India
Remote
Experience : 6.00 + years Salary : INR 6000000-6500000 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: Crop.Photo) (*Note: This is a requirement for one of Uplers' client - Crop.Photo) What do you need for this opportunity? Must have skills required: MAM, App integration Crop.Photo is Looking for: Technical Lead for Evolphin AI-Driven MAM At Evolphin, we build powerful media asset management solutions used by some of the world’s largest broadcasters, creative agencies, and global brands. Our flagship platform, Zoom, helps teams manage high-volume media workflows—from ingest to archive—with precision, performance, and AI-powered search. We’re now entering a major modernization phase, and we’re looking for an exceptional Technical Lead to own and drive the next-generation database layer powering Evolphin Zoom. This is a rare opportunity to take a critical backend system that serves high-throughput media operations and evolve it to meet the scale, speed, and intelligence today’s content teams demand. What you’ll own Leading the re-architecture of Zoom’s database foundation with a focus on scalability, query performance, and vector-based search support Replacing or refactoring our current in-house object store and metadata database to a modern, high-performance elastic solution Collaborating closely with our core platform engineers and AI/search teams to ensure seamless integration and zero disruption to existing media workflows Designing an extensible system that supports object-style relationships across millions of assets, including LLM-generated digital asset summaries, time-coded video metadata, AI generated tags, and semantic vectors Driving end-to-end implementation: schema design, migration tooling, performance benchmarking, and production rollout—all with aggressive timelines Skills & Experience We Expect We’re looking for candidates with 7–10 years of hands-on engineering experience, including 3+ years in a technical leadership role. 
Your experience should span the following core areas: System Design & Architecture (3–4 yrs) Strong hands-on experience with the Java/JVM stack (GC tuning), Python in production environments Led system-level design for scalable, modular AWS microservices architectures Designed high-throughput, low-latency media pipelines capable of scaling to billions of media records Familiar with multitenant SaaS patterns, service decomposition, and elastic scale-out/in models Deep understanding of infrastructure observability, failure handling, and graceful degradation Database & Metadata Layer Design (3–5 yrs) Experience redesigning or implementing object-style metadata stores used in MAM/DAM systems Strong grasp of schema-less models for asset relationships, time-coded metadata, and versioned updates Practical experience with DynamoDB, Aurora, PostgreSQL, or similar high-scale databases Comfortable evaluating trade-offs between memory, query latency, and write throughput Semantic Search & Vectors (1–3 yrs) Implemented vector search using systems like Weaviate, Pinecone, Qdrant, or Faiss Able to design hybrid (structured + semantic) search pipelines for similarity and natural language use cases Experience tuning vector indexers for performance, memory footprint, and recall Familiar with the basics of embedding generation pipelines and how they are used for semantic search and similarity-based retrieval Worked with MLOps teams to deploy ML inference services (e.g., FastAPI/Docker + GPU-based EC2 or SageMaker endpoints) Understands the limitations of recognition models (e.g., OCR, face/object detection, logo recognition), even if not directly building them Media Asset Workflow (2–4 yrs) Deep familiarity with broadcast and OTT formats: MXF, IMF, DNxHD, ProRes, H.264, HEVC Understanding of proxy workflows in video post-production Experience with digital asset lifecycle: ingest, AI metadata enrichment, media transformation, S3 cloud archiving Hands-on experience working with time-coded metadata (e.g., subtitles, AI tags, shot changes) management in media archives Cloud-Native Architecture (AWS) (3–5 yrs) Strong hands-on experience with ECS, Fargate, Lambda, S3, DynamoDB, Aurora, SQS, EventBridge Experience building serverless or service-based compute models for elastic scaling Familiarity with managing multi-region deployments, failover, and IAM configuration Built cloud-native CI/CD deployment pipelines with event-driven microservices and queue-based workflows Frontend Collaboration & React App Integration (2–3 yrs) Worked closely with React-based frontend teams, especially on desktop-style web applications Familiar with component-based design systems, REST/GraphQL API integration, and optimizing media-heavy UI workflows Able to guide frontend teams on data modeling, caching, and efficient rendering of large asset libraries Experience with Electron for desktop apps How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. 
Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 23 hours ago
3.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Quidich Innovation Labs is a global company headquartered in Mumbai, India, that pioneers customized technology solutions for sports broadcasts. From the outset, we have believed in the power of the story that sport tells as a tool to bring people together, and that this story best reaches fans through the way it is broadcast. Building on this thinking, we have created various technology tools over the past five years and deployed them at tournaments such as the Indian Premier League, ICC Men’s T20 World Cup, ICC Women’s World Cup, and Men's FIH Hockey World Cup, to name a few.
Role: As a Software Developer, you will play a pivotal cross-functional role in our product team. Your main responsibility will be to optimize and maintain various components of our platform using C++. You’ll have the opportunity to work on diverse tasks such as: optimizing back-end systems in C++; implementing and optimizing computer vision algorithms; modifying and extending functionalities within video card SDKs; building and maintaining robust software development pipelines; collaborating on general software development tasks like code review, testing, and debugging; and continuously learning and solving complex problems. This role is ideal for developers who are eager to grow, tackle complex technical challenges, and thrive in a collaborative environment.
Responsibilities: Designing, optimizing, and maintaining backend components and modules using C++. Building and optimizing development pipelines to ensure seamless integration and deployment. Handling large volumes of data in real time. Collaborating with cross-functional teams to implement solutions across various technical domains. Working on integrating and optimizing third-party SDKs, including those related to video processing and computer vision. Contributing to the full software development lifecycle, including requirement gathering, architecture, testing, and deployment. Debugging, troubleshooting, and optimizing performance-critical applications. Adhering to best practices in code quality, version control, and software engineering standards. Proactively learning new technologies and frameworks to meet project needs.
Qualifications, Skills, and Competencies: Strong proficiency in C++ with 3-5 years of professional experience. Experience with backend development, SDKs, or system-level programming. Hands-on experience building and maintaining software pipelines for CI/CD (Continuous Integration/Continuous Delivery). Experience working with SQL/NoSQL databases. Familiarity with computer vision algorithms and video processing is a plus. Excellent problem-solving skills with the ability to learn and adapt to new technologies quickly. Solid understanding of general software development practices, including version control (e.g., Git), testing, and debugging. Ability to work both independently and as part of a team in a dynamic environment. Knowledge of other programming languages (e.g., Python, Java) is an advantage. CUDA experience is an advantage. Experience with video card SDKs or computer vision libraries like OpenCV. Familiarity with Agile/Scrum methodologies. Strong communication and collaboration skills.
Location: Mumbai
Reporting To: Product Manager
Joining Date: Immediate to 30 days
Interested candidates, please send your CV to careers@quidich.com
Posted 23 hours ago
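The Quidich role above centres on C++ optimization and computer vision for live sports video. The position itself is C++-focused, but as a rough, hedged illustration of the kind of frame-processing loop involved, here is a minimal Python/OpenCV sketch; the input file name and the edge-detection step are placeholders, not anything specified by the listing:

```python
# Illustrative sketch only: a simple real-time-style frame-processing loop
# with OpenCV. "broadcast_feed.mp4" is a hypothetical input file.
import cv2

cap = cv2.VideoCapture("broadcast_feed.mp4")
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)        # placeholder for a real CV algorithm
    cv2.imshow("edges", edges)
    if cv2.waitKey(1) & 0xFF == ord("q"):    # press 'q' to stop
        break
cap.release()
cv2.destroyAllWindows()
```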
0 years
0 Lacs
Belgaum, Karnataka, India
On-site
About The Opportunity
Operating at the cutting edge of Aerospace & Unmanned Aerial Systems (UAS), our Mobility Solutions division engineers next-generation ground-control hardware and software that connect autonomous aircraft to operators across complex environments. From mission-planning GUIs to secure telemetry links, we tackle real-time challenges where reliability, safety, and intuitive UX converge.
Role & Responsibilities
Co-develop ground-control software and workstation hardware for mission planning, telemetry monitoring, and command-and-control of multi-rotor and fixed-wing UAV fleets. Integrate GCS with avionics, nav-systems, and SATCOM/RF links, collaborating closely with flight-control, payload, and networking teams to ensure seamless data flow. Write, debug, and unit-test code in C/C++, Python, or Java; contribute to modular architectures that scale from desktop to ruggedized field stations. Configure, calibrate, and troubleshoot ground stations for lab, field-test, and customer demos, documenting best-practice deployment playbooks. Author and execute verification plans (SIL/HIL, regression, environmental) to validate performance, safety, and airworthiness compliance under diverse conditions. Analyse flight-test data to uncover issues, drive root-cause analysis, and recommend design or process improvements.
Skills & Qualifications
Must-Have: Bachelor’s degree in Computer Science, Aerospace, Electronics, Robotics, or a related discipline. 3-6 years' experience building or testing ground-control stations, mission-planning software, or real-time operator consoles for UAVs or similar robotics. Proficiency in C/C++ or Python plus familiarity with version control and CI/CD pipelines. Working knowledge of telemetry protocols (MAVLink, DDS, RTPS) and networking fundamentals (UDP/TCP, QoS). Hands-on experience with simulation tools (e.g., Gazebo, X-Plane, MATLAB/Simulink) and basic flight-dynamics principles. Strong troubleshooting skills across Linux/Windows OS, embedded hardware, and RF/antenna setups.
Preferred: Exposure to airworthiness or safety standards (DO-178C, DO-330, DO-331). Experience integrating payload sensors (ISR, EO/IR, LIDAR) and autonomous mission workflows. Familiarity with Docker/Kubernetes for containerised GCS deployments. Prior participation in flight-test campaigns and post-mission data analytics. Knowledge of JavaFX, Qt, or React-based UIs for operator consoles. Certifications in drone pilot licensing or regulatory compliance (DGCA, FAA Part 107).
Skills: Simulation tools, Airworthiness Standards, Drone integration, Flight testing & analysis, Ground Control System, Mission planning systems
Posted 23 hours ago
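The ground-control-station listing above asks for working knowledge of telemetry protocols such as MAVLink. As a small, hedged illustration (the UDP endpoint is an assumption; in practice it would come from a SITL simulator or a telemetry radio), reading position telemetry with the pymavlink library looks roughly like this:

```python
# Minimal sketch of reading MAVLink telemetry in Python.
# The connection string is an assumed local SITL/GCS forwarding port.
from pymavlink import mavutil  # pip install pymavlink

conn = mavutil.mavlink_connection("udp:127.0.0.1:14550")
conn.wait_heartbeat()          # block until the vehicle announces itself
print("Heartbeat from system", conn.target_system)

for _ in range(10):
    msg = conn.recv_match(type="GLOBAL_POSITION_INT", blocking=True, timeout=5)
    if msg is None:            # no telemetry within the timeout
        break
    # lat/lon arrive as degrees * 1e7, altitude in millimetres
    print(f"lat={msg.lat / 1e7:.6f} lon={msg.lon / 1e7:.6f} alt={msg.alt / 1000:.1f} m")
```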
0.0 - 2.0 years
3 - 3 Lacs
Hyderabad
Work from Office
Join our AI team as a Generative AI Engineer! Work on LLMs, prompt engineering, and model fine-tuning. Ideal for freshers or those with up to 2 years of experience in Python, ML, and deep learning. Passion for AI is a must!
Posted 23 hours ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Overview: TekWissen is a global workforce management provider that offers strategic talent solutions to our clients throughout India and worldwide. Our client is a company operating a marketplace for consumers, sellers, and content creators. It offers merchandise and content purchased for resale from vendors and those offered by third-party sellers.
Job Title: Business Intelligence Engineer II
Location: Pune
Job Type: Contract
Work Type: Onsite
Job Description:
Key Responsibilities:
Data Engineering on AWS: Design and implement scalable and secure data pipelines using AWS services such as the client's S3, AWS Glue, the client's Redshift, and the client's Athena. Ensure high-performance, reliable, and fault-tolerant data architectures.
Data Modeling and Transformation: Develop and optimize dimensional data models to support various business intelligence and analytics use cases. Perform complex data transformations and enrichment using tools like AWS Glue, AWS Lambda, and Apache Spark.
Business Intelligence and Reporting: Collaborate with stakeholders to understand reporting and analytics requirements. Build interactive dashboards and reports using visualization tools like the client's QuickSight.
Data Governance and Quality: Implement data quality checks and monitoring processes to ensure the integrity and reliability of data. Define and enforce data policies, standards, and procedures.
Cloud Infrastructure Management: Manage and maintain the AWS infrastructure required for the data and analytics platform. Optimize performance, cost, and security of the underlying cloud resources.
Collaboration and Knowledge Sharing: Work closely with cross-functional teams, including data analysts, data scientists, and business users, to identify opportunities for data-driven insights. Share knowledge, best practices, and train other team members.
Leadership Principles: Ownership; Deliver Results; Insist on the Highest Standards.
Mandatory Requirements: 3+ years of experience as a Business Intelligence Engineer or Data Engineer, with a strong focus on AWS cloud technologies. Proficient in designing and implementing data pipelines using AWS services such as S3, Glue, Redshift, Athena, and Lambda. Expertise in data modeling, dimensional modeling, and data transformation techniques. Experience in building and deploying business intelligence solutions, including the use of tools like the client's QuickSight and Tableau. Strong SQL and Python programming skills for data processing and analysis. Understanding of cloud architecture patterns, security best practices, and cost optimization on AWS. Excellent communication and collaboration skills to work effectively with cross-functional teams.
Preferred skills: Hands-on experience with Apache Spark, Airflow, or other big data technologies. Knowledge of AWS DevOps practices and tools, such as AWS CodePipeline, AWS CodeBuild, and AWS CloudFormation. Familiarity with agile software development methodologies. AWS Certification (e.g., AWS Certified Data Analytics - Specialty).
Education or Certification: Any Graduate
TekWissen® Group is an equal opportunity employer supporting workforce diversity.
Posted 23 hours ago
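The Business Intelligence Engineer listing above revolves around building data pipelines with AWS Glue, Apache Spark, and S3. As an illustrative sketch only (bucket paths, column names, and the aggregation are assumptions, not details from the listing), a typical PySpark transformation of the kind such pipelines perform might look like this:

```python
# Minimal PySpark sketch: read raw orders, filter, derive a date, and roll up
# daily revenue. Paths and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-daily-rollup").getOrCreate()

orders = spark.read.parquet("s3://example-bucket/raw/orders/")   # hypothetical path

daily_revenue = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .withColumn("order_date", F.to_date("order_timestamp"))
    .groupBy("order_date", "marketplace")
    .agg(
        F.sum("order_amount").alias("revenue"),
        F.countDistinct("customer_id").alias("unique_customers"),
    )
)

daily_revenue.write.mode("overwrite").parquet("s3://example-bucket/curated/daily_revenue/")
```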
0.0 - 1.0 years
0 Lacs
Thiruvananthapuram, Kerala
On-site
Job Requisition Document Job Title: Software Engineer – React & Ruby (Medical Platform) Location: Thiruvananthapuram, Kerala About Us:Our success is driven by our ability to consistently deliver world-class, high-quality talent, particularly in the areas of precision engineering, assembly line operations, and other skilled manpower across diverse industrial domains. Among our esteemed clients is a listed Japanese company that is set to begin its operations in Technopark, Thiruvananthapuram, further reinforcing our standing as a premier recruitment partner in the region. Job Summary: We are seeking a skilled and motivated Software Engineer to join our dynamic multinational team. This role focuses on the development and enhancement of a sophisticated medical-related platform. The ideal candidate will have strong experience in React and Ruby, with a passion for building high-quality, impactful software solutions in the healthcare domain. Responsibilities: ● Design, develop, test, deploy, and maintain robust and scalable web applications using React.js and Ruby on Rails. ● Collaborate effectively with cross-functional, multinational teams including product managers, designers, and other engineers to deliver high-quality software solutions. ● Develop and integrate user-facing elements with server-side logic. ● Build reusable components and front-end libraries for future use (React). ● Develop and maintain efficient, reusable, and reliable Ruby code. ● Ensure the technical feasibility of UI/UX designs. ● Optimize applications for maximum speed, scalability, and responsiveness. ● Implement security and data protection measures. ● Participate in code reviews to maintain code quality and share knowledge. ● Troubleshoot, debug, and upgrade existing software, ensuring platform stability and performance. ● Integrate data storage solutions, including databases. ● Contribute to all phases of the software development lifecycle, from concept and design to testing and deployment. ● Stay updated with emerging technologies and industry best practices. Mandatory Technical Skills, Experience: 1 to 5 Years relevant experience ● Proven experience as a Software Engineer or similar role. ● Strong proficiency in JavaScript, including DOM manipulation and the JavaScript object model. ● Thorough understanding of React.js and its core principles (e.g., components, state, props, hooks). ● Experience with popular React.js workflows (such as Flux or Redux). ● Strong proficiency in Ruby and the Ruby on Rails framework. ● Solid understanding of object-oriented programming. ● Experience with front-end technologies such as HTML5, CSS3, and responsive design. ● Familiarity with RESTful APIs and web services. ● Experience with database technologies (e.g., PostgreSQL, MySQL, MongoDB). ● Proficient understanding of code versioning tools, such as Git. ● Familiarity with modern front-end build pipelines and tools. ● Experience with automated testing suites and TDD/BDD principles. ● Understanding of agile development methodologies. Additional (Nice to have) Skills: ● Experience with Swift programming. ● Experience working on medical-related platforms or within the healthcare industry (familiarity with standards like HIPAA, FHIR is a plus). ● Familiarity with cloud platforms (e.g., AWS, Azure, Google Cloud). ● Knowledge of other back-end languages (e.g., Python, Node.js). ● Experience with containerization technologies like Docker and orchestration tools like Kubernetes. ● Understanding of CI/CD pipelines. 
Behavioral Skills (1st 3 skills below are mandatory only for Senior role): ● Leadership Potential: Demonstrated ability or strong potential to guide and support a small team, fostering a collaborative and productive environment. This includes providing guidance, mentoring junior team members and delegating tasks effectively. ● Communication Excellence: Exceptional verbal and written communication skills, with the ability to clearly and concisely convey technical information to both technical and non- technical audiences, including clients. ● Client Relationship Management: Ability to build and maintain positive relationships with clients, understand their needs and expectations and proactively address any concerns. ● Problem-Solving and Analytical Thinking: Strong analytical and problem-solving skills with the ability to identify root causes of issues, evaluate different solutions and implement effective resolutions, both independently and within a team. ● Adaptability and Flexibility: Ability to adapt to changing project requirements, client demands and work environments. ● Collaboration and Teamwork: Proven ability to work effectively within a team, contributing positively to team goals, sharing knowledge and supporting colleagues. ● Ownership and Accountability: Takes ownership of assigned tasks and responsibilities, demonstrates a strong sense of accountability for delivering high-quality work within deadlines. ● Proactiveness and Initiative: Demonstrates a proactive approach to work, identifying potential issues or opportunities for improvement and taking initiative to address them. ● Professionalism and Integrity: Maintains a high level of professionalism, ethical conduct and integrity in all interactions, both internally and with clients. ● Time Management and Organization: Excellent time management and organizational skills, with the ability to prioritize tasks, manage workload effectively and meet deadlines in a fast-paced environment. Education: Bachelor's degree in Computer Science/Electronics/Electrical Engineering. Salary: Best in the Market Job Type: Permanent Location Type: In-person Ability to commute/relocate: Thiruvananthapuram, Kerala: Reliably commute or planning to relocate before starting work (Required) Experience: 1. Software Engineer – React & Ruby (Medical Platform): 1 year (Required) Work Location: In person
Posted 23 hours ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Role: Sr. Software Engineer (Embedded System Testing)
Notice Period (Max): Immediate to 15 days
Grade: 9-12 years
Location: Pune
Job Type: Permanent
Bachelor's degree in Electrical/Electronic Engineering; master's degree preferred. Proficient in both manual and automated testing frameworks, particularly pytest, with strong Python programming skills. Knowledge of embedded systems. Hands-on experience debugging firmware using tools like oscilloscopes, logic analyzers, or serial debuggers. Strong understanding of the Modbus communication protocol. Excellent verbal and written communication skills, with the ability to collaborate effectively across global teams. Highly motivated, quick learner, and capable of working independently.
Posted 23 hours ago
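The embedded-testing role above combines pytest automation with Modbus-based firmware validation. Below is a minimal, hypothetical pytest sketch of that style of test; DeviceUnderTest and its register map are invented stand-ins for whatever the real rig exposes, not part of the job description:

```python
# Minimal pytest sketch: validating Modbus-style holding-register reads.
# DeviceUnderTest is a hypothetical stub; a real fixture would wrap the
# hardware or a simulator behind the same interface.
import pytest


class DeviceUnderTest:
    """Hypothetical stand-in for the firmware/transport layer."""

    def __init__(self):
        # e.g. register 0x0001 = voltage (V), 0x0002 = frequency (Hz)
        self._holding_registers = {0x0001: 230, 0x0002: 50}

    def read_holding_register(self, address: int) -> int:
        return self._holding_registers[address]


@pytest.fixture
def dut():
    return DeviceUnderTest()


@pytest.mark.parametrize("address, expected", [(0x0001, 230), (0x0002, 50)])
def test_holding_register_values(dut, address, expected):
    assert dut.read_holding_register(address) == expected
```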
100.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Entity: Technology
Job Family Group: IT&S Group
Job Description: You will work with a multi-disciplinary squad and will play a significant role in the design and upkeep of our customer-focused business solutions and integrations.
Let me tell you about the role
As a Senior Solution Architect, you will be responsible for connecting all the digital teams and the consumers and procurers of IT, to build a coordinated, flexible, effective IT architecture for bp's oil & gas application estate. You will also work with other data, integration and platform architects, who specialize in the respective areas, to build fit-for-purpose and multifaceted architecture.
What you will deliver
Architecture: You rigorously develop solution architectures, seeking practical solutions that optimize and re-use capabilities. You will be responsible for building technical designs of services or applications and will care passionately about the integrity of the IT capabilities you develop.
Technology: You are an excellent technologist and have a passion for understanding and learning. You will add to digital transformation initiatives from an architectural perspective, facilitating the delivery of solutions. You will bring good hands-on skills in key technologies, and an ability to rapidly assess new technologies with a commercial approach.
Data engineering and analytics: You will have the ability to draw insights from information and knowledge, spanning data analytics and data science, including business intelligence, machine learning pipelines and modelling, and other sophisticated analytics. Awareness of information modelling, from data assets to their implementation in data pipelines, and the associated data processing and storage techniques.
Safety and compliance: The safety of our people and customers is our highest priority. You will advocate and help ensure our architectures, designs and processes enhance a culture of operational safety and improve our digital security.
Collaboration: You will play an integral role in establishing the team's abilities while demonstrating your leadership values through delegation, motivation and trust. You will not just lead, but "do". You will build positive relationships across the business and Digital and advise and influence leaders on technology. You will act as a technology mentor within Digital teams and inspire people to engage with technology as a driver of change. You will understand the long-term needs of the solution you are developing, and enable delivery by building a rapport with team members both inside and outside of bp.
What you will need to be successful (experience and qualifications)
Technical Skills
A Bachelor's (or higher) degree or equivalent work experience. A confirmed background in architecture with real-world experience of architecting. Deep-seated functional knowledge of key technology sets, e.g. application, infrastructure, cloud and data. Be part of a tight-knit delivery team; you accomplish outstanding project outcomes in a respectful and supportive culture. A proven grasp of architecture development and design thinking in an agile environment; you adapt delivery techniques to drive outstanding project delivery. Also capable in information architecture and data engineering / management processes, including data governance / modelling techniques and tools, processing methods and technologies.
Capable in data analytics and data science architectures, including business intelligence, machine learning pipelines and modelling, and associated technologies.
Desirable Skills
Systems Design, Capacity Management, Network Design, Service Acceptance, Systems Development Management. Programming Languages – Python, Scala, Spark variants. Business Modelling, Business Risk Management, User Experience Analysis, Emerging Technology Monitoring, IT Strategy and Planning.
About bp
Our purpose is to deliver energy to the world, today and tomorrow. For over 100 years, bp has focused on discovering, developing, and producing oil and gas in the nations where we operate. We are one of the few companies globally that can provide governments and customers with an integrated energy offering. Delivering our strategy sustainably is fundamental to achieving our ambition to be a net zero company by 2050 or sooner!
We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform crucial job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
Additional Information
We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status. Even though the job is advertised as full time, please contact the hiring manager or the recruiter as flexible working arrangements may be considered.
Travel Requirement: Negligible travel should be expected with this role.
Relocation Assistance: This role is eligible for relocation within country.
Remote Type: This position is a hybrid of office/remote working.
Legal Disclaimer: We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, socioeconomic status, neurodiversity/neurocognitive functioning, veteran status or disability status. Individuals with an accessibility need may request an adjustment/accommodation related to bp’s recruiting process (e.g., accessing the job application, completing required assessments, participating in telephone screenings or interviews, etc.). If you would like to request an adjustment/accommodation related to the recruitment process, please contact us. If you are selected for a position and depending upon your role, your employment may be contingent upon adherence to local policy. This may include pre-placement drug screening, medical review of physical fitness for the role, and background checks.
Posted 23 hours ago
3.0 - 6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
We are seeking a skilled Systems Engineer specializing in Azure and DevOps to join our team. As a Systems Engineer focusing on Azure and DevOps, you will design, build, and maintain scalable infrastructure while optimizing CI/CD pipelines. You will work closely with development teams to automate workflows and ensure system reliability. If you're ready to make an impact, we encourage you to apply.
Responsibilities
Design, build, and maintain scalable Azure infrastructure using Infrastructure as Code (IaC) tools. Create, manage, and optimize CI/CD pipelines using GitLab CI/CD. Automate configuration management and provisioning tasks using Ansible. Collaborate with development teams to automate build, test, and deployment workflows. Monitor systems for reliability, availability, and performance using tools like Azure Monitor, Prometheus, or Grafana. Implement security best practices in CI/CD pipelines and Azure infrastructure.
Requirements
3-6 years of experience in DevOps roles with a strong focus on Azure cloud services. Proven experience with GitLab CI/CD for pipeline creation, management, and automation. Hands-on expertise with Ansible for configuration management and orchestration. Experience with containerization using Docker and orchestration with Kubernetes. Strong scripting skills in Bash, PowerShell, or Python. Good understanding of Azure services such as VMs, Networking, Key Vault, Storage, Azure AD, etc. Experience with Infrastructure as Code tools like Terraform.
Posted 23 hours ago
3.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
Project Role: BI Architect
Project Role Description: Build and design scalable and open Business Intelligence (BI) architecture to provide cross-enterprise visibility and agility for business innovation. Create industry and function data models used to build reports and dashboards. Ensure the architecture and interface seamlessly integrate with Accenture's Data and AI framework, meeting client needs.
Must-have skills: SAS Base & Macros
Good-to-have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years of full-time education
Summary: As a SAS Base & Macros specialist, you will be responsible for building and designing scalable and open Business Intelligence (BI) architecture to provide cross-enterprise visibility and agility for business innovation. You will create industry and function data models used to build reports and dashboards, ensuring seamless integration with Accenture's Data and AI framework to meet client needs.
Roles & Responsibilities:
1. Data Engineer to lead or drive the migration of legacy SAS data preparation jobs to a modern Python-based data engineering framework.
2. Should have deep experience in both SAS and Python, strong knowledge of data transformation workflows, and a solid understanding of database systems and ETL best practices.
3. Should analyze existing SAS data preparation and data feed scripts and workflows to identify logic and dependencies.
4. Should translate and re-engineer SAS jobs into scalable, efficient Python-based data pipelines.
5. Collaborate with data analysts, scientists, and engineers to validate and test converted workflows.
6. Optimize performance of new Python workflows and ensure data quality and consistency.
7. Document migration processes, coding standards, and pipeline configurations.
8. Integrate new pipelines with Google Cloud Platform as required.
9. Provide guidance and support for testing, validation, and production deployment.
Professional & Technical Skills:
- Must-Have Skills: Proficiency in SAS Base & Macros
- Strong understanding of statistical analysis and machine learning algorithms
- Experience with data visualization tools such as Tableau or Power BI
- Hands-on experience implementing various machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity
Additional Information:
- The candidate should have 8+ years of experience, with a minimum of 3 years in SAS or Python data engineering
Posted 23 hours ago
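The SAS-to-Python migration work described in the listing above typically means re-expressing SAS data steps and PROC summaries as Python dataframe code. As a hedged illustration (the file name, columns, and business rules are assumptions, not details from the listing), a pandas equivalent of a simple filter / derive / summarise job could look like this:

```python
# Minimal pandas sketch of a typical SAS data-prep job rewritten in Python:
# filter rows, derive a column, and summarise by group. All names are illustrative.
import pandas as pd

claims = pd.read_csv("claims_extract.csv")              # hypothetical input feed

prepared = (
    claims[claims["claim_status"] == "APPROVED"]        # WHERE clause equivalent
    .assign(net_amount=lambda d: d["gross_amount"] - d["deductible"])  # derived variable
)

summary = (
    prepared.groupby("region", as_index=False)
    .agg(total_net=("net_amount", "sum"), claim_count=("claim_id", "count"))
)  # roughly what PROC SUMMARY / PROC MEANS by region would produce

print(summary.head())
```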
Python has become one of the most popular programming languages in India, with a high demand for skilled professionals across various industries. Job seekers in India have a plethora of opportunities in the field of Python development. Let's delve into the key aspects of the Python job market in India:
The average salary range for Python professionals in India varies based on experience levels. Entry-level positions can expect a salary between INR 3-6 lakhs per annum, while experienced professionals can earn between INR 8-20 lakhs per annum.
In the field of Python development, a typical career path may include roles such as Junior Developer, Developer, Senior Developer, Team Lead, and eventually progressing to roles like Tech Lead or Architect.
In addition to Python proficiency, employers often expect professionals to have skills in areas such as:
- Data Structures and Algorithms
- Object-Oriented Programming
- Web Development frameworks (e.g., Django, Flask)
- Database management (e.g., SQL, NoSQL)
- Version control systems (e.g., Git)
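To make the web-framework item above concrete, here is a minimal Flask sketch; the endpoint and payload are illustrative only, not tied to any particular employer's stack:

```python
# Minimal Flask sketch: a single JSON endpoint. Names and data are illustrative.
from flask import Flask, jsonify

app = Flask(__name__)


@app.route("/jobs/<int:job_id>")
def get_job(job_id: int):
    # A real service would look this up in a database.
    return jsonify({"id": job_id, "title": "Python Developer", "location": "Remote"})


if __name__ == "__main__":
    app.run(debug=True)  # development server only
```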
Typical Python interview questions include:
- Explain the difference between the __str__ and __repr__ methods in Python. (medium)
- What is the purpose of the __init__ method in Python? (basic)
- What is the difference between the append() and extend() methods in Python lists? (basic)
- What is the significance of the __name__ variable in Python? (medium)
- What is the purpose of the pass statement in Python? (basic)
As you explore Python job opportunities in India, remember to brush up on your skills, prepare for interviews diligently, and apply confidently. The demand for Python professionals is on the rise, and this could be your stepping stone to a rewarding career in the tech industry. Good luck!
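For quick revision of two of the questions listed above, here is a small, self-contained Python example contrasting __str__ with __repr__ and append() with extend(); the class and values are illustrative:

```python
# __str__ vs __repr__, and list.append() vs list.extend().
class Candidate:
    def __init__(self, name, skills):
        self.name = name
        self.skills = skills

    def __str__(self):          # friendly form, used by print()
        return f"{self.name} ({len(self.skills)} skills)"

    def __repr__(self):         # unambiguous form, used in the REPL / debugging
        return f"Candidate(name={self.name!r}, skills={self.skills!r})"


c = Candidate("Asha", ["Python", "Django"])
print(str(c))    # Asha (2 skills)
print(repr(c))   # Candidate(name='Asha', skills=['Python', 'Django'])

nums = [1, 2]
nums.append([3, 4])   # appends the whole list as one element -> [1, 2, [3, 4]]
print(nums)

nums = [1, 2]
nums.extend([3, 4])   # adds each element individually -> [1, 2, 3, 4]
print(nums)
```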