5.0 - 10.0 years
13 - 23 Lacs
Hyderabad
Work from Office
Position Overview
We are seeking a motivated and experienced Senior Backend Engineer with approximately 5 years of professional experience to join our team. The ideal candidate will not only excel in building and improving software solutions but also possess strong communication skills to interact effectively with US-based clients. Key responsibilities and expectations include:
- Proactive Problem Solver: Self-motivated to build and improve solutions to software problems, always looking for innovative ways to enhance our systems.
- Continuous Learner: Eager to learn new technologies, methodologies, and languages, and open to experimenting and learning from failures.
- Effective Communicator: Capable of clearly and concisely communicating complex technical, architectural, and organizational issues to both technical and non-technical stakeholders, and proposing thorough, iterative solutions.
- Experienced Professional: Approximately 5 years of relevant experience in backend development, demonstrating a strong track record of delivering high-quality software.
- Test-Driven Development Advocate: Committed to practicing test-driven development (TDD) to ensure robust, maintainable, and scalable code.
- Client-Facing Skills: Able to communicate effectively with US clients, understanding their needs and translating them into technical requirements.
- Efficiency Enhancer: Recognize impediments to our efficiency as a team ("technical debt"), then propose and implement solutions.
- Feature Developer: Develop features and improvements to ongoing projects in a secure, well-tested, and performant way.
- Problem Solver: Solve technical problems of moderate scope and complexity.
- Independent Contributor: Confidently ship small features and improvements with minimal guidance and support from other team members; collaborate with the team on larger projects.
If you are a dedicated professional who thrives in a dynamic environment and meets the above qualifications, we would love to hear from you.
Technical Stack and Skills:
- Languages: TypeScript, Node.js
- Databases: Postgres, MySQL, or similar; MongoDB or similar
- Containerization: Docker and Docker Compose
- CI/CD: Jenkins, GitHub Actions, or similar
- Testing: unit-testing frameworks such as Jest or similar
- Methodologies: Agile
- Cloud: AWS experience (preferably serverless architecture)
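The stack above centers on test-driven development with a unit-testing framework such as Jest. The same red-green workflow looks like this, sketched in Python's built-in unittest for brevity (the `apply_discount` function and its rules are invented for illustration, not part of any posting):

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test: reduce price by percent."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    # In TDD these tests are written first and fail until the code exists.
    def test_basic_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_rejects_invalid_percent(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

Run with `python -m unittest` once saved to a module; the equivalent in the listed stack would be a Jest spec executed by `npx jest`.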
Posted 1 day ago
8.0 - 10.0 years
11 - 18 Lacs
Bengaluru
Work from Office
Who You Are:
You are a visionary leader with a robust technical background in Microsoft .NET and related technologies, eager to shape the future of fintech solutions. With a harmonious blend of project management expertise and profound technical knowledge, you stand ready to guide teams, mentor emerging talent, and spearhead innovative projects from their inception through to their triumphant realization.
Your Role:
- Lead and Innovate: Direct the planning, execution, and delivery of complex Microsoft .NET projects, guaranteeing high-quality results within budget, scope, and timeline constraints.
- Foster Growth: Create an engaging and cohesive work environment, mentoring team members to unlock their full potential.
- Mitigate Risks: Proactively identify project risks and formulate effective mitigation strategies, maintaining transparent communication with all stakeholders.
- Ensure Excellence: Uphold process adherence by leveraging industry best practices and standards in software development and delivery.
- Develop Talent: Supervise, coach, and cultivate your team, ensuring alignment with performance appraisal processes and fostering professional growth.
- Embrace Technology: Drive strategic leadership in the adoption of new technologies, especially AI, to innovate and disrupt within the financial sector.
Desired/Recommended Technical Competencies & Skills:
- .NET Core Mastery: Strong hands-on expertise in .NET Core, showcasing deep knowledge and experience in building robust, scalable applications.
- Software Development Best Practices: Proficient in writing clean, maintainable code, with extensive experience in ORM, JSON, and multi-threading, ensuring high performance and scalability.
- API Design and Development: Skilled in developing both RESTful and GraphQL APIs, understanding the nuances of creating highly accessible and efficient web services.
- Microservices and Event-Driven Architecture: Experienced with designing and implementing microservices architectures, utilizing event-driven patterns for dynamic and responsive applications.
- Containerization and Orchestration: Proficient in containerization technologies like Docker and orchestration with Kubernetes, including service discovery and service mesh, to manage complex, scalable microservices landscapes.
- Cloud Platforms: Expertise in cloud environments such as AWS/Azure, leveraging cloud services for enhanced application performance, scalability, and reliability.
- Database Management: Expertise with RDBMS and NoSQL databases, understanding their application within .NET environments for optimal data storage, retrieval, and manipulation strategies.
- DevOps Practices: Comprehensive understanding of DevOps practices, including continuous integration and continuous delivery (CI/CD) with tools like Jenkins, and version control with Git, integrated with Jira for project management and Maven for dependency management.
- Security Practices: Awareness of security best practices and common vulnerabilities specific to .NET development, implementing secure coding techniques to protect data and applications.
- Monitoring and Logging: Adept at using tools for application monitoring, logging, and distributed tracing, ensuring high availability and identifying issues proactively.
- Leadership and Communication: Exceptional leadership, communication, and project management abilities to lead diverse and geographically dispersed teams.
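The event-driven patterns called out above reduce, at their core, to a publish/subscribe dispatcher that keeps services decoupled. A toy sketch of that pattern (shown in Python rather than .NET purely for illustration; the event name and handlers are invented):

```python
from collections import defaultdict
from typing import Any, Callable

class EventBus:
    """Minimal in-process pub/sub: services react to events, not to each other."""
    def __init__(self) -> None:
        self._handlers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, event: str, handler: Callable[[Any], None]) -> None:
        self._handlers[event].append(handler)

    def publish(self, event: str, payload: Any) -> None:
        # Each subscriber runs independently; none knows about the others.
        for handler in self._handlers[event]:
            handler(payload)

# Hypothetical usage: two services react to the same domain event.
bus = EventBus()
audit_log: list[str] = []
bus.subscribe("payment.settled", lambda p: audit_log.append(f"audited {p['id']}"))
bus.subscribe("payment.settled", lambda p: audit_log.append(f"notified {p['id']}"))
bus.publish("payment.settled", {"id": "tx-42"})
```

A production system would put a broker (e.g. a message queue) behind `publish`, but the decoupling idea is the same.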
Posted 1 day ago
5.0 - 7.0 years
15 - 17 Lacs
Hyderabad
Work from Office
Who You Are:
Technical Expertise:
- Proficient in .NET Core with 5+ years of hands-on expertise, demonstrating a strong foundation in developing robust, scalable applications using .NET technologies.
- Specializes in:
  - .NET Core (expert level): Deep knowledge in building and maintaining high-performance, server-side applications with .NET Core.
  - Microservices (advanced level): Experienced in designing, developing, and implementing microservices architectures, understanding the principles of autonomy, granularity, and independent scaling.
  - RESTful/GraphQL APIs (advanced level): Proficient in creating and managing APIs, ensuring they are secure, scalable, and performant.
  - Cloud environments such as AWS/Azure (intermediate level): Solid experience in leveraging cloud services for deploying, managing, and scaling applications.
- Skilled at writing clean, scalable code that drives innovation, emphasizing maintainability and best practices in software development.
- Experience includes working with:
  - ORM: Understanding of object-relational mapping to facilitate data manipulation and querying in a database-agnostic manner.
  - JSON: Proficient in using JSON for data interchange between servers and web applications.
  - Event-Driven Architecture: Knowledgeable in building systems that respond dynamically to events, improving application responsiveness and scalability.
  - Inversion of Control (IoC) and Aspect-Oriented Programming (AOP): Implementing these patterns to increase modularity and separation of concerns.
  - Containerization: Experience with Docker or similar technologies for encapsulating application environments, enhancing consistency across development, testing, and production.
  - Service Discovery and Service Mesh: Familiarity with managing microservices communication patterns, ensuring services are dynamically discoverable and communicable.
  - Multi-threading: Expertise in developing applications that efficiently execute multiple operations concurrently to improve performance.
- Proficient with:
  - RDBMS and NoSQL (intermediate level): Competent in working with relational and non-relational databases, understanding their respective use cases and optimization techniques.
  - Jira (advanced level) and Git (advanced level): Advanced proficiency in project management with Jira and version control with Git, ensuring efficient workflow and code management.
  - Maven (intermediate level): Knowledgeable in using Maven for project build and dependency management in .NET environments.
  - Jenkins (intermediate level): Experienced in implementing CI/CD pipelines with Jenkins, automating the software development process for increased productivity and reliability.
- Utilizes these tools and platforms effectively in the software development process, contributing to the delivery of high-quality software solutions.
Analytical Thinker: A strategic thinker passionate about engaging in requirements analysis and solving complex issues through software design and architecture.
Team Player: A supportive teammate ready to mentor, uplift your team, and collaborate with internal teams to foster an environment of growth and innovation.
Innovation-Driven: Always on the lookout for new technologies to disrupt the norm, you're committed to improving existing software and eager to lead the charge in integrating AI and cutting-edge technologies.
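Inversion of Control, listed above, is easiest to see in a toy container that builds objects from registered factories so classes receive their collaborators instead of constructing them. A minimal constructor-injection sketch (a Python stand-in for a .NET DI container; the service and repository names are invented):

```python
class Container:
    """Toy IoC container: resolve() builds objects from registered factories."""
    def __init__(self) -> None:
        self._factories = {}

    def register(self, key, factory) -> None:
        self._factories[key] = factory

    def resolve(self, key):
        # The factory receives the container so it can resolve dependencies.
        return self._factories[key](self)

# Hypothetical services: OrderService is handed its repository, never news it up.
class InMemoryRepo:
    def save(self, item):
        return f"saved:{item}"

class OrderService:
    def __init__(self, repo):
        self.repo = repo
    def place(self, order):
        return self.repo.save(order)

container = Container()
container.register("repo", lambda c: InMemoryRepo())
container.register("orders", lambda c: OrderService(c.resolve("repo")))
service = container.resolve("orders")
```

Swapping `InMemoryRepo` for a database-backed implementation changes one registration line, which is the separation-of-concerns payoff the bullet points at.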
Posted 1 day ago
0.0 - 3.0 years
0 Lacs
Delhi, Delhi
On-site
Job Title: QA/DevOps Engineer
Experience Required: 2–3 years
Industry: Travel / Tourism / Hospitality
Location: New Delhi
Job Summary: We are seeking a motivated and detail-oriented QA/DevOps Engineer with 2–3 years of experience, preferably in the travel industry. The ideal candidate will be responsible for both software quality assurance and DevOps processes, ensuring efficient testing, deployment, and system reliability. You will work closely with development, product, and operations teams to deliver high-performing and seamless travel platform experiences to users.
Key Responsibilities:
Quality Assurance (QA):
- Design and execute functional, regression, integration, and performance test cases for web and mobile applications.
- Create and manage automated test scripts using tools like Selenium, Postman, and JMeter.
- Identify, document, and track bugs using tools such as Jira, and verify fixes in a timely manner.
- Ensure applications are user-friendly and optimized for performance across multiple devices and platforms.
- Collaborate with development and product teams to understand requirements and resolve defects efficiently.
DevOps:
- Manage and monitor CI/CD pipelines using tools like Jenkins, GitLab CI, etc.
- Automate build, test, and deployment processes to enhance delivery speed and consistency.
- Monitor system performance, conduct root cause analysis, and ensure high availability and reliability of platforms.
- Support production deployments and manage infrastructure automation using tools such as Docker, Kubernetes, or Terraform.
- Assist in cloud infrastructure management (AWS, GCP, or Azure).
Skills & Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 2–3 years of experience in QA and DevOps roles; experience in the travel tech domain is a strong plus.
- Solid understanding of QA methodologies and DevOps practices.
- Proficient in scripting languages such as Python, Shell, or Bash.
- Hands-on experience with tools like Git, Jenkins, Docker, Selenium, and AWS.
- Good knowledge of APIs and experience in testing RESTful services.
- Strong problem-solving, communication, and collaboration skills.
Job Types: Full-time, Permanent
Pay: ₹650,000.00 - ₹850,000.00 per year
Application Question(s): How many years of experience do you have in QA/DevOps? What is your current CTC? Are you an immediate joiner?
Work Location: In person
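A typical check in the RESTful-service testing mentioned above validates the shape of a JSON response before deeper assertions run. A minimal standard-library sketch (the payload fields and allowed statuses are invented, standing in for a real booking-API response):

```python
import json

def validate_booking(payload: str) -> list[str]:
    """Return human-readable problems with the payload; empty list means it passes."""
    problems = []
    data = json.loads(payload)
    for field in ("booking_id", "status", "amount"):
        if field not in data:
            problems.append(f"missing field: {field}")
    if data.get("status") not in ("confirmed", "pending", "cancelled"):
        problems.append(f"unexpected status: {data.get('status')!r}")
    amount = data.get("amount")
    if not isinstance(amount, (int, float)) or amount < 0:
        problems.append("amount must be a non-negative number")
    return problems

# A well-formed hypothetical response passes cleanly.
sample = '{"booking_id": "BK123", "status": "confirmed", "amount": 4500}'
```

In practice the payload would come from an HTTP client (Postman collections or a `requests` call) and the same checks would sit inside a test-runner assertion.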
Posted 1 day ago
3.0 years
0 Lacs
India
Remote
Platform Engineer
Skills & Experience
Experience level: 3+ years
Tech stack used: FastAPI, Tornado, Redis, Celery, MySQL, AWS, Docker, Ansible, Linux, Python, React, OpenSearch, nginx
Primary skills we consider: Python, API/web development
Secondary skills we consider: React, TypeScript, HTML5/CSS3, REST, Docker, CI/CD, AWS, ML training & deployment stacks
Location: Remote
About us: We're building FileSpin into the world's most innovative, AI-enabled Digital Asset Management platform. Our mission is to deliver blazing-fast media infrastructure and delightful developer tools for teams who care about scale and performance. We're growing fast and looking for sharp, self-driven engineers to help shape our next-generation platform. If you thrive in fast-paced environments and love solving real-world SaaS scaling challenges, let's talk.
Qualifications & Responsibilities
- Proficient in troubleshooting and infrastructure management
- Strong skills in software development and programming
- Experience with databases
- Excellent analytical and problem-solving skills
- Ability to work independently and remotely
- Bachelor's degree in Computer Science, Information Technology, or a related field preferred
- Experience in ML model training and deployments is a plus
Essential skills:
- Excellent Python programming skills
- Good experience with SQL
- Excellent experience with at least one web framework such as Tornado, Flask, or FastAPI
- Experience with video encoding using ffmpeg and image processing (GraphicsMagick, PIL)
- Good experience with Git, CI/CD, and DevOps tools
- Experience with React, TypeScript, HTML5/CSS3
"Nice to have" skills:
- Machine learning and associated tools
- Web/proxy servers (nginx/Apache/Traefik)
- SaaS stacks such as task queues, search engines, cache servers
- Prior experience in a startup or early-stage team
Please do not apply if:
- You've never built or contributed to a cloud-based SaaS application
- You aren't comfortable working independently and remotely
What You'll Get
- High-autonomy, low-meeting culture: we trust you to do your best work
- Work closely with founders and senior engineers, with no middle layers
- Continuous learning budget (courses, books, events)
- A creative, fast-paced environment where you'll own your impact
Interview Process
- Introductory chat
- Short technical screening test (code + SaaS thinking)
- Deep-dive technical interview
- Culture and compensation discussion
- Job offer
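SaaS stacks like the one above (Celery task queues, flaky network calls to storage and encoders) lean heavily on retry-with-exponential-backoff. A minimal sketch of the pattern in pure Python (the delay schedule and the `flaky` function are invented for illustration; `base_delay` is zero here so the demo runs instantly):

```python
import time

def retry(func, attempts: int = 3, base_delay: float = 0.0):
    """Call func until it succeeds or attempts run out,
    sleeping base_delay * 2**n between tries."""
    last_error = None
    for n in range(attempts):
        try:
            return func()
        except Exception as exc:  # real code would catch narrower exceptions
            last_error = exc
            time.sleep(base_delay * (2 ** n))
    raise last_error

# Hypothetical flaky dependency: fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"
```

Celery exposes the same idea declaratively (`autoretry_for`, `retry_backoff` on a task), but the control flow underneath is this loop.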
Posted 1 day ago
12.0 years
0 Lacs
India
On-site
Job Title: Java Architect
Experience Required: 12+ years
Shift Timing: 2:00 PM – 11:00 PM IST
Location:
Employment Type: Contractual
Immediate Joiners Preferred
Job Summary: We are seeking a highly experienced Java Architect with 12+ years of strong hands-on experience in designing and developing scalable Java-based solutions. The ideal candidate must have a solid background in software architecture, AWS cloud services, and database design with Postgres. Additionally, experience with Angular is essential. This is a hybrid role involving both technical leadership and hands-on coding responsibilities.
Key Responsibilities:
- Lead the architecture, design, and development of scalable and secure enterprise-grade Java applications.
- Make key architectural decisions in collaboration with stakeholders, including microservices architecture, integration patterns, and deployment strategies.
- Work closely with cross-functional teams including product owners, developers, and QA to define system requirements and design specifications.
- Build and maintain reusable code, libraries, and components for future use.
- Conduct design/code reviews and enforce best practices in coding, testing, and deployment.
- Own the end-to-end technical delivery of projects, including infrastructure setup and CI/CD pipelines.
- Guide and mentor junior developers and act as a subject matter expert in Java and system design.
- Participate in client calls, provide technical leadership, and influence architectural direction.
Mandatory Skills:
- 12+ years of experience in Java/J2EE development
- Strong experience in designing scalable architectures using Java (Spring Boot, microservices)
- Cloud expertise: hands-on with AWS services (EC2, Lambda, S3, API Gateway, etc.)
- Experience with PostgreSQL: schema design, performance tuning, and advanced querying
- Strong proficiency with front-end frameworks, primarily Angular
- Expertise in RESTful API development and integration
- Familiarity with CI/CD pipelines, Git, Jenkins, Docker, and containerization practices
- Excellent problem-solving, debugging, and analytical skills
- Strong communication skills, both verbal and written
- Experience working in Agile/Scrum environments
Preferred Skills:
- Exposure to other frontend technologies (React, TypeScript)
- Knowledge of Kubernetes, Terraform, or CloudFormation
- Experience with performance testing and monitoring tools (New Relic, Dynatrace, etc.)
- Previous experience in client-facing roles or technical leadership positions
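The PostgreSQL expectations above (schema design, performance tuning) usually start with choices like indexing the columns hot queries filter on, then verifying the planner actually uses the index. A sketch using SQLite from the Python standard library as a stand-in (the `orders` table and its columns are invented; in PostgreSQL you would use `EXPLAIN` instead of SQLite's `EXPLAIN QUERY PLAN`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL,
        total REAL NOT NULL
    )
""")
# Indexing the column the hot query filters by is the classic first tuning step.
conn.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(1, 100.0), (2, 250.0), (1, 75.5)],
)
# The query plan confirms the index is actually used for this WHERE clause.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT SUM(total) FROM orders WHERE customer_id = ?", (1,)
).fetchall()
total = conn.execute(
    "SELECT SUM(total) FROM orders WHERE customer_id = ?", (1,)
).fetchone()[0]
```

The same discipline scales up: pick indexes from real query patterns, and let the planner's output, not intuition, confirm they are used.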
Posted 1 day ago
7.0 years
0 Lacs
Thane, Maharashtra, India
Remote
Experience: 7.00+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position
(Note: This is a requirement for one of Uplers' clients - Upland Software)
What do you need for this opportunity?
Must-have skills: DevOps, PowerShell, CLI, Amazon AWS, Java, Scala, Go (Golang), Terraform
Upland Software is looking for:
Opportunity Summary: We are looking for an enthusiastic and dynamic individual to join Upland India as a DevOps Engineer on the Cloud Operations Team. The individual will manage and monitor our extensive set of cloud applications. The successful candidate will possess extensive experience with production systems, an excellent understanding of key SaaS technologies, and a high degree of initiative and responsibility. The candidate will participate in technical/architectural discussions supporting Upland's products and influence decisions concerning solutions and techniques within their discipline.
What would you do?
- Be an engaged, active member of the team, contributing to driving greater efficiency and optimization across our environments.
- Automate manual tasks to improve performance and reliability.
- Build, install, and configure servers in physical and virtual environments.
- Participate in an on-call rotation to support customer-facing application environments.
- Monitor and optimize system performance, taking proactive measures to prevent issues and reactive measures to correct them.
- Participate in the Incident, Change, Problem, and Project Management programs and document details within prescribed guidelines.
- Advise technical and business teams on tactical and strategic improvements to enhance operational capabilities.
- Create and maintain documentation of enterprise infrastructure topology and system configurations.
- Serve as an escalation point for internal support staff to resolve issues.
What are we looking for?
Experience: 7-9 years total experience in DevOps: AWS (solutioning and operations), GitHub/Bitbucket, CI/CD, Jenkins, ArgoCD, Grafana, Prometheus, etc.
Technical Skills: To be a part of this journey, you should have 7-9 years of overall industry experience managing production systems, an excellent understanding of key SaaS technologies, and a high level of initiative and responsibility. The following skills are needed for this role.
Primary Skills:
- Public cloud providers - AWS: solutioning, introducing new services into existing infrastructure, and maintaining the infrastructure in a production 24x7 SaaS solution.
- Administer complex Linux-based web hosting configuration components, including load balancers, web servers, and database servers.
- Develop and maintain CI/CD pipelines using GitHub Actions, ArgoCD, and Jenkins.
- EKS/Kubernetes, ECS, and Docker administration/deployment.
- Strong knowledge of AWS networking concepts, including Route 53, VPC configuration and management, DHCP, VLANs, HTTP/HTTPS, and IPSec/SSL VPNs.
- Strong knowledge of AWS security concepts: IAM accounts, KMS-managed encryption, CloudTrail, and CloudWatch monitoring/alerting.
- Automating existing manual workloads, such as reporting and patching/updating servers, by writing scripts, Lambda functions, etc.
- Expertise in Infrastructure as Code technologies: Terraform is a must.
- Monitoring and alerting tools such as Prometheus, Grafana, PagerDuty, etc.
- Expertise in Windows and Linux OS is a must.
Secondary Skills: It would be advantageous if the candidate also has the following secondary skills:
- Strong knowledge of scripting/coding with Go, PowerShell, Bash, or Python.
Soft Skills:
- Strong written and verbal communication skills directed to technical and non-technical team members.
- Willingness to take ownership of problems and seek solutions.
- Ability to apply creative problem solving and manage through ambiguity.
- Ability to work under remote supervision and with a minimum of direct oversight.
Qualification:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Proven experience as a DevOps Engineer with a focus on AWS.
- Experience with modernizing legacy applications and improving deployment processes.
- Excellent problem-solving skills and the ability to work under remote supervision.
- Strong written and verbal communication skills, with the ability to articulate technical information to non-technical team members.
About Upland: Upland Software (Nasdaq: UPLD) helps global businesses accelerate digital transformation with a powerful cloud software library that provides choice, flexibility, and value. Upland India is a fully owned subsidiary of Upland Software, headquartered in Bangalore. We are a remote-first company; interviews and onboarding are conducted virtually. Upland Software is an Equal Employment Opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, veteran status, or other legally protected status.
How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!
About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talent find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
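The "automate manual workloads like reporting" expectation above usually starts as a small script that rolls raw log lines up into a summary. An illustrative Python sketch (the three-field log format is invented for the example, not a real AWS log format):

```python
from collections import Counter

def status_report(log_lines):
    """Count HTTP statuses from lines shaped like 'METHOD PATH STATUS',
    silently skipping lines that don't match."""
    counts = Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) == 3 and parts[2].isdigit():
            counts[parts[2]] += 1
    return dict(counts)

# Hypothetical input: a few access-log lines plus one malformed line.
logs = [
    "GET /health 200",
    "POST /upload 500",
    "GET /health 200",
    "not a log line",
]
```

In practice the same logic would run as a scheduled Lambda reading from CloudWatch Logs, with the report pushed wherever the team already looks.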
Posted 1 day ago
6.0 - 11.0 years
8 - 15 Lacs
Hyderabad
Work from Office
The Team: This team is part of the global Application Operations and Infrastructure group that provides production support for Ratings applications. These applications are critical for the analysts who drive the business through their actions. The team is responsible for the high availability and resiliency of these applications.
The Impact: As part of a global team of engineers, you will provide production support for Tier-1 business-critical applications, troubleshoot application-related issues, and work with the infrastructure and database teams to triage major incidents. You will contribute to the delivery of innovative and highly reliable technology services, with a strong focus on developing shared integration services with automation and cloud enablement, and guide the team in designing technical solutions. You will become an integral part of a high-performing global network of engineers working from India, Denver, New York, and London to help advance our technology.
What's in it for you:
- Working with a team of highly skilled, ambitious, and results-oriented professionals.
- An ever-challenging environment to hone your existing skills in automation, performance, service-layer testing, SQL scripting, etc.
- Plenty of skill-building, knowledge-sharing, and innovation opportunities.
- Building a fulfilling career with a global financial technology company.
- The ability to lead and build a world-class production support group.
- A highly technical, hands-on role that will help enhance team skills.
- Work on Tier-1 applications that are in the critical path for the business.
- The ability to work on cutting-edge technologies such as AWS, Oracle, and Ansible.
- The ability to grow within an organization that's part of the global team.
Responsibilities: This role requires extensive skills in operating within the AWS cloud platform, along with deep expertise in database engineering, performance tuning, backup and recovery solutions (such as Cohesity), cloud database technologies, and the auditing and security of database systems.
- Hands-on experience working with AWS, encompassing key services such as IAM (Identity and Access Management), compute, storage, Elastic Load Balancing, RDS (Relational Database Service), VPC (Virtual Private Cloud), TGW (Transit Gateway), Route 53, ACM, serverless computing, containerization, account administration, CloudWatch, CloudTrail, etc. Additional experience with other cloud providers is advantageous.
- Proficiency in working with configuration management tools such as Ansible.
- Solid understanding of CI/CD pipelines, utilizing tools such as Azure DevOps and GitHub for seamless integration and deployment.
- Proficiency in scripting languages such as PowerShell, Bash, and Python.
- Demonstrated ability to learn new technologies quickly and integrate them into existing systems.
- Collaborate with cross-functional teams to ensure the stability, security, and efficiency of our database environment.
- Ability to support and resolve infrastructure-related issues across different business applications.
- As part of a global team of engineers, deliver innovative and highly reliable technology services.
- Ability to communicate well and manage multiple initiatives with multiple engineers, potentially across multiple time zones.
- Participate in on-call duty and a weekly rotating shift schedule.
- Involvement in architecture and development design reviews for new implementation and integration projects.
- Troubleshoot application-related issues and work with the infrastructure team to triage major incidents.
- Work with business users to understand needs and issues, develop root cause analyses, and work with the team on the development of solutions and enhancements.
- Manage error budgets to measure risk and balance availability against feature development.
- Drive automation to reduce manual toil.
- Measure, track, and report the SLOs.
- Create and manage systems and process documentation.
- Analyse and conduct post-incident reviews and drive the resulting actions.
What we're looking for:
Basic Qualifications:
- 6+ years of IT experience
- Bachelor's or Master's degree in Computer Science, Engineering, or a related subject
- Ability to architect highly available applications and servers in the cloud, adhering to best practices.
- Hands-on experience using automation tooling such as Shell, Python, Ansible, and Terraform
- Hands-on experience with DevOps tools such as ADO, Jenkins, Ansible Tower, and Docker.
- Hands-on experience integrating AWS services such as VPC, EC2, Route 53, and S3 to create scalable application environments.
- Experience performing root cause analyses and automating solutions to address underlying issues.
- Exposure to database technologies such as Oracle, PostgreSQL, SQL Server, and MongoDB is desirable.
- A team player capable of high performance and flexibility in a dynamic working environment.
- Skill and ability to train others on technical and procedural topics.
- Ability to support and resolve infrastructure-related issues as required.
Preferred Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related technical discipline
- Proven working experience in AWS cloud platform engineering
- Expert knowledge of observability tools such as Splunk and OpenTelemetry.
- Expert knowledge of automating the building and deployment of containerized applications
- Expertise in Infrastructure-as-Code automation
- Certification in AWS cloud technologies or DevOps preferred.
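The error-budget responsibility in this posting has simple arithmetic behind it: an availability SLO over a window fixes the minutes of downtime you may spend, and incidents draw that budget down. A sketch of the standard calculation (the 99.9%/30-day figures below are the conventional textbook example, not numbers from this posting):

```python
def error_budget_minutes(slo_percent: float, window_days: int = 30) -> float:
    """Minutes of allowed downtime in the window under the given SLO.
    e.g. a 99.9% SLO over 30 days allows ~43.2 minutes."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1 - slo_percent / 100)

def budget_remaining(slo_percent: float, downtime_minutes: float,
                     window_days: int = 30) -> float:
    """Fraction of the error budget still unspent (negative means overspent)."""
    budget = error_budget_minutes(slo_percent, window_days)
    return (budget - downtime_minutes) / budget
```

Teams typically gate risky releases on `budget_remaining`: plenty of budget left means ship, budget exhausted means prioritize reliability work.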
Posted 1 day ago
8.0 - 10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Responsibilities:
- Collaborate with the Principal Architect to design and implement AI agents and multi-agent frameworks.
- Develop and maintain robust, scalable, and maintainable microservices architectures.
- Ensure seamless integration of AI agents with core systems and databases.
- Develop APIs and SDKs for internal and external consumption.
- Work closely with data scientists to fine-tune and optimize LLMs for specific tasks and domains.
- Implement MLOps practices, including CI/CD pipelines, model versioning, and experiment tracking.
- Utilize containerization technologies such as Docker and Kubernetes for efficient deployment and scaling of applications.
- Leverage cloud platforms such as AWS, Azure, or GCP for infrastructure and services.
- Design and implement data pipelines for efficient data ingestion, transformation, and storage.
- Ensure data quality and security throughout the data lifecycle.
- Mentor junior engineers and foster a culture of innovation, collaboration, and continuous learning.
Qualifications:
- 8-10 years of experience in software engineering with a strong focus on AI/ML.
- Proficiency in frontend frameworks like React, Angular, or Vue.js.
- Strong hands-on experience with backend technologies like Node.js, Python (with frameworks like Flask, Django, or FastAPI), or Java.
- Experience with cloud platforms such as AWS, Azure, or GCP.
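A multi-agent framework like the one described reduces, at its skeleton, to an orchestrator that routes each task to the agent declaring it can handle that task kind. A toy sketch (agent names, task shapes, and the routing rule are invented; a real system would put an LLM behind each `action` and a message bus behind `dispatch`):

```python
class Agent:
    """A toy agent: declares which task kinds it handles and how."""
    def __init__(self, name, handles, action):
        self.name, self.handles, self.action = name, set(handles), action

    def run(self, task):
        return self.action(task)

class Orchestrator:
    """Routes each task to the first agent that claims its kind."""
    def __init__(self, agents):
        self.agents = agents

    def dispatch(self, task):
        for agent in self.agents:
            if task["kind"] in agent.handles:
                return agent.name, agent.run(task)
        raise LookupError(f"no agent handles {task['kind']!r}")

# Hypothetical agents standing in for LLM-backed specialists.
summarizer = Agent("summarizer", {"summarize"},
                   lambda t: t["text"][:10] + "...")
classifier = Agent("classifier", {"classify"},
                   lambda t: "positive" if "good" in t["text"] else "neutral")
orchestrator = Orchestrator([summarizer, classifier])
```

The design point is that agents stay independently testable and replaceable; only the routing table knows the whole system.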
Posted 1 day ago
0 years
0 Lacs
Bellary, Karnataka, India
On-site
About The Opportunity
Operating at the cutting edge of Aerospace & Unmanned Aerial Systems (UAS), our Mobility Solutions division engineers next-generation ground-control hardware and software that connect autonomous aircraft to operators across complex environments. From mission-planning GUIs to secure telemetry links, we tackle real-time challenges where reliability, safety, and intuitive UX converge.
Role & Responsibilities
- Co-develop ground-control software and workstation hardware for mission planning, telemetry monitoring, and command-and-control of multi-rotor and fixed-wing UAV fleets.
- Integrate GCS with avionics, nav systems, and SATCOM/RF links, collaborating closely with flight-control, payload, and networking teams to ensure seamless data flow.
- Write, debug, and unit-test code in C/C++, Python, or Java; contribute to modular architectures that scale from desktop to ruggedized field stations.
- Configure, calibrate, and troubleshoot ground stations for lab, field-test, and customer demos, documenting best-practice deployment playbooks.
- Author and execute verification plans (SIL/HIL, regression, environmental) to validate performance, safety, and airworthiness compliance under diverse conditions.
- Analyse flight-test data to uncover issues, drive root-cause analysis, and recommend design or process improvements.
Skills & Qualifications
Must-Have:
- Bachelor's degree in Computer Science, Aerospace, Electronics, Robotics, or a related discipline.
- 3-6 years of experience building or testing ground-control stations, mission-planning software, or real-time operator consoles for UAVs or similar robotics.
- Proficiency in C/C++ or Python, plus familiarity with version control and CI/CD pipelines.
- Working knowledge of telemetry protocols (MAVLink, DDS, RTPS) and networking fundamentals (UDP/TCP, QoS).
- Hands-on experience with simulation tools (e.g., Gazebo, X-Plane, MATLAB/Simulink) and basic flight-dynamics principles.
- Strong troubleshooting skills across Linux/Windows OS, embedded hardware, and RF/antenna setups.
Preferred:
- Exposure to airworthiness or safety standards (DO-178C, DO-330, DO-331).
- Experience integrating payload sensors (ISR, EO/IR, LIDAR) and autonomous mission workflows.
- Familiarity with Docker/Kubernetes for containerised GCS deployments.
- Prior participation in flight-test campaigns and post-mission data analytics.
- Knowledge of JavaFX, Qt, or React-based UIs for operator consoles.
- Certifications in drone pilot licensing or regulatory compliance (DGCA, FAA Part 107).
Skills: simulation tools, airworthiness standards, drone integration, flight testing & analysis, ground control systems, mission-planning systems
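Telemetry links like those named above (MAVLink and similar) move fixed-layout binary frames between vehicle and ground station. A simplified pack/unpack sketch with Python's `struct` module (this little-endian layout of sequence number, latitude, longitude, and altitude is invented for illustration and is not the real MAVLink wire format, which also carries headers and a checksum):

```python
import struct

# uint16 sequence, float64 lat, float64 lon, float32 altitude (metres), little-endian.
FRAME = "<Hddf"

def pack_frame(seq: int, lat: float, lon: float, alt: float) -> bytes:
    """Serialize one telemetry sample into its fixed binary layout."""
    return struct.pack(FRAME, seq, lat, lon, alt)

def unpack_frame(data: bytes) -> dict:
    """Decode a frame produced by pack_frame back into named fields."""
    seq, lat, lon, alt = struct.unpack(FRAME, data)
    return {"seq": seq, "lat": lat, "lon": lon, "alt": alt}

# Hypothetical sample: one position fix, as a GCS might receive over UDP.
frame = pack_frame(7, 12.9716, 77.5946, 120.5)
```

Sequence numbers like `seq` are what let the GCS detect dropped frames on a lossy RF link; real protocols add CRCs and message IDs on top of the same idea.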
Posted 1 day ago
0.0 - 3.0 years
0 Lacs
Mohali, Punjab
On-site
The Role
As a Software Engineer, you will play a pivotal role in designing, developing, and optimizing BotPenguin's AI chatbot & agents platform. You'll collaborate with product managers, senior engineers, and customer success teams to develop robust backend APIs, integrate with frontend applications, and enhance system performance. This role offers exciting opportunities to build impactful AI-driven solutions and shape the future of conversational automation.

What you need for this role
- Education: Bachelor's degree in Computer Science, IT, or a related field.
- Experience: 1-3 years in software development roles.
- Technical skills:
  - Strong understanding of MEAN/MERN stack technologies.
  - Experience designing and deploying end-to-end solutions.
  - Hands-on experience in backend API development and UI integration.
  - Familiarity with cloud platforms like AWS and containerization (Docker, Kubernetes).
  - Understanding of AI/ML concepts in development.
  - Knowledge of version control tools like GitLab/GitHub and project management tools like Notion.
- Soft skills: a drive to build something big, a strong problem-solving mindset, a proactive approach, and a willingness to learn.

What you will be doing
- Collaborate with the Product Team to plan and implement new features.
- Work alongside Technical Leads & Senior Developers to define solutions and low-level design.
- Develop backend APIs and integrate them with frontend applications.
- Conduct automated unit and integration testing to ensure high code quality.
- Document technical processes, APIs, and troubleshooting guides.
- Monitor system performance and suggest improvements to optimize efficiency.
- Assist the Customer Success Team in resolving technical challenges and enhancing user experience.

Top reasons to work with us
- Be part of a cutting-edge AI startup driving innovation in chatbot automation.
- Work with a passionate and talented team that values knowledge-sharing and problem-solving.
- Growth-oriented environment with ample learning opportunities.
- Exposure to top-tier global clients and projects with real-world impact.
- Flexible work hours and an emphasis on work-life balance.
- A culture that fosters creativity, ownership, and collaboration.

Job Type: Full-time
Pay: ₹200,000.00 - ₹600,000.00 per year
Benefits: Health insurance; Provident Fund
Schedule: Day shift
Ability to commute/relocate: Mohali, Punjab: Reliably commute or planning to relocate before starting work (Required)
Work Location: In person
Posted 1 day ago
5.0 years
0 Lacs
Mohali, Punjab
On-site
The Role
As a Product Technical Lead, you will act as the bridge between the product vision and technical execution. You will lead product architecture discussions, define technical roadmaps, and guide engineering teams to deliver high-performance, scalable solutions for our AI chatbot platform, BotPenguin. This is a high-impact role that demands strategic thinking, hands-on development expertise, and the leadership skills to align cross-functional teams toward product success. You will work closely with product managers, senior engineers, AI experts, and business stakeholders. You will also conduct code reviews, mentor junior developers, and ensure high software quality standards. This role offers exciting opportunities to build impactful AI-driven solutions and shape the future of conversational automation.

What you need for this role
- Education: Bachelor's degree in Computer Science, IT, or a related field.
- Experience: 5+ years in software engineering, with at least 2 years in a technical leadership role.
- Technical skills:
  - Proven experience in scalable system design and product architecture.
  - Strong understanding of MEAN/MERN stack technologies.
  - Experience in software architecture planning and low-level design.
  - Ability to define and implement product-level architectural patterns.
  - Ability to create and implement scalable, high-performance solutions.
  - Hands-on experience in backend API development and UI integration.
  - Familiarity with cloud platforms like AWS and containerisation (Docker, Kubernetes).
  - Understanding of AI/ML concepts in development.
  - Knowledge of version control tools like GitLab/GitHub and project management tools like Notion.
- Soft skills: a strong analytical mindset, leadership skills, and a passion for mentoring junior developers.

What you will be doing
- Lead technical architecture design and roadmap planning for BotPenguin's core platform.
- Work alongside the Product Manager to align product vision with technical execution.
- Collaborate with engineering teams to translate product requirements into scalable solutions.
- Design and develop core modules of the platform, especially those related to automation, chat assignment, analytics, and multi-agent support.
- Implement and enforce technical best practices, coding guidelines, and documentation standards.
- Evaluate and integrate LLM models, AI agents, and automation tools as product needs evolve.
- Ensure performance, security, and scalability of applications across global deployments.
- Support the Customer Success and QA teams with technical issue resolution and root-cause analysis (RCA).
- Drive technical discussions, conduct code reviews, and ensure timely feature delivery.
- Foster a culture of continuous improvement, collaboration, and innovation within the tech team.
- Collaborate with the Product Team to plan and implement technical solutions for new features.
- Work closely with Technical Leads & Senior Developers to define software architecture and create low-level designs.
- Conduct code reviews to ensure adherence to best practices and coding standards.
- Develop backend APIs and integrate them with frontend applications.
- Conduct automated unit and integration testing to ensure high code quality.
- Document technical processes, APIs, and troubleshooting guides.
- Monitor system performance and suggest improvements to optimize efficiency.
- Assist the Customer Success Team in resolving technical challenges and enhancing user experience.
- Mentor junior engineers, providing guidance on best practices and career growth.
- Take on any other task relevant to the product as needed.

Top reasons to work with us
- Lead the architecture and evolution of a fast-growing AI product used globally.
- Be part of a cutting-edge AI startup driving innovation in chatbot automation.
- Work with a passionate and talented team that values knowledge-sharing and problem-solving.
- Growth-oriented environment with ample learning opportunities.
- Exposure to top-tier global clients and projects with real-world impact.
- Flexible work hours and an emphasis on work-life balance.
- A culture that fosters creativity, ownership, and collaboration.

Job Type: Full-time
Pay: ₹1,800,000.00 - ₹2,000,000.00 per year
Benefits: Provident Fund
Schedule: Day shift
Ability to commute/relocate: Mohali, Punjab: Reliably commute or planning to relocate before starting work (Preferred)
Work Location: In person
Posted 1 day ago
4.0 - 9.0 years
25 - 30 Lacs
Noida, Pune
Hybrid
Greetings from Peoplefy Infosolutions!
We are hiring for one of our reputed MNC clients based in Pune and Noida. We are looking for candidates with 4+ years of experience in the skills below.

Primary skills:
- Jenkins
- Terraform
- Kubernetes
- Docker
- Python
- Bash

Interested candidates for the above position, kindly share your CV at varsha.si@peoplefy.com with the details below:
- Experience:
- CTC:
- Expected CTC:
- Notice Period:
- Location:
Posted 1 day ago
8.0 - 12.0 years
12 - 22 Lacs
Hyderabad
Work from Office
Cloud Infra and DevOps Lead - J49135
- Deep understanding of cloud platforms (AWS, Azure) and cloud-native services.
- Expertise in CI/CD tools (Jenkins, GitLab CI, Azure DevOps, etc.).
- Hands-on with Infrastructure as Code tools like Terraform; Bicep, CloudFormation, or ARM templates would be an added advantage.
- Knowledge of Kubernetes, Docker, and container orchestration.
- Strong understanding of networking, security, monitoring, and logging tools.
- Familiarity with automation tools like Ansible, Chef, or Puppet.

Qualification: BE-Comp/IT, BE-Other, BTech-Comp/IT, BTech-Other, MCA
Posted 1 day ago
6.0 years
60 - 65 Lacs
Greater Bhopal Area
Remote
Experience: 6.00+ years
Salary: INR 6,000,000 - 6,500,000 per year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by Crop.Photo)
(Note: This is a requirement for one of Uplers' clients - Crop.Photo)

What do you need for this opportunity?
Must-have skills: MAM, app integration

Crop.Photo is looking for: Technical Lead for Evolphin AI-Driven MAM
At Evolphin, we build powerful media asset management solutions used by some of the world's largest broadcasters, creative agencies, and global brands. Our flagship platform, Zoom, helps teams manage high-volume media workflows, from ingest to archive, with precision, performance, and AI-powered search. We're now entering a major modernization phase, and we're looking for an exceptional Technical Lead to own and drive the next-generation database layer powering Evolphin Zoom. This is a rare opportunity to take a critical backend system that serves high-throughput media operations and evolve it to meet the scale, speed, and intelligence today's content teams demand.

What you'll own
- Leading the re-architecture of Zoom's database foundation with a focus on scalability, query performance, and vector-based search support.
- Replacing or refactoring our current in-house object store and metadata database with a modern, high-performance elastic solution.
- Collaborating closely with our core platform engineers and AI/search teams to ensure seamless integration and zero disruption to existing media workflows.
- Designing an extensible system that supports object-style relationships across millions of assets, including LLM-generated digital asset summaries, time-coded video metadata, AI-generated tags, and semantic vectors.
- Driving end-to-end implementation: schema design, migration tooling, performance benchmarking, and production rollout, all on aggressive timelines.

Skills & Experience We Expect
We're looking for candidates with 7-10 years of hands-on engineering experience, including 3+ years in a technical leadership role. Your experience should span the following core areas:

System Design & Architecture (3-4 yrs)
- Strong hands-on experience with the Java/JVM stack (GC tuning) and Python in production environments.
- Led system-level design for scalable, modular AWS microservices architectures.
- Designed high-throughput, low-latency media pipelines capable of scaling to billions of media records.
- Familiar with multitenant SaaS patterns, service decomposition, and elastic scale-out/in models.
- Deep understanding of infrastructure observability, failure handling, and graceful degradation.

Database & Metadata Layer Design (3-5 yrs)
- Experience redesigning or implementing object-style metadata stores used in MAM/DAM systems.
- Strong grasp of schema-less models for asset relationships, time-coded metadata, and versioned updates.
- Practical experience with DynamoDB, Aurora, PostgreSQL, or similar high-scale databases.
- Comfortable evaluating trade-offs between memory, query latency, and write throughput.

Semantic Search & Vectors (1-3 yrs)
- Implemented vector search using systems like Weaviate, Pinecone, Qdrant, or Faiss.
- Able to design hybrid (structured + semantic) search pipelines for similarity and natural-language use cases.
- Experience tuning vector indexers for performance, memory footprint, and recall.
- Familiar with the basics of embedding-generation pipelines and how they are used for semantic search and similarity-based retrieval.
- Worked with MLOps teams to deploy ML inference services (e.g., FastAPI/Docker + GPU-based EC2 or SageMaker endpoints).
- Understands the limitations of recognition models (e.g., OCR, face/object detection, logo recognition), even if not directly building them.

Media Asset Workflow (2-4 yrs)
- Deep familiarity with broadcast and OTT formats: MXF, IMF, DNxHD, ProRes, H.264, HEVC.
- Understanding of proxy workflows in video post-production.
- Experience with the digital asset lifecycle: ingest, AI metadata enrichment, media transformation, S3 cloud archiving.
- Hands-on experience managing time-coded metadata (e.g., subtitles, AI tags, shot changes) in media archives.

Cloud-Native Architecture (AWS) (3-5 yrs)
- Strong hands-on experience with ECS, Fargate, Lambda, S3, DynamoDB, Aurora, SQS, EventBridge.
- Experience building serverless or service-based compute models for elastic scaling.
- Familiarity with managing multi-region deployments, failover, and IAM configuration.
- Built cloud-native CI/CD deployment pipelines with event-driven microservices and queue-based workflows.

Frontend Collaboration & React App Integration (2-3 yrs)
- Worked closely with React-based frontend teams, especially on desktop-style web applications.
- Familiar with component-based design systems, REST/GraphQL API integration, and optimizing media-heavy UI workflows.
- Able to guide frontend teams on data modeling, caching, and efficient rendering of large asset libraries.
- Experience with Electron for desktop apps.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
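The hybrid (structured + semantic) search requirement above can be illustrated with a minimal pure-Python cosine-similarity ranking, a toy stand-in for engines like Weaviate, Pinecone, Qdrant, or Faiss. `cosine`, `top_k`, and the two-dimensional embeddings are hypothetical; production systems use approximate-nearest-neighbour indexes over hundreds of dimensions rather than this exhaustive scan:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query, index, k=2):
    """Rank (asset_id, embedding) pairs by similarity to the query; return ids."""
    ranked = sorted(index, key=lambda item: cosine(query, item[1]), reverse=True)
    return [asset_id for asset_id, _ in ranked[:k]]
```

A hybrid pipeline would typically filter the candidate set on structured metadata first (asset type, date range, AI tags) and only then apply the semantic ranking, which is the usual way the two signals are combined.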
Posted 1 day ago
Posted 1 day ago
0 years
0 Lacs
Mysore, Karnataka, India
On-site
About The Opportunity
Operating at the cutting edge of Aerospace & Unmanned Aerial Systems (UAS), our Mobility Solutions division engineers next-generation ground-control hardware and software that connect autonomous aircraft to operators across complex environments. From mission-planning GUIs to secure telemetry links, we tackle real-time challenges where reliability, safety, and intuitive UX converge.

Role & Responsibilities
- Co-develop ground-control software and workstation hardware for mission planning, telemetry monitoring, and command-and-control of multi-rotor and fixed-wing UAV fleets.
- Integrate the GCS with avionics, navigation systems, and SATCOM/RF links, collaborating closely with flight-control, payload, and networking teams to ensure seamless data flow.
- Write, debug, and unit-test code in C/C++, Python, or Java; contribute to modular architectures that scale from desktop to ruggedized field stations.
- Configure, calibrate, and troubleshoot ground stations for lab, field-test, and customer demos, documenting best-practice deployment playbooks.
- Author and execute verification plans (SIL/HIL, regression, environmental) to validate performance, safety, and airworthiness compliance under diverse conditions.
- Analyse flight-test data to uncover issues, drive root-cause analysis, and recommend design or process improvements.

Skills & Qualifications
Must-Have
- Bachelor's degree in Computer Science, Aerospace, Electronics, Robotics, or a related discipline.
- 3-6 years' experience building or testing ground-control stations, mission-planning software, or real-time operator consoles for UAVs or similar robotics.
- Proficiency in C/C++ or Python, plus familiarity with version control and CI/CD pipelines.
- Working knowledge of telemetry protocols (MAVLink, DDS, RTPS) and networking fundamentals (UDP/TCP, QoS).
- Hands-on experience with simulation tools (e.g., Gazebo, X-Plane, MATLAB/Simulink) and basic flight-dynamics principles.
- Strong troubleshooting skills across Linux/Windows OS, embedded hardware, and RF/antenna setups.

Preferred
- Exposure to airworthiness or safety standards (DO-178C, DO-330, DO-331).
- Experience integrating payload sensors (ISR, EO/IR, LIDAR) and autonomous mission workflows.
- Familiarity with Docker/Kubernetes for containerised GCS deployments.
- Prior participation in flight-test campaigns and post-mission data analytics.
- Knowledge of JavaFX, Qt, or React-based UIs for operator consoles.
- Certifications in drone pilot licensing or regulatory compliance (DGCA, FAA Part 107).

Skills: Simulation tools, Airworthiness standards, Drone integration, Flight testing & analysis, Ground control system, Mission planning systems
Posted 1 day ago
Docker technology has gained immense popularity in the IT industry, and job opportunities for professionals skilled in Docker are on the rise in India. Companies are increasingly adopting containerization to streamline their development and deployment processes, creating a high demand for Docker experts in the job market.
Major Indian tech hubs are known for their vibrant tech scenes and host a large number of companies actively seeking Docker professionals.
The salary range for Docker professionals in India varies based on experience levels. Entry-level positions may start at around ₹4-6 lakhs per annum, while experienced Docker engineers can earn upwards of ₹15-20 lakhs per annum.
In the Docker job market, a typical career path may involve starting as a Junior Developer, progressing to a Senior Developer, and eventually moving into roles like Tech Lead or DevOps Engineer as one gains more experience and expertise in Docker technology.
In addition to Docker expertise, professionals in this field are often expected to have knowledge of related technologies such as Kubernetes, CI/CD tools, Linux administration, scripting languages like Bash or Python, and cloud platforms like AWS or Azure.
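To make the expected skill set concrete, here is a small hedged illustration: a minimal Dockerfile for a Python service of the kind interviewers commonly ask candidates to write or review. The file names (`Dockerfile.example`, `app.py`, `requirements.txt`) are placeholders for this sketch, not part of any specific job requirement.

```shell
# Write a minimal, layer-cache-friendly Dockerfile for a hypothetical Python app.
# Copying requirements.txt before the app code means dependency layers are
# reused when only application code changes.
cat > Dockerfile.example <<'EOF'
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]
EOF

# To build and run it (requires a local Docker daemon):
#   docker build -t demo-app -f Dockerfile.example .
#   docker run --rm demo-app
grep -c '^FROM' Dockerfile.example   # a valid single-stage image has exactly one FROM
```

Being able to explain why the `COPY requirements.txt` step precedes `COPY . .` (layer caching) is exactly the kind of detail that distinguishes candidates in Docker-focused interviews.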
As you explore job opportunities in the Docker ecosystem in India, remember to showcase your skills and knowledge confidently during interviews. By preparing thoroughly and staying updated on the latest trends in Docker technology, you can position yourself as a desirable candidate for top companies in the industry. Good luck with your job search!