8.0 years
0 Lacs
Thane, Maharashtra, India
Remote
Experience: 8.00+ years
Salary: USD 3,555/month (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time contract for 4 months (40 hrs/week, 160 hrs/month)
(Note: This is a requirement for one of Uplers' clients, a global leader in data integrity.)

What do you need for this opportunity?
Must-have skills: Spring Boot, Java, OOP, Redis

Job Title: Lead Software Engineer
Department: DIS - Foundation
Reports To:
Location: Remote

Our company is a global leader in data integrity, providing accuracy and consistency in data for 12,000 customers in more than 100 countries, including 90 percent of the Fortune 100. Our data integration, data quality, location intelligence, and data enrichment products power better business decisions to create better outcomes. We seek talented individuals with the experience and motivation to join our innovative team.

Purpose of the Position
As a Lead Software Engineer, you will be part of the team that designs and develops cloud applications in the data integrity domain. You will be deeply involved in designing, developing, and unit testing applications in our next-generation Data Integrity Suite platform, which is based on Kubernetes. You will work closely with software engineers, data scientists, and product managers to develop and deploy data-driven solutions that deliver business value. You will contribute to best practices, standards, and the technical roadmap.

What you will do:
- Lead and contribute to end-to-end product development, drawing on 8+ years of experience designing and building scalable, modern cloud-based applications.
- Take full technical ownership of product features, from design to deployment, ensuring high-quality deliverables.
- Own unit-level design, implementation, unit and integration testing, and overall adherence to SDLC best practices.
- Apply experience with microservices architecture and containerization (Docker/Kubernetes).
- Drive and participate in technical design discussions and architecture reviews, ensuring robust, scalable, and maintainable solutions.
- Collaborate effectively with cross-functional teams, including product managers, architects, DevOps, QA, and other engineering teams.
- Participate in and enforce peer code reviews, ensuring best practices and continuous improvement in code quality and maintainability.
- Continuously evaluate and adopt emerging technologies and frameworks to enhance system architecture and team productivity.
- Embrace an Agile development environment: participate in sprints and adapt to changes as needed.
- Bring hands-on experience with technologies such as MongoDB, Kafka, and other modern distributed-system components.
- Demonstrate strong communication skills and the ability to work in a global team environment.
- Bring familiarity with monitoring and observability tools, or with performance tuning.

What we are looking for:

Education:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.

Experience:
- 10+ years of experience developing enterprise-grade software.
- Demonstrated ability to technically lead product features through the full SDLC: design, development, testing, and deployment.
- Experience delivering multi-tenant SaaS solutions and working in Agile development environments.
- Up to 3 years of hands-on experience with cloud stack solutions (AWS, Azure, or GCP preferred).

Technical Skills:
- Strong object-oriented programming (OOP) fundamentals with in-depth knowledge of Java and Spring Boot.
- Solid understanding of design patterns and architectural patterns, with a proven ability to apply them effectively.
- Experience with Kafka or another messaging system (RabbitMQ, etc.); Kafka preferred.
- Experience with RESTful APIs and building scalable, modern web applications.
- Proficiency in databases: SQL, MySQL, MongoDB. Redis is a plus.
- Experience with CI/CD tools and processes (e.g., Jenkins, Git, Artifactory, JIRA).
- Familiarity with Git, TDD (test-driven development), and Linux shell commands.

Cloud & DevOps:
- Exposure to cloud-native technologies such as Docker, Kubernetes, and microservices architecture.
- Hands-on experience with, or an understanding of, AWS, Azure, or GCP cloud platforms is an added advantage.

Soft Skills:
- Strong problem-solving and debugging skills.
- Excellent interpersonal and communication skills.
- Ability to collaborate with diverse, distributed, cross-functional teams.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of being shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help our talent find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for those as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 3 days ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Project Role: Technology Architect
Project Role Description: Design and deliver technology architecture for a platform, product, or engagement. Define solutions to meet performance, capability, and scalability needs.
Must-have skills: Google Cloud Platform Architecture
Good-to-have skills: NA
Minimum Experience: 5 years
Educational Qualification: 15 years of full-time education

Summary:
As a Technology Architect, you will design and deliver technology architecture for a platform, product, or engagement. Your typical day will involve collaborating with various teams to define solutions that meet performance, capability, and scalability needs. You will engage in discussions to ensure that the architecture aligns with business objectives and technical requirements, while also addressing any challenges that arise during development. Your role will require you to stay current with industry trends and best practices so that the solutions you propose are innovative and effective.

Roles & Responsibilities:
- Act as a subject matter expert (SME).
- Collaborate with and manage the team to perform.
- Take responsibility for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Evaluate and recommend new technologies to improve architectural efficiency.

Professional & Technical Skills:
- Must-have: proficiency in Google Cloud Platform architecture.
- Strong understanding of cloud computing principles and best practices.
- Experience designing scalable and resilient cloud architectures.
- Familiarity with containerization technologies such as Docker and Kubernetes.
- Knowledge of security best practices in cloud environments.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Google Cloud Platform architecture.
- This position is based at our Hyderabad office.
- A 15-year full-time education is required.
Posted 3 days ago
12.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Sparkrock is helping Ionic Partners to hire for this opening.

Are you an expert cloud professional who thrives on solving complex technical-debt challenges and building order from chaos? Are you ready to build the technical foundation for a next-generation platform that will serve millions of users globally? Do you want to work for a best-in-class, 100% remote organization with the brightest talent from around the world? If so, then keep reading.

At Ionic Partners, we invest in B2B software and software development companies that build products that create real customer value. We focus on driving growth and operational improvements to help accelerate companies over the second chasm. We are looking for a SaaS Chief Architect who will lead the architectural transformation of multiple SaaS products while building the technical foundation for a next-generation platform that will serve millions of users globally. Unlike typical architect roles focused on greenfield projects, you'll be the transformation champion who modernizes legacy acquisitions into cutting-edge, scalable SaaS solutions while maintaining business continuity. If you are highly motivated and love thriving in ambiguity, taking ownership, and deep-diving into hard problems, this is the place for you!

Responsibilities
- Design and document comprehensive solution architectures for multiple SaaS products
- Lead infrastructure transformation initiatives from legacy to modern cloud-native architectures
- Create and maintain detailed technical documentation, including network diagrams, data-flow diagrams, and system architecture diagrams
- Build and optimize CI/CD pipelines to enable rapid, reliable deployments across multiple products
- Conduct thorough infrastructure audits and create actionable remediation plans
- Write detailed product specifications, bridging business requirements and technical implementation
- Implement cost optimization strategies across multi-cloud environments (AWS, Azure, GCP)
- Champion DevOps transformation and operational excellence initiatives
- Mentor engineering teams on architectural best practices and design patterns
- Collaborate with acquisition teams to assess and integrate newly acquired products
- Establish architectural governance and technical standards across the organization

Requirements
- Bachelor's degree in Computer Science, Engineering, or a related field
- 12+ years in software architecture, with at least 5 years in senior architectural roles
- Expert-level knowledge of cloud platforms (AWS, Azure, GCP) and multi-cloud strategies
- Proven experience modernizing legacy applications and infrastructure
- Deep understanding of microservices, containerization (Docker, Kubernetes), and serverless architectures
- Expertise in CI/CD tools (Jenkins, GitLab CI, GitHub Actions, ArgoCD)
- Strong knowledge of Infrastructure as Code (Terraform, CloudFormation, Pulumi)
- Experience with cost optimization tools and FinOps practices
- Proficiency in multiple programming languages (Python, Go, Java, JavaScript, C#)
- Expertise in system design and distributed-systems architecture
- Strong understanding of security best practices and compliance frameworks
- Experience with observability and monitoring solutions
- Exceptional communication skills to translate complex technical concepts to stakeholders
- Strong leadership and the ability to influence without authority
- Ability to work across time zones for global team collaboration
- Participation in an on-call rotation for critical infrastructure issues

Nice to have
- Certifications: AWS/Azure/GCP Solutions Architect, TOGAF
- Experience with M&A technical due diligence
- Background in specific compliance requirements (SOC 2, HIPAA, GDPR)
- Experience with legacy technology stacks requiring modernization
- Published articles or speaking experience on architectural topics

Benefits
- 100% remote and global
- Flexible work hours and an asynchronous culture
- Tailored coaching and career development support
- Access to expert webinars and learning sessions
- An engaged virtual community: coffee chats, book clubs, trivia nights, and more!

We believe in the power of people, platforms, and processes to transform the world. We envision a global economy that uses business as a force for good and creates opportunity for all. Our mission is to create a better, more equitable world through the power of software. Our companies build mission-critical B2B software that customers across the globe rely on to run their businesses. We embrace a growth mentality, continuously learning, improving, and innovating to drive better and better outcomes. We take customer success seriously and believe that each and every one of us has an important role to play in making sure our customers are genuinely excited to partner with us.

We are a fully remote company and have been building our culture this way from day one. This means that we get to work with the best people from around the world while living and growing in the communities we love. It also means that our platform and processes are specifically designed to allow people to be optimally productive on a truly flexible schedule.

We are an equal opportunity employer and value diversity, equity, and inclusion. We believe that the best ideas come from diverse teams and that diverse teams are built intentionally. We want the best people from all around the world and are committed to creating an environment where people are empowered to give voice to their great ideas.
Posted 3 days ago
7.0 - 10.0 years
0 Lacs
India
Remote
Role Summary:
As a Lead/Senior Full Stack Developer, you will play a key role in designing, developing, and delivering end-to-end software solutions. You will also be responsible for guiding a team of developers, ensuring code quality, mentoring junior team members, and actively participating in architecture and design discussions.

Key Responsibilities:
● Lead and mentor a team of full stack developers, providing technical guidance and code reviews
● Collaborate with cross-functional teams, including product managers and QA, to define, design, and ship new features
● Design client-side and server-side architecture
● Build intuitive, responsive, and visually appealing front-end applications
● Develop and maintain secure, high-performance backend systems using .NET technologies
● Write clean, scalable code and develop APIs for integration
● Manage database design and optimize queries for performance
● Ensure software is tested, responsive, efficient, and secure
● Set up and maintain CI/CD pipelines and version-control processes
● Document development processes, architecture, and standard components
● Stay current with emerging technologies and industry trends

Role Requirements:
● 7 to 10 years of professional experience in full stack development
● At least 2 years of experience leading or mentoring development teams
● Expertise in .NET, ASP.NET MVC, C#, and React (mandatory)
● Strong command of front-end technologies: HTML5, CSS3, JavaScript, jQuery
● Proficiency in writing and consuming APIs (RESTful services)
● Experience with SQL Server, and optionally MongoDB or Cosmos DB
● Familiarity with cloud platforms such as Azure or AWS
● Experience with Git, CI/CD tools, and Agile development methodologies
● Strong understanding of software design patterns, best practices, and architecture principles
● Excellent problem-solving, debugging, and troubleshooting skills
● Strong written and verbal communication skills

Preferred Skills:
● Experience working in Agile/Scrum environments
● Exposure to microservices architecture and containerization (e.g., Docker)
● Familiarity with performance-monitoring tools and secure coding practices

What We Offer:
● Opportunity to lead impactful projects with complete development-lifecycle ownership
● Dynamic and collaborative team culture
● Remote flexibility and a strong focus on work-life balance
● Competitive salary and career growth opportunities

If you're a seasoned full stack developer with a passion for technology and proven leadership capabilities, we'd love to hear from you. Apply now and be a part of Teknobloom's growth story.
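As one concrete instance of the design patterns this posting asks for, the repository pattern keeps business logic decoupled from the database behind it. The sketch below is illustrative only: Python is used for brevity (the role itself centres on .NET/C#, where the shape is identical), and every class and method name here is hypothetical rather than taken from the posting.

```python
# Repository-pattern sketch: business logic depends on an interface,
# not on a concrete database. All names below are illustrative.

from abc import ABC, abstractmethod


class OrderRepository(ABC):
    """The domain layer depends only on this interface."""

    @abstractmethod
    def add(self, order_id, total):
        ...

    @abstractmethod
    def get(self, order_id):
        ...


class InMemoryOrderRepository(OrderRepository):
    """A test double; a SQL Server-backed class would satisfy the same interface."""

    def __init__(self):
        self._rows = {}

    def add(self, order_id, total):
        self._rows[order_id] = total

    def get(self, order_id):
        return self._rows.get(order_id)


def apply_discount(repo, order_id, pct):
    """Business logic written against the interface only."""
    total = repo.get(order_id)
    if total is None:
        raise KeyError(order_id)
    discounted = total * (1 - pct)
    repo.add(order_id, discounted)
    return discounted


repo = InMemoryOrderRepository()
repo.add(1, 100.0)
print(apply_discount(repo, 1, 0.2))  # prints 80.0
```

The payoff is testability: `apply_discount` can be exercised with the in-memory repository while production wires in a database-backed one, without either side changing.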
Posted 3 days ago
5.0 years
0 Lacs
India
Remote
Job Description: HPC Architect – Stealth AI Startup Location: Remote Experience: 5+ Years in High-Performance Computing (HPC) Type: Full-Time About Us: Join our innovative stealth AI startup, driving breakthroughs in Artificial Intelligence and High-Performance Computing (HPC). We are a well-funded company with an exceptional leadership team, dedicated to solving complex problems and pushing the boundaries of AI. We are looking for highly skilled HPC Architects passionate about optimizing scientific applications, designing scalable HPC systems, and developing AI-powered solutions. Responsibilities: Optimize and port scientific and AI/ML applications to harness maximum performance on HPC systems, including clusters, grids, and cloud platforms. Architect and evaluate diverse hardware architectures for performance optimization (e.g., GPUs, CPUs, and accelerators like Nvidia, AMD, Graphcore, and Cerebras). Spearhead benchmarking activities for evaluating HPC and AI applications using tools like MLPerf, LINPACK, and OSU benchmarks. Collaborate with research and engineering teams to understand application requirements and develop optimization strategies for parallel computing using CUDA, MPI, OpenMP, or OpenACC. Implement and manage HPC workloads using SLURM workload management systems. Lead the integration of HPC infrastructures with AI/ML pipelines for domains like computer vision, NLP, and LLMs using frameworks like TensorFlow, PyTorch, and OpenCV. Develop custom solutions for containerized deployments using tools like Docker, Kubernetes, Enroot, or Charliecloud. Mentor team members and provide technical guidance on HPC and AI solutions. Qualifications: Educational Background: Bachelor’s or Master’s degree in Computer Science, Electronics, Telecommunications, or a related field. Programming Expertise: Proficiency in C/C++, Python, and shell scripting. Parallel Computing Skills: Expertise in CUDA, MPI, OpenMP, OpenACC, and optimization techniques. 
HPC Applications Knowledge: Experience with applications like GROMACS, LAMMPS, OpenFOAM, Quantum Espresso, or WRF. Benchmarking and Profiling Tools: Hands-on experience with HPL, HPCC, Intel VTune, Nvidia Nsight, or GProf. HPC Architecture Experience: Familiarity with CPUs (x86, ARM), GPUs (Nvidia, AMD), and accelerators. AI/ML Frameworks: Practical experience with TensorFlow, PyTorch, Rapids, and Deepspeed for tasks like CNN, RNN, Transformers, NLP, and LLMs. DevOps Skills: Experience with containerization, cluster setup, and workload orchestration tools. Soft Skills: Excellent problem-solving, collaboration, and mentoring abilities. Preferred Skills: Relevant Experience in HPC-AI research initiatives in Technology Companies. In-depth understanding of HPC infrastructure setup, including job scheduling systems like SLURM. Strong exposure to MLOps practices and deployment tools like HuggingFace, Gradio, or Streamlit. What We Offer: Competitive salary and equity options. Opportunity to work with cutting-edge AI and HPC technologies. Collaborative, inclusive, and dynamic work culture. Work on impactful projects shaping the future of AI and supercomputing.
Posted 3 days ago
5.0 years
0 Lacs
Thiruvananthapuram, Kerala, India
On-site
Required Qualifications & Skills: 5+ years in DevOps, SRE, or Infrastructure Engineering. Strong expertise in Azure Cloud & Infrastructure-as-Code (Terraform, CloudFormation). Proficient in Docker & Kubernetes. Hands-on with CI/CD tools & scripting (Bash, Python, or Go). Strong knowledge of Linux, networking, and security best practices. Experience with monitoring & logging tools (ELK, Prometheus, Grafana). Familiarity with GitOps, Helm charts & automation. Key Responsibilities: Design & manage CI/CD pipelines (Jenkins, GitLab CI/CD, GitHub Actions). Automate infrastructure provisioning (Terraform, Ansible, Pulumi). Monitor & optimize cloud environments. Implement containerization & orchestration (Docker, Kubernetes - EKS/GKE/AKS). Maintain logging, monitoring & alerting (ELK, Prometheus, Grafana, Datadog). Ensure system security, availability & performance tuning. Manage secrets & credentials (Vault, Secrets Manager). Troubleshoot infrastructure & deployment issues. Implement blue-green & canary deployments. Collaborate with developers to enhance system reliability & productivity. Preferred Skills: Certification: Azure DevOps Engineer. Experience with multi-cloud, microservices, event-driven systems. Exposure to AI/ML pipelines & data engineering workflows.
Posted 3 days ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: Senior Python React.js Full Stack Developer - Pune About Us Capco, a Wipro company, is a global technology and management consulting firm. Named Consultancy of the Year at the British Bank Awards and ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With our presence in 32 cities across the globe, we support 100+ clients across the banking, financial services, and energy sectors. We are recognized for our deep transformation execution and delivery. WHY JOIN CAPCO? You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers and other key players in the industry: projects that will transform the financial services industry. MAKE AN IMPACT Innovative thinking, delivery excellence and thought leadership to help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services. #BEYOURSELFATWORK Capco has a tolerant, open culture that values diversity, inclusivity, and creativity. CAREER ADVANCEMENT With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands. DIVERSITY & INCLUSION We believe that diversity of people and perspective gives us a competitive advantage. Key Skills: Python, Django/Flask/FastAPI framework, React.js, AWS, CI/CD Exp: 10+ yrs Location – Hinjewadi, Pune Shift timings: 12:30 PM - 9:30 PM 3 days WFO (Tues, Wed, Thurs) Technical Requirement: Lead Full-Stack Developer (Python + React) Job Summary We are seeking a hands-on and highly collaborative Lead Full-Stack Developer to guide and grow our development team. In this leadership role, you will take ownership of designing, building, and delivering scalable, high-performing web applications, with an emphasis on backend development and cross-functional collaboration.
You’ll lead a team of developers, mentor junior engineers, and serve as a bridge between technical implementation and business goals. The ideal candidate combines deep technical expertise in Python and React.js with strong communication and leadership skills and thrives in an Agile/Scrum environment. Key Responsibilities • Lead the design, development, and deployment of end-to-end web applications using Python and React.js. • Drive architecture and technical decisions, ensuring code quality, scalability, and maintainability. • Mentor and support a team of developers, conducting code reviews and promoting best practices. • Collaborate with product managers, UI/UX designers, and data teams to deliver intuitive and data-driven applications. • Utilize Python web frameworks (e.g., Flask, FastAPI, Django) for backend service development. • Develop and maintain frontend components using React.js, TypeScript, and CSS (beginner to intermediate level). • Work with Snowflake and SQL to build data integrations and support analytics workflows. • Lead Agile/Scrum processes, manage sprints in Jira, and conduct daily stand-ups and planning sessions. • Communicate clearly across teams and stakeholders, translating technical goals into business outcomes. Mandatory Qualifications & Skills • 5+ years of professional experience in full-stack development. • Proficiency in Python and modern Python web frameworks (Flask, FastAPI, or Django). • Strong experience with React.js and TypeScript in building user-facing web apps. • Solid understanding of CSS design principles (intermediate level or above). • Experience working with Snowflake and SQL for data handling and integration. • Proven leadership and communication skills, with experience mentoring engineers or leading technical discussions. • Familiarity with Agile/Scrum development processes and tools like Jira. Nice-to-Have Skills • Experience with AWS services (e.g., Lambda, S3, ECS). 
• Familiarity with CI/CD pipelines and deployment automation. • Proficiency with Git and collaborative version control workflows. • Exposure to Docker for containerization and environment consistency. Preferred Experience • 5+ years of full-stack development experience, including recent leadership or tech lead responsibilities. • Demonstrated success in delivering scalable applications in a collaborative, Agile team environment. • Prior experience leading cross-functional teams and driving technical initiatives. If you are keen to join us, you will be part of an organization that values your contributions, recognizes your potential, and provides ample opportunities for growth. For more information, visit www.capco.com. Follow us on Twitter, Facebook, LinkedIn, and YouTube.
Posted 3 days ago
10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Role: Amazon Web Services / AWS Cloud Services Job Location: Chennai/Hyderabad Exp Range: 10 to 12 years. Desired Competencies (Technical/Behavioral Competency): Cloud Infrastructure: AWS services: EC2, S3, VPC, IAM, Lambda, RDS, Route 53, ELB, CloudFront, Auto Scaling. Serverless architecture design using Lambda, API Gateway, and DynamoDB. Containerization: Docker and orchestration with ECS or EKS (Kubernetes). Infrastructure as Code (IaC): Terraform (preferred), AWS CloudFormation. Hands-on experience creating reusable modules and managing cloud resources via code. Automation & CI/CD: Jenkins, GitHub Actions, GitLab CI/CD, AWS CodePipeline. Automating deployments and configuration management. Scripting & Programming: Proficiency in Python, Bash, or PowerShell for automation and tooling. Monitoring & Logging: CloudWatch, CloudTrail, Prometheus, Grafana, ELK stack. Networking: VPC design, Subnets, NAT Gateway, VPN, Direct Connect, Load Balancing, Security Groups, NACLs, and route tables. Security & Compliance: IAM policies and roles, KMS, Secrets Manager, Config, GuardDuty. Implementing encryption, access controls, and least privilege policies.
Posted 3 days ago
4.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Position: Data Platform Engineer Location: Gurgaon (Onsite) Experience: 4+ Years Employment Type: Full-time About the Role We are looking for a Data Platform Engineer with strong expertise in AWS SysOps, Amazon Redshift, Kafka administration, and Terraform. This role will involve managing and optimizing cloud-based data platforms, ensuring seamless data flow, and maintaining high-performance, scalable infrastructure. Key Responsibilities Manage and operate AWS environments with a focus on high availability, security, and performance. Administer and optimize Amazon Redshift clusters for analytics and data warehousing workloads. Perform Kafka cluster administration, including configuration, monitoring, scaling, and troubleshooting. Implement infrastructure as code using Terraform for consistent and repeatable deployments. Monitor system health and proactively resolve operational issues. Collaborate with data engineering and analytics teams to ensure smooth integration and data delivery. Maintain documentation of system configurations, processes, and troubleshooting guides. Required Skills & Qualifications 4+ years of experience in cloud infrastructure and data platform operations. Strong hands-on experience in AWS SysOps administration. Proven expertise in Amazon Redshift administration and performance tuning. Proficient in Kafka administration (setup, scaling, monitoring, and troubleshooting). Skilled in Terraform for infrastructure automation. Familiarity with AWS services like EC2, S3, IAM, CloudWatch, Lambda, etc. Strong problem-solving and analytical skills. Excellent communication and collaboration abilities. Nice-to-Have Skills Experience with data ingestion/ETL pipelines. Familiarity with big data tools (Spark, Glue, EMR). Knowledge of containerization (Docker, Kubernetes). Education Bachelor’s degree in Computer Science, Information Technology, or equivalent practical experience. Work Mode Onsite – Gurgaon Office (Full-time presence required).
Posted 3 days ago
2.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
📣 We are seeking a proactive and detail-oriented Data Scientist to join our team and contribute to the development of intelligent AI-driven production scheduling solutions. This role is ideal for candidates passionate about applying machine learning, optimization techniques, and operational data analysis to enhance decision-making and drive efficiency in manufacturing or process industries. You will play a key role in designing, developing, and deploying smart scheduling algorithms integrated with real-world constraints like machine availability, workforce planning, shift cycles, material flow, and due dates. ✔️ Experience: Minimum 2+ Years 📍 Locations: Noida ✔️ Must Have Required Skills: Minimum 2+ years of experience in data science roles with exposure to: AI/ML pipelines, predictive modelling, optimization techniques, or industrial scheduling. Proficiency in Python, especially with: pandas, numpy, scikit-learn; ortools, pulp, cvxpy or other optimization libraries; matplotlib, plotly for visualization. Solid understanding of: Production planning & control processes (dispatching rules, job-shop scheduling, etc.), Machine Learning fundamentals (regression, classification, clustering). Familiarity with version control (Git), Jupyter/VSCode environments, and CI/CD principles. Preferred (Nice-to-Have) Skills: Time-series analysis, sensor data, or anomaly detection; Manufacturing execution systems (MES), SCADA, PLC logs, or OPC UA data; Simulation tools (SimPy, Arena, FlexSim) or digital twin technologies; Exposure to containerization (Docker) and model deployment (FastAPI, Flask); Understanding of lean manufacturing principles, Theory of Constraints, or Six Sigma. 🔎 Key Responsibilities: 1.
AI-Based Scheduling Algorithm Development Develop and refine scheduling models using: Constraint Programming Mixed Integer Programming (MIP) Metaheuristic Algorithms (e.g., Genetic Algorithm, Ant Colony, Simulated Annealing) Reinforcement Learning or Deep Q-Learning Translate shop floor constraints (machines, manpower, sequence dependencies, changeovers) into mathematical models. 2. Data Exploration & Feature Engineering Analyze structured and semi-structured production data from MES, SCADA, ERP, and other sources. Build pipelines for data preprocessing, normalization, and handling missing values. Perform feature engineering to capture important relationships like setup times, cycle duration, and bottlenecks. 3. Model Validation & Deployment Use statistical metrics and domain KPIs (e.g., throughput, utilization, makespan, WIP) to validate scheduling outcomes. Deploy solutions using APIs, dashboards (Streamlit, Dash), or via integration with existing production systems. Support ongoing maintenance, updates, and performance tuning of deployed models. 4. Collaboration & Stakeholder Engagement Work closely with production managers, planners, and domain experts to understand real world constraints and validate model results. Document solution approaches, model assumptions, and provide technical training to stakeholders. 🎓 Qualifications: Bachelor’s or Master’s degree in: Data Science, Computer Science, Industrial Engineering, Operations Research, Applied Mathematics, or equivalent. 🌐 To know more about us Visit : https://algo8.ai/ 📧 Interested Applicants can share the resume: smita.choudhury@algo8.ai
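The dispatching rules and makespan KPI referenced in this role can be sketched in a few lines of plain Python. The rule choice (shortest processing time, a classic list-scheduling heuristic) and the job data below are illustrative assumptions, not requirements of the position:

```python
# Illustrative sketch: SPT (shortest-processing-time) dispatching on identical
# parallel machines. SPT favors average flow time; makespan is reported as a KPI.
# Job processing times and machine count below are hypothetical.
import heapq

def spt_schedule(processing_times, num_machines):
    """Assign jobs in SPT order to the machine that frees up earliest.

    Returns (makespan, assignment) where assignment maps job index -> machine id.
    """
    # Machines tracked as a min-heap of (time_when_free, machine_id).
    machines = [(0, m) for m in range(num_machines)]
    heapq.heapify(machines)
    assignment = {}
    # SPT rule: dispatch the shortest job first.
    for job, p in sorted(enumerate(processing_times), key=lambda jp: jp[1]):
        free_at, m = heapq.heappop(machines)
        assignment[job] = m
        heapq.heappush(machines, (free_at + p, m))
    makespan = max(t for t, _ in machines)
    return makespan, assignment

makespan, plan = spt_schedule([4, 2, 7, 3, 5], num_machines=2)
print(makespan)  # completion time of the busiest machine: 13
```

Real engagements of the kind described here would express the same constraints in a solver library such as ortools or pulp rather than a hand-rolled heuristic.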
Posted 3 days ago
2.0 years
0 Lacs
Raipur, Chhattisgarh, India
Remote
Position: ML/AI Developer Experience: 2+ years Location: Raipur (Hybrid/On-site/Remote) Overview: We at Magure Softwares are seeking a highly skilled ML/AI Developer who not only excels in building intelligent systems but also owns the complete lifecycle from development to deployment. You’ll be part of a cutting-edge team working on AI-driven solutions, real-time data pipelines, and scalable ML products that make a real business impact. We are looking for someone who is not just comfortable with writing models but is also proficient in production-level deployment, versioning, MLOps, and problem-solving through DSA. Key Responsibilities: 1. Development & Model Engineering: Build, train, and optimize machine learning models for various domains. Use techniques such as regression, classification, NLP, deep learning, and RAG (retrieval augmented generation). Implement data structures and algorithmic approaches to optimize model performance. 2. Deployment & MLOps: Deploy ML models on cloud or containerized environments (Azure, AWS, GCP, Azure Container Apps, etc.). Develop CI/CD pipelines using tools like GitHub Actions, Docker, MLflow, and Kubernetes for automated training and deployment. Manage model versioning, logging, rollback, and monitoring. 3. Tooling & Framework Expertise: Work with ML frameworks and libraries such as PyTorch, TensorFlow, Scikit-learn, HuggingFace, LangChain, OpenAI API, and LLaMA. Use Azure-based tools like Azure Document Intelligence, Azure AI Search, and Azure OpenAI. 4. Data Management: Automate data ingestion, validation (schema + drift checks), and preprocessing using Python and tools like Great Expectations. Handle structured, semi-structured, and unstructured data from sources like MongoDB, SQL, PDFs, etc. 5. Collaboration & Communication: Collaborate with backend engineers, data scientists, and product managers. Maintain clear documentation and contribute to knowledge-sharing sessions. Provide technical mentorship to junior developers.
Required Qualifications: Bachelor’s/Master’s degree in Computer Science, Data Science, AI/ML, or equivalent. 2+ years of experience in building and deploying end-to-end ML solutions. Strong command of Python and ML/DL frameworks (PyTorch, TensorFlow, Sklearn). Hands-on experience with MLOps, CI/CD, containerization, and cloud deployments. Strong understanding of Data Structures & Algorithms (DSA). Familiarity with modern/Traditional AI/LLM applications (e.g., RAG, LLM fine-tuning, chatbot systems). Experience in building microservices or model APIs using FastAPI or Flask. What Will Make You Stand Out: You’ve deployed production-grade ML systems with rollback/version control. You write clean, scalable, and modular code — and understand its lifecycle in the real world. You have experience integrating AI with business tools like email systems, PDF document processing, and real-time analytics. Why Join Magure Softwares? Be part of AI solutions that drive real change. Work in a collaborative, fast-paced, and technically exciting environment. Opportunities to grow as a full-stack ML engineer, not just a data scientist. Competitive compensation, high-impact roles, and a company that values innovation. Apply Now: Send your resume to kirti@magureinc.com
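The schema and drift checks named under Data Management can be illustrated with a minimal, dependency-free Python sketch. A production pipeline would use a tool like Great Expectations; the column names, reference mean, and tolerance here are hypothetical:

```python
# Minimal sketch of batch validation: a schema check (exact column set) and a
# simple mean-drift check against a training-time reference statistic.
# All column names and thresholds are illustrative assumptions.
from statistics import mean

EXPECTED_COLUMNS = {"sensor_id", "temperature", "timestamp"}

def validate_schema(rows):
    """Every row must contain exactly the expected columns."""
    return all(set(row) == EXPECTED_COLUMNS for row in rows)

def mean_drift(values, reference_mean, tolerance=0.1):
    """Flag drift when the batch mean moves more than `tolerance`
    (as a fraction of the reference mean) away from the training mean."""
    if not values:
        return True  # an empty batch is treated as drifted/invalid
    return abs(mean(values) - reference_mean) > tolerance * abs(reference_mean)

batch = [
    {"sensor_id": 1, "temperature": 20.5, "timestamp": "2024-01-01T00:00"},
    {"sensor_id": 2, "temperature": 21.0, "timestamp": "2024-01-01T00:05"},
]
print(validate_schema(batch))                               # True
print(mean_drift([r["temperature"] for r in batch], 20.0))  # False (within 10%)
```

The same two checks generalize to semi-structured sources (MongoDB documents, parsed PDFs) once rows are normalized to dicts.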
Posted 3 days ago
3.0 years
12 - 30 Lacs
Bengaluru, Karnataka, India
On-site
We are a fast-growing IT services and consulting firm specializing in enterprise software development across finance, healthcare, and e-commerce sectors. Our on-site teams in India deliver mission-critical, Java-based applications that drive digital transformation and robust business outcomes for global clients. Role & Responsibilities Design, develop, and maintain scalable Java applications using Spring Boot and Hibernate frameworks. Implement RESTful APIs and integrate third-party services to support front-end applications and mobile clients. Collaborate with cross-functional teams (QA, DevOps, Business Analysts) to define requirements and deliver end-to-end solutions on schedule. Optimize application performance, troubleshoot production issues, and implement fixes and enhancements. Write clean, well-structured, and unit-tested code following industry best practices and coding standards. Participate in code reviews, design discussions, and mentor junior developers to foster continuous improvement. Skills & Qualifications Must-Have 3+ years of professional experience in Java development (Java 8 or higher). Hands-on expertise with Spring Boot and Hibernate/JPA frameworks. Strong knowledge of RESTful API design and implementation. Proficiency in SQL databases (MySQL, PostgreSQL) and writing optimized queries. Experience with version control (Git) and build tools (Maven or Gradle). Solid understanding of object-oriented design principles and software development life cycle (SDLC). Preferred Familiarity with NoSQL databases (MongoDB, Cassandra) and caching solutions (Redis). Exposure to containerization (Docker) and CI/CD pipelines (Jenkins, GitLab CI). Benefits & Culture Highlights Competitive salary with performance-based incentives and regular appraisals. Collaborative, learning-focused environment with access to technical workshops and certifications. On-site engagement in a centralized office hub with modern amenities and team events. 
Skills: algorithm,data structure,software development life cycle (sdlc),java,nosql databases,cassandra,hibernate/jpa,sql databases,docker,gradle,mysql,maven,mongodb,ci/cd pipelines (jenkins, gitlab ci),spring boot,git,object-oriented design principles,postgresql,restful apis,caching solutions (redis)
Posted 3 days ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Need Immediate Joiners Only. 10+ years' experience working in IT Development and/or Operations. 6 years' work experience as a Development, Operations, or Application Support lead. Experience in maintaining multi-tier architecture applications including Java, Spring Modules, Spring Boot, SOA, Microservices, Kafka/RabbitMQ, and Databases. In-depth understanding of ITSM process, implementation, and best practices. 4+ years of experience in DevOps and AWS Cloud. Experience with Docker, Kubernetes, or similar containerization platforms. Working experience of SDLC, Agile, and Integrations. Working knowledge/understanding of Network, Infrastructure, and Operating systems. Experience with Data Engineering/Analytics and application security. Business / Customer: Ensure robust Delivery. Maintain relationships by connecting with vertical and onsite program managers. Participate in regular governance meetings with customers. Own transitioning of new engagements. Drive transformation initiatives. Ensure that customer presentations/visits are successful. Project/Process: Align and implement the practice/organization-defined delivery drivers. Adhere to processes based on organization/client standards, frameworks, and tools. Implement the BIC framework within the Delivery Unit. Ensure that operations parameter targets are met, such as utilization, pyramid/span, job rotation, ELT induction, etc. Ensure timely forecasting is done to meet future resourcing requirements. Participate in organization and practice level initiatives.
Posted 3 days ago
0 years
0 Lacs
Mumbai Metropolitan Region
Remote
Position: Solution Architect Location: Mumbai About LRN: LRN is the world's leading dedicated ethics and compliance SaaS company, helping more than 30 million people every year navigate complex regional and global regulatory environments and build ethical, responsible cultures. With over 3,000 clients across the US, EMEA, APAC, and Latin America—including some of the world's most respected and successful brands—we're proud to be the long-term partner trusted to reduce organizational risk and drive principled performance. Named one of Inc Magazine's 5000 Fastest-Growing Companies, LRN is redefining how organizations turn values into action. Our state-of-the-art platform combines intuitive design, mobile accessibility, robust analytics, and industry benchmarking—enabling organizations to create, manage, deliver, and audit ethics and compliance programs with confidence. Backed by a unique blend of technology, education, and expert advisement, LRN helps companies turn their values into real-world behaviors and leadership practices that deliver lasting competitive advantage. About the role: LRN is seeking a seasoned Solution Architect with deep expertise in Java to lead the design and development of enterprise-grade applications. You will be instrumental in shaping our software architecture, driving best practices in Java-based development, and delivering scalable, secure, and high-performing solutions across multiple domains. Working closely with engineering, product, and QA teams, you'll architect backend systems, build RESTful APIs, and guide the integration of modern technologies in a distributed Agile environment. This role demands strong knowledge of service-oriented architecture, object-oriented programming, and cloud platforms like AWS. Experience with front-end frameworks (Angular), databases (PostgreSQL, MongoDB), and AI tools is a plus Requirements What you'll do: Design end-to-end architecture for enterprise web applications. 
Define technical strategy and ensure alignment with business objectives. Collaborate with stakeholders to convert business requirements into architectural blueprints. Select and recommend Java frameworks, JavaScript frameworks, tools, and libraries that ensure scalability, performance, and maintainability. Ensure non-functional requirements such as performance, security, availability, and scalability are addressed. Review technical design documents, API contracts, and deployment architectures. Guide and mentor development teams in architectural best practices, coding standards, and design patterns. Evaluate and integrate third-party services, tools, and platforms. Ensure compliance with security and regulatory standards. Collaborate with DevOps teams to enable CI/CD pipelines, automated testing, and deployment strategies. Conduct code reviews and architecture reviews to maintain code quality and reduce technical debt. What we're looking for: Bachelor's or Master's degree in Computer Science, Engineering, or related field. TOGAF 9 Certification (or equivalent Enterprise Architecture certification). Proven experience designing and building large-scale web applications. Expert-level knowledge of Java (Java 21+), Spring Framework / Spring Boot, JPA/Hibernate, REST/GraphQL APIs, NodeJS and Angular Framework. Strong database expertise: Relational (PostgreSQL, MySQL) and NoSQL (MongoDB, Redis). Proficiency in microservices architecture, containerization (Docker), and cloud platforms (AWS, Azure, or GCP). Experience with messaging systems (Kafka, RabbitMQ, ActiveMQ). Solid understanding of application security and OWASP Top 10 principles. Experience in performance optimization, caching strategies, and load testing. Familiarity with build tools (Maven, Gradle) and version control systems (Git). Excellent problem-solving and communication skills. 
Benefits Excellent medical benefits, including family plan Paid Time Off (PTO) plus India public holidays Competitive salary Combined Onsite and Remote Work LRN is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
Posted 3 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Role: We are seeking a highly skilled Senior Backend Java Developer with 4 or more years of hands-on experience in designing, developing, and maintaining robust backend systems. The ideal candidate will have strong expertise in Java, Spring, SQL, and unit testing frameworks such as JUnit or Mockito. Familiarity with AI-assisted development tools (like Claude Code, Cursor, GitHub Copilot) is essential. Experience with DevOps practices and cloud platforms like AWS is a plus. Responsibilities: Design, build, and maintain scalable and secure backend services using Java and the Spring framework. Develop and execute unit and integration tests using JUnit, Mockito, or equivalent frameworks. Collaborate with frontend engineers, DevOps, and cross-functional teams to deliver complete and reliable features. Write and optimize SQL queries and manage relational databases to ensure high-performance data operations. Leverage AI-assisted coding tools (e.g., Claude, Cursor, GitHub Copilot) to boost productivity and maintain code quality. Participate in code reviews, ensure adherence to best practices, and mentor junior developers as needed. Troubleshoot, diagnose, and resolve complex issues in production and staging environments. Contribute to technical documentation, architecture discussions, and Agile development processes (e.g., sprint planning, retrospectives). Qualifications: Strong proficiency in Java and object-oriented programming concepts. Hands-on experience with Spring / Spring Boot for building RESTful APIs and backend services. Proficiency in testing frameworks such as JUnit, Mockito, or equivalent. Solid experience in writing and optimizing SQL queries for relational databases (e.g., PostgreSQL, MySQL). Experience using AI-assisted coding tools (e.g., Claude Code, Cursor, GitHub Copilot) in a production environment. Understanding of DevOps tools and practices (CI/CD, Docker, etc.) 
Experience with AWS services (e.g., EC2, RDS, S3, Lambda) Exposure to containerization and cloud-native development *Hybrid working for Mumbai or Pune.
Posted 3 days ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Description We are hiring Java Developer with Cloud at Noida location. Key Responsibilities Design and implement secure full stack applications using Java and Spring Boot Build and maintain REST APIs with a focus on performance and security Contribute to container-based deployments using Docker and Kubernetes Participate in CI/CD processes and Agile sprint ceremonies Collaborate with teams to improve system architecture and security posture Troubleshoot and resolve development, deployment, and runtime issues TECH STACK YOU’LL WORK WITH Needed (Strong Hands-on Experience Expected) Core Development: Java, Spring Boot, RESTful APIs Build Tools: Maven Security Concepts: Secure coding practices, understanding of authentication and authorization Version Control: Git DevOps: Docker (for containerizing microservices) IDEAL CANDIDATE PROFILE 3–5 years of experience in Java development Architecture & Design: Object-Oriented Design, common design patterns Exposure to cloud platforms (Azure/AWS) and containerization (Docker) Azure: VMs, Functions, AKS, RBAC AWS: EC2, Lambda, IAM, S3 Container Orchestration: Kubernetes (AKS/EKS) Basic understanding of fintech domain or secure systems Willingness to grow into areas like payments, EMV, and tokenization Eager to collaborate, learn, and contribute in a fast-paced environment DOMAIN KNOWLEDGE (Awareness is Sufficient) Card Tokenization Payment Authorization EMV Specification & APDU Format ISO 20022 payment messaging Various digital payment methods (NFC, wallets, cards)
Posted 3 days ago
2.0 years
0 Lacs
Greater Rajkot Area
On-site
Job Summary: We are seeking a talented and experienced Senior Odoo Developer to join our dynamic team. The ideal candidate will have a solid understanding of Odoo development, customization, and integration, with at least 2+ years of relevant experience. You will play a key role in designing, implementing, and maintaining Odoo solutions tailored to our business needs, ensuring a seamless workflow and user experience. Experience: 2+ Years Location: Onsite, Rajkot, Gujarat Key Responsibilities ● Develop and customize Odoo modules to meet specific business requirements. ● Perform end-to-end development tasks, including coding, debugging, testing, and deployment. ● Integrate Odoo with third-party applications and APIs. ● Troubleshoot and resolve system issues, ensuring optimal performance and reliability. ● Collaborate with cross-functional teams to gather requirements and provide technical solutions. ● Maintain and upgrade Odoo versions, ensuring compatibility and functionality. ● Write and maintain clear, concise technical documentation. ● Mentor junior developers and contribute to knowledge sharing within the team. Qualifications ● Bachelor’s degree in Computer Science, Software Engineering, or a related field. ● Minimum of 1+ year of professional experience in Odoo development. ● Proficiency in Python programming and a strong understanding of the Odoo framework. ● Experience with PostgreSQL and database management. ● Familiarity with front-end technologies such as HTML, CSS, JavaScript, and jQuery. ● Strong understanding of object-oriented programming (OOP) principles. ● Excellent problem-solving and debugging skills. Preferred Qualifications ● Experience with Odoo.sh or other cloud-hosted Odoo environments. ● Knowledge of Docker and containerization technologies. ● Understanding of ERP workflows across different domains (e.g., accounting, inventory, CRM). ● Experience with version control systems like Git. ● Strong communication and collaboration skills.
Posted 3 days ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
Experience
● Minimum 5 years of coding experience in ReactJS (TypeScript), HTML, and CSS pre-processors or CSS-in-JS, building high-performance, responsive enterprise web applications.
● Minimum 5 years of coding experience in NodeJS, JavaScript, TypeScript, and NoSQL databases.
● Developing and implementing highly responsive user interface components using React concepts (self-contained, reusable, and testable modules and components).
● Architecting and automating the build process for production using task runners or scripts.
● Knowledge of data structures for TypeScript.
● Monitoring and improving front-end performance.
● Banking or retail domain knowledge is good to have.
● Hands-on experience in performance tuning, debugging, and monitoring.

Technical Skills
● Excellent knowledge of developing scalable and highly available RESTful APIs using NodeJS technologies.
● Well versed in CI/CD principles and actively involved in troubleshooting issues in a distributed services ecosystem.
● Understanding of containerization; experienced with Docker and Kubernetes. Exposure to API gateway integrations such as 3Scale.
● Understanding of single sign-on and token-based authentication (REST, JWT, OAuth).
● Expert knowledge of task/message queues, including but not limited to AWS, Microsoft Azure, Pushpin, and Kafka.
● Practical experience with GraphQL is good to have.
● Writing tested, idiomatic, and documented JavaScript, HTML, and CSS.
● Experience in developing responsive web-based UIs.
● Experience with Styled Components, Tailwind CSS, Material UI, and other CSS-in-JS techniques.
● Thorough understanding of the responsibilities of the platform, database, API, caching layer, proxies, and other web services used in the system.
● Writing non-blocking code, and resorting to advanced techniques such as multi-threading when needed.
● Strong proficiency in JavaScript, including DOM manipulation and the JavaScript object model.
● Documenting code inline using JSDoc or other conventions.
● Thorough understanding of React.js and its core principles.
● Familiarity with modern front-end build pipelines and tools.
● Experience with popular React.js workflows (such as Flux, Redux, or Context API) and data structures.
● A knack for benchmarking and optimization.
● Proficient with the latest versions of ECMAScript (JavaScript or TypeScript).
● Knowledge of React and common tools used in the wider React ecosystem, such as npm, yarn, etc.
● Familiarity with common programming tools such as RESTful APIs, TypeScript, version control software, remote deployment tools, and CI/CD tools.
● An understanding of common programming paradigms and fundamental React principles, such as React components, hooks, and the React lifecycle.
● Unit testing using Jest, Enzyme, Jasmine, or an equivalent framework.
● Understanding of linter libraries (TSLint, Prettier, etc.).

Technical Skills
● Excellent knowledge of developing and testing scalable and highly available RESTful APIs / microservices using JavaScript technologies.
● Able to create end-to-end automation test suites using Playwright or Selenium, preferably using a BDD approach.
● Practical experience with GraphQL.
● Well versed in CI/CD principles and actively involved in troubleshooting issues in a distributed services ecosystem.
● Understanding of containerization; experienced with Docker and Kubernetes. Exposure to API gateway integrations such as 3Scale.
● Understanding of single sign-on and token-based authentication (REST, JWT, OAuth).
● Expert knowledge of task/message queues, including but not limited to AWS, Microsoft Azure, Pushpin, and Kafka.

Functional Skills
● Experience following best practices for coding, testing, security, unit testing, and documentation.
● Experience with Agile methodology.
● Effectively researches and benchmarks technology against other best-in-class technologies.

Soft Skills
● Able to influence multiple teams on technical considerations, increasing their productivity and effectiveness by sharing deep knowledge and experience.
● Self-motivated self-starter; able to own and drive things without supervision and to work collaboratively with teams across the organization.
● Excellent soft skills and interpersonal skills to interact with, and present ideas to, senior and executive management.
Posted 3 days ago
8.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Description
We are hiring for a Senior QA Engineer role at our Noida location.

Key Responsibilities
● Lead the design and implementation of scalable and maintainable test automation frameworks using Java, Cucumber, and Serenity.
● Review and optimize API test suites (functional, security, load) using REST Assured, Postman, and Gatling.
● Architect CI/CD-ready testing workflows within Jenkins pipelines, integrated with Docker, Kubernetes, and cloud deployments (Azure/AWS).
● Define QA strategies and environment setups using Helm, Kustomize, and Kubernetes manifests.
● Validate digital payment journeys (tokenization, authorization, fallback) against EMV, APDU, and ISO 20022 specs.
● Drive technical discussions with cross-functional Dev/DevOps/R&D teams.
● Mentor junior QAs, conduct code/test reviews, and enforce test coverage and quality standards.

IDEAL CANDIDATE PROFILE
● 4–8 years of hands-on experience in test automation and DevOps.
● Deep understanding of design patterns, OOP principles, and scalable system design.
● Experience working in cloud-native environments (Azure & AWS).
● Knowledge of APDU formats, EMV specs, ISO 20022, and tokenization flows is a strong plus.
● Exposure to secure payment authorization protocols and transaction validations.

TECH STACK YOU'LL WORK WITH
● Languages & Frameworks: Java, JUnit/TestNG, Serenity, Cucumber, REST Assured
● Cloud Platforms: Azure (VMs, Functions, AKS), AWS (Lambda, EC2, S3, IAM)
● DevOps/Containerization: Jenkins, Docker, Kubernetes (AKS/EKS), Helm, Kustomize, Maven
● API & Performance Testing: Postman, Gatling
● Proficiency in test environment provisioning and pipeline scripting

Domain Knowledge Required
● Deep understanding of card tokenization, EMV standards, and APDU formats
● Experience with payment authorization flows across methods (credit, debit, wallets, NFC)
● Familiarity with ISO 20022 and other financial messaging standards
Posted 3 days ago
7.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Designation: Associate Architect
Experience: 7 to 12 years
Job Location: Ahmedabad, Gujarat (work from the office)

How you should be?
We are looking for a talented and experienced Architect to join our team. The ideal candidate will have 7+ years of experience.

What you will do?
• Collaborate with senior architects and project stakeholders to understand project requirements and objectives.
• Assist in the design and development of architectural solutions that meet the needs of our clients.
• Create architectural diagrams, models, and documentation to communicate design concepts and decisions.
• Research emerging technologies and trends in architecture and recommend best practices.
• Participate in architectural reviews and provide constructive feedback to team members.
• Assist in the evaluation and selection of technology platforms, frameworks, and tools.
• Work closely with development teams to ensure architectural alignment and adherence to design principles.
• Support the implementation and deployment of architectural solutions, including troubleshooting and issue resolution.
• Provide technical guidance and mentorship to junior team members.
• Stay up to date with industry standards and regulations related to architecture and security.

What we are looking for?
• A degree in Computer Science, Engineering, or a related field.
• Must have proven experience with .NET Core and Angular.
• Strong understanding of software architecture principles and design patterns.
• Proficiency in architectural modelling tools such as Enterprise Architect, ArchiMate, or similar.
• Excellent communication and collaboration skills.
• Ability to work effectively both in a team environment and independently.
• Strong analytical and problem-solving skills.
• Familiarity with Agile methodologies.
• Knowledge of client-side frameworks, Node, SQL, C#, and Web API.
• Experience with cloud computing platforms such as AWS, Azure, or Google Cloud Platform is a plus.
• Knowledge of enterprise integration patterns and technologies.
• Experience with microservices architecture and containerization technologies (e.g., Docker, Kubernetes).
• Familiarity with architectural governance frameworks and processes.
• Experience working on large-scale and complex projects.
• Certifications such as TOGAF or a cloud architect-level certification are preferred.
Posted 3 days ago
2.0 - 4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Role Summary
We are seeking a skilled Data Engineer to join our team. The candidate will be responsible for designing, building, and maintaining the robust data infrastructure that powers PocketFM's recommendation systems, analytics, and business intelligence capabilities. This role offers an exciting opportunity to work with large-scale data systems that directly impact millions of users' audio entertainment experience.

Key Responsibilities

Data Infrastructure & Pipeline Development
● Design, develop, and maintain scalable ETL/ELT pipelines to process large volumes of user interaction data, content metadata, and streaming analytics
● Build and optimize data warehouses and data lakes to support both real-time and batch processing requirements
● Implement data quality monitoring and validation frameworks to ensure data accuracy and reliability
● Develop automated data ingestion systems from various sources, including mobile apps, web platforms, and third-party integrations

Analytics & Reporting Infrastructure
● Create and maintain data models that support business intelligence, user analytics, and content performance metrics
● Build self-service analytics platforms enabling stakeholders to access insights independently
● Implement real-time dashboards and alerting systems for key business metrics
● Support A/B testing frameworks and experimental data analysis requirements

Data Architecture & Optimization
● Collaborate with software engineers to optimize database performance and query efficiency
● Design data storage solutions that balance cost, performance, and accessibility requirements
● Implement data governance practices, including data cataloging, lineage tracking, and access controls
● Ensure GDPR and data privacy compliance across all data systems

Collaboration & Support
● Work closely with data scientists, product managers, and analysts to understand data requirements
● Participate in code reviews and maintain high standards of code quality and documentation
● Mentor junior team members and contribute to knowledge-sharing initiatives

Required Qualifications

Technical Skills
● Programming Languages: Proficiency in Python, SQL, and at least one of Java, Scala, or Go
● Big Data Technologies: Hands-on experience with Apache Spark, Kafka, Airflow, and distributed computing frameworks
● Cloud Platforms: Strong experience with AWS, GCP, or Azure data services (S3, BigQuery, Redshift, etc.)
● Database Systems: Expertise in both SQL (PostgreSQL, MySQL) and NoSQL (MongoDB, Cassandra, Redis) databases
● Data Warehousing: Experience with modern data warehouse solutions like Snowflake, BigQuery, or Databricks
● Containerization: Proficiency with Docker and Kubernetes for deploying data applications

Experience Requirements
● 2–4 years of experience in data engineering or related roles
● Proven track record of building and maintaining production data pipelines at scale
● Experience with streaming data processing and real-time analytics systems
● Strong understanding of data modeling, schema design, and data architecture principles
● Experience with version control systems (Git) and CI/CD pipelines

Preferred Qualifications (Good to Have)

Machine Learning & Model Operations
● Model Deployment: Experience deploying machine learning models to production environments using frameworks like MLflow, Kubeflow, or SageMaker
● MLOps Practices: Familiarity with ML pipeline automation, model versioning, and continuous integration for machine learning

Advanced Technical Skills
● Experience with vector databases, graph databases, and knowledge graphs
● Understanding of data mesh architecture and domain-driven data design
● Experience with data privacy and security implementations
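The data quality validation responsibility above can be sketched in plain Python: validate each interaction record against a schema and route failures to a dead-letter batch instead of the warehouse. Field names and event types here are hypothetical, not from PocketFM's actual schema.

```python
from datetime import datetime, timezone

# Hypothetical schema for a user-interaction event
REQUIRED_FIELDS = {"user_id", "content_id", "event_type", "ts"}
VALID_EVENTS = {"play", "pause", "complete", "skip"}


def validate_event(record: dict) -> list[str]:
    """Return the list of data-quality violations for one record."""
    errors = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
    if record.get("event_type") not in VALID_EVENTS:
        errors.append(f"unknown event_type: {record.get('event_type')!r}")
    ts = record.get("ts")
    if ts is not None and ts > datetime.now(timezone.utc).timestamp():
        errors.append("timestamp in the future")
    return errors


def split_valid(records: list[dict]) -> tuple[list[dict], list[dict]]:
    """Route records into a clean batch and a dead-letter batch."""
    valid, rejected = [], []
    for rec in records:
        (rejected if validate_event(rec) else valid).append(rec)
    return valid, rejected
```

In a production pipeline the same checks would typically run as an Airflow task or Spark job, with the rejected batch landing in a quarantine table for inspection.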
Posted 3 days ago
3.0 years
0 Lacs
Delhi, India
On-site
Designation: ML / MLOps Engineer
Location: Noida (Sector 132)

Key Responsibilities
• Model Development & Algorithm Optimization: Design, implement, and optimize ML models and algorithms using libraries and frameworks such as TensorFlow, PyTorch, and scikit-learn to solve complex business problems.
• Training & Evaluation: Train and evaluate models using historical data, ensuring accuracy, scalability, and efficiency while fine-tuning hyperparameters.
• Data Preprocessing & Cleaning: Clean, preprocess, and transform raw data into a suitable format for model training and evaluation, applying industry best practices to ensure data quality.
• Feature Engineering: Conduct feature engineering to extract meaningful features from data that enhance model performance and improve predictive capabilities.
• Model Deployment & Pipelines: Build end-to-end pipelines and workflows for deploying machine learning models into production environments, leveraging Azure Machine Learning and containerization technologies like Docker and Kubernetes.
• Production Deployment: Develop and deploy machine learning models to production environments, ensuring scalability and reliability using tools such as Azure Kubernetes Service (AKS).
• End-to-End ML Lifecycle Automation: Automate the end-to-end machine learning lifecycle, including data ingestion, model training, deployment, and monitoring, ensuring seamless operations and faster model iteration.
• Performance Optimization: Monitor and improve inference speed and latency to meet real-time processing requirements, ensuring efficient and scalable solutions.
• NLP, CV, GenAI Programming: Work on machine learning projects involving Natural Language Processing (NLP), Computer Vision (CV), and Generative AI (GenAI), applying state-of-the-art techniques and frameworks to improve model performance.
• Collaboration & CI/CD Integration: Collaborate with data scientists and engineers to integrate ML models into production workflows, building and maintaining continuous integration/continuous deployment (CI/CD) pipelines using tools like Azure DevOps, Git, and Jenkins.
• Monitoring & Optimization: Continuously monitor the performance of deployed models, adjusting parameters and optimizing algorithms to improve accuracy and efficiency.
• Security & Compliance: Ensure all machine learning models and processes adhere to industry security standards and compliance protocols, such as GDPR and HIPAA.
• Documentation & Reporting: Document machine learning processes, models, and results to ensure reproducibility and effective communication with stakeholders.

Required Qualifications
• Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field.
• 3+ years of experience in machine learning operations (MLOps), cloud engineering, or similar roles.
• Proficiency in Python, with hands-on experience using libraries such as TensorFlow, PyTorch, scikit-learn, Pandas, and NumPy.
• Strong experience with Azure Machine Learning services, including Azure ML Studio, Azure Databricks, and Azure Kubernetes Service (AKS).
• Knowledge and experience in building end-to-end ML pipelines, deploying models, and automating the machine learning lifecycle.
• Expertise in Docker, Kubernetes, and container orchestration for deploying machine learning models at scale.
• Experience with data engineering practices and familiarity with cloud storage solutions like Azure Blob Storage and Azure Data Lake.
• Strong understanding of NLP, CV, or GenAI programming, along with the ability to apply these techniques to real-world business problems.
• Experience with Git, Azure DevOps, or similar tools to manage version control and CI/CD pipelines.
• Solid experience in machine learning algorithms, model training, evaluation, and hyperparameter tuning.
Posted 3 days ago