0 years
0 Lacs
Andhra Pradesh
On-site
Design, build, and lead scalable fullstack applications. Architect APIs and microservices with a focus on performance and maintainability. Mentor junior developers and enforce coding best practices. Collaborate with product and architecture teams to align with business goals. Guide technology adoption and lead critical code reviews. Skills: Java 11, Spring Boot, Angular/React, REST APIs, Docker, Kubernetes, Microservices. About Virtusa Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities, and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status, or any other basis covered by applicable law. All employment is decided on the basis of qualifications, merit, and business need.
Posted 23 hours ago
5.0 years
0 Lacs
Andhra Pradesh
On-site
Data Engineer: must have 5+ years of experience in the skills mentioned below. Must have: Big Data concepts, Python (core Python; able to write code), SQL, shell scripting, AWS S3. Good to have: event-driven architecture / AWS SQS, microservices, API development, Kafka, Kubernetes, Argo, Amazon Redshift, Amazon Aurora.
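As a rough illustration of the core-Python-plus-SQL skill set this posting lists (the schema, data, and function name below are invented for the example, not part of the role), here is a small aggregation done through SQLite:

```python
# Illustrative only: a small core-Python + SQL exercise of the kind this
# posting screens for. The table schema and row data are made up.
import sqlite3

def top_event_types(rows, limit=2):
    """Load (user_id, event_type) rows and return the most frequent event types."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE events (user_id TEXT, event_type TEXT)")
    con.executemany("INSERT INTO events VALUES (?, ?)", rows)
    cur = con.execute(
        "SELECT event_type, COUNT(*) AS n FROM events "
        "GROUP BY event_type ORDER BY n DESC, event_type LIMIT ?",
        (limit,),
    )
    result = cur.fetchall()
    con.close()
    return result

rows = [("u1", "click"), ("u2", "click"), ("u1", "view"), ("u3", "click")]
print(top_event_types(rows))  # [('click', 3), ('view', 1)]
```

The same grouping pattern scales up naturally to warehouse SQL on Redshift or Aurora; only the driver and connection details change.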
Posted 23 hours ago
4.0 years
16 - 20 Lacs
Greater Hyderabad Area
Remote
Experience: 4.00+ years. Salary: INR 1600000-2000000 / year (based on experience). Expected Notice Period: 7 days. Shift: (GMT+05:30) Asia/Kolkata (IST). Opportunity Type: Remote. Placement Type: Full-time permanent position (payroll and compliance to be managed by NorthLadder). (*Note: This is a requirement for one of Uplers' clients - a Series B funded innovative device trade-in company based in the Netherlands.) What do you need for this opportunity? Must-have skills: Cloud Infrastructure, Unit Testing, Microservices, Node.js, AWS, MongoDB, TypeScript. About NorthLadder: NorthLadder, headquartered in Dubai, is the region’s fastest-growing digital platform enabling frictionless pre-owned electronics trade. Most of us know what it feels like to sell a pre-owned device - a smartphone, a laptop, or a tablet. The pre-owned market is opaque, and finding a fair price for your asset is daunting. Even if you get a fair price, meeting the buyer, negotiating the price, shipping the asset, and waiting for payment can be exhausting. And then there is the worry of what happens to the data on your device. This is why NorthLadder came to be. We are the region’s only auction-driven selling platform for pre-owned electronic devices. With our thoughtfully created service, people can sell their devices to a network of global buyers and get cash instantly, safely, with dignity, and hassle-free. About the role: As an ideal candidate, you must be a problem solver with solid experience and knowledge of Node.js and TypeScript. You’ll be the brains behind designing, developing, testing, shipping, and maintaining the system. You must be passionate about understanding the business context of the features you build, to drive better customer experience and adoption. Our tech stack: Node.js, TypeScript, MongoDB, AWS, AWS SQS, Microservices, and Kubernetes. Requirements: 1. At least 4 years of experience with Node.js and TypeScript 2. In-depth knowledge of microservices architecture and unit testing 3. A deep understanding of the Node.js Event Loop 4. Expertise in document-oriented databases, especially MongoDB 5. Experience in designing, building, and scaling back-end systems on cloud infrastructure 6. Strong commitment to improving product experience and user satisfaction. Responsibilities: 1. Consistently write high-quality, efficient code 2. Develop and maintain a comprehensive suite of automated tests, including unit, integration, E2E, and functional tests 3. Perform code reviews and ensure adherence to design patterns and the organization's coding standards 4. Mentor junior developers, contributing to their technical growth 5. Collaborate with product and design teams to build user-focused solutions 6. Identify, prioritize, and execute tasks in the software development life cycle 7. Develop tools and applications by producing clean, efficient code 8. Troubleshoot, debug, and upgrade existing software 9. Recommend and execute improvements 10. Collaborate with multidisciplinary teams to understand requirements and develop new solutions. Location: Work from home. Budget: 16-20 LPA. How to apply for this opportunity? Step 1: Click on Apply and register or log in on our portal. Step 2: Complete the screening form and upload an updated resume. Step 3: Increase your chances of getting shortlisted and meet the client for the interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talent find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for those as well.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
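Requirement 3 of the posting above asks for a deep understanding of the Node.js event loop. The key scheduling rule (synchronous code runs to completion before queued callbacks get a turn) is language-neutral; as a minimal analogue, here is the same ordering demonstrated with Python's asyncio loop rather than Node itself:

```python
# Analogy only: Python's asyncio loop standing in for Node's event loop.
# A callback scheduled with call_soon cannot run until the current
# synchronous code yields control back to the loop.
import asyncio

order = []

async def main():
    asyncio.get_running_loop().call_soon(lambda: order.append("callback"))
    order.append("sync")       # synchronous code runs first, uninterrupted
    await asyncio.sleep(0)     # yield to the loop; pending callbacks now run
    order.append("after-await")

asyncio.run(main())
print(order)  # ['sync', 'callback', 'after-await']
```

In Node the equivalent experiment uses `setImmediate`/`process.nextTick`; interview questions on the event loop usually probe exactly this run-to-completion behavior.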
Posted 23 hours ago
12.0 - 18.0 years
5 - 8 Lacs
Indore
On-site
Indore, Madhya Pradesh, India. Qualification: We are seeking a seasoned Program Manager to lead strategic initiatives across enterprise-grade Java/J2EE applications deployed on modern hybrid cloud platforms. The ideal candidate will have a strong technical foundation, proven leadership in managing cross-functional teams, and solid hands-on experience with Kubernetes, cloud-native architectures, microservice architecture, and Agile delivery models. Skills Required: Program Manager, Microservice-Based Architecture, Agile Model. Role: Key Responsibilities: Lead end-to-end program delivery for enterprise applications built on the Java/J2EE stack and microservice architecture. Manage multiple project streams across development, testing, deployment, and support. Collaborate with engineering, DevOps, QA, and business stakeholders to ensure alignment and timely delivery. Drive cloud migration, new development, and modernization efforts using platforms like AWS, Azure, GCP, or private cloud. Oversee container orchestration and microservices deployment using Kubernetes. Establish and monitor KPIs, SLAs, and program health metrics. Manage risks, dependencies, and change control processes. Ensure compliance with security, governance, and regulatory standards. Facilitate Agile ceremonies and promote continuous improvement. Required Qualifications: Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field. 12 to 18 years of experience in IT, with at least 5 years in program/project management. A strong grounding in Java/J2EE enterprise application development and microservice architecture. Solid experience with at least one cloud platform (AWS, Azure, GCP, or private cloud). Proficiency in Kubernetes, Docker, and containerized deployments. Familiarity with CI/CD pipelines, DevOps practices, and infrastructure as code. Excellent communication, stakeholder management, and leadership skills. PMP, PMI-ACP, or SAFe certification is a plus.
Experience: 12 to 18 years. Job Reference Number: 13223
Posted 23 hours ago
16.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We are looking for a hands-on DevOps SME/Architect with 14–16 years of experience to lead and enhance our DevOps processes for a 24–30 member team. The ideal candidate will identify gaps in current methodologies, build a robust backlog, set standards for monitoring and automation, and provide hands-on support to resolve complex issues. This role requires deep expertise in CI/CD pipelines, containerization, and modern DevOps tools, along with strong leadership to guide the team toward operational excellence. Key Responsibilities: • Gap Analysis & Strategy: Assess current DevOps practices, identify inefficiencies, and propose actionable improvements to optimize workflows. • Backlog Development: Build and prioritize a comprehensive DevOps backlog to address gaps, enhance automation, and improve system reliability. • Hands-On Leadership: Actively participate in troubleshooting, debugging, and resolving critical issues, providing hands-on support when required. • CI/CD Pipeline Management: Design, implement, and optimize CI/CD pipelines using Jenkins and other tools to ensure seamless software delivery. • Containerization & Orchestration: Lead the adoption and management of containerized environments using Docker and Kubernetes, ensuring scalability and reliability. • Automation & Monitoring Standards: Establish best-in-class standards for monitoring, logging, and automation to enhance system performance and uptime. • Scripting & Development: Develop and maintain scripts in Groovy, Python, or other relevant languages to automate processes and improve efficiency. • Team Collaboration: Mentor and guide a team of 24–30 DevOps engineers, fostering a culture of continuous improvement and collaboration. • Stakeholder Engagement: Work closely with development, QA, and operations teams to align DevOps strategies with business objectives. 
Required Skills and Qualifications: • Experience: 14–16 years of hands-on DevOps experience, with a proven track record as a DevOps SME or Architect. Technical Expertise: • Deep knowledge of Jenkins for CI/CD pipeline development and management. • Proficiency in scripting languages such as Groovy and Python. • Extensive experience with containerization (Docker) and orchestration (Kubernetes). • Strong understanding of monitoring and automation tools to set enterprise-grade standards. • Leadership: Ability to lead and mentor a large team, drive process improvements, and build a prioritized backlog. • Problem-Solving: Strong analytical skills to identify gaps, troubleshoot complex issues, and implement effective solutions. • Hands-On Approach: Willingness to dive into technical challenges and provide hands-on support when needed. • Communication: Excellent verbal and written communication skills to collaborate with cross-functional teams and stakeholders. • Location: Must be based in or willing to relocate to Hyderabad, India.
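The role above calls for scripting in Groovy or Python to automate processes. As a hedged illustration (the function and the flaky "health check" are invented for the example, not from any specific toolchain), a retry-with-backoff helper of the sort such automation scripts often need:

```python
# Sketch of a common DevOps automation building block: retrying a flaky
# step (a deploy, a health check) with exponential backoff. Names invented.
import time

def retry(step, attempts=3, base_delay=0.01):
    """Run `step` until it succeeds or `attempts` are exhausted."""
    for attempt in range(1, attempts + 1):
        try:
            return step()
        except Exception:
            if attempt == attempts:
                raise                     # out of attempts: surface the error
            time.sleep(base_delay * 2 ** (attempt - 1))  # exponential backoff

calls = {"n": 0}
def flaky_health_check():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("service not ready")
    return "healthy"

print(retry(flaky_health_check))  # healthy
```

The same pattern is what Jenkins pipeline `retry {}` blocks provide declaratively; writing it by hand is useful for glue scripts outside the CI system.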
Posted 23 hours ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Introduction: About Quranium In a world where rapid innovation demands uncompromising security, Quranium stands as the uncrackable foundation of the digital future. With its quantum-proof hybrid DLT infrastructure, Quranium is redefining what's possible, ensuring data safety and resilience against current and future threats, today. No other blockchain can promise this level of protection and continuous evolution. Quranium is more than a technology—it's a movement. Empowering developers and enterprises to build with confidence, it bridges the gaps between Web2 and Web3, making digital adoption seamless, accessible, and secure for all. As the digital superhighway for a better future, Quranium is setting the standard for progress in an ever-evolving landscape. Role Overview We are hiring a DevOps Engineer to architect and maintain the infrastructure supporting our blockchain nodes and Web3 applications. The ideal candidate has deep experience working with GCP, Azure, AWS, and modern hosting platforms like Vercel, and is capable of deploying, monitoring, and scaling blockchain-based systems with a security-first mindset. 
Key Responsibilities Blockchain Infrastructure Deploy, configure, and maintain core blockchain infrastructure such as full nodes, validator nodes, and indexers (e.g., Ethereum, Solana, Bitcoin) Monitor node uptime, sync health, disk usage, and networking performance Set up scalable RPC endpoints and archive nodes for dApps and internal use Automate blockchain client upgrades and manage multi-region redundancy Web3 Application DevOps Manage the deployment and hosting of Web3 frontends, smart contract APIs, and supporting services Create and maintain CI/CD pipelines for frontend apps, smart contracts, and backend services Integrate deployment workflows with Vercel, GCP Cloud Run, AWS Lambda, or Azure App Services Securely handle smart contract deployment keys and environment configurations Cloud Infrastructure Design and manage infrastructure across AWS, GCP, and Azure based on performance, cost, and scalability considerations Use infrastructure-as-code (e.g., Terraform, Pulumi, CDK) to manage provisioning and automation Implement cloud-native observability solutions: logging, tracing, metrics, and alerts Ensure high availability and disaster recovery for critical blockchain and app services Security, Automation, and Compliance Implement DevSecOps best practices across cloud, containers, and CI/CD Set up secrets management and credential rotation workflows Automate backup, restoration, and failover for all critical systems Ensure infrastructure meets required security and compliance standards Preferred Skills And Experience Experience running validators or RPC services for Proof-of-Stake networks (Ethereum 2.0, Solana, Avalanche, etc.) 
Familiarity with decentralized storage systems like IPFS, Filecoin, or Arweave Understanding of indexing protocols such as The Graph or custom off-chain data fetchers Hands-on experience with Docker, Kubernetes, Helm, or similar container orchestration tools Working knowledge of EVM-compatible toolkits like Foundry, Hardhat, or Truffle Experience with secrets management (Vault, AWS SSM, GCP Secret Manager) Previous exposure to Web3 infrastructure providers (e.g., Alchemy, Infura, QuickNode) Tools and Technologies Cloud Providers: AWS, GCP, Azure, Vercel DevOps Stack: Docker, Kubernetes, Terraform, GitHub Actions, CircleCI Monitoring: Prometheus, Grafana, CloudWatch, Datadog Blockchain Clients: Geth, Nethermind, Solana, Erigon, Bitcoin Core Web3 APIs: Alchemy, Infura, Chainlink, custom RPC providers Smart Contracts: Solidity, EVM, Hardhat, Foundry Requirements 3+ years in DevOps or Site Reliability Engineering Experience with deploying and maintaining Web3 infrastructure or smart contract systems Strong grasp of CI/CD pipelines, container management, and security practices Demonstrated ability to work with multi-cloud architectures and optimize for performance, cost, and reliability Strong communication and collaboration skills What You'll Get The opportunity to work at the intersection of blockchain infrastructure and modern cloud engineering A collaborative environment where your ideas impact architecture from day one Exposure to leading decentralized technologies and smart contract systems Flexible work setup and a focus on continuous learning and experimentation
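The responsibilities above include monitoring node uptime and sync health. As a hypothetical sketch (the states and lag threshold are invented, not Quranium's actual tooling), a sync check from block heights is just a small pure function that an alerting pipeline could call:

```python
# Hypothetical monitoring helper: classify a blockchain node's sync state
# by comparing its local block height to the network head height.
# The state names and the max_lag threshold are invented for illustration.
def sync_status(local_height, network_height, max_lag=5):
    lag = network_height - local_height
    if lag <= 0:
        return "in_sync"        # at or ahead of the observed network head
    if lag <= max_lag:
        return "catching_up"    # behind, but within tolerance
    return "out_of_sync"        # too far behind; page someone

print(sync_status(100, 100))  # in_sync
print(sync_status(97, 100))   # catching_up
print(sync_status(10, 100))   # out_of_sync
```

In practice the two heights would come from the node's own RPC (e.g. `eth_blockNumber`) and an external reference provider, and the result would feed Prometheus/Grafana alerts.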
Posted 23 hours ago
2.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Meet Our Team Pega is changing the way the world builds software. We at Pega provide revolutionary solutions for some of the world’s largest organizations and most recognizable brands. Imagine going from a problem to a fully functioning solution in production that solves real business problems within a matter of a few hours or days. That’s a challenging set of expectations to meet, where thousands of businesses across the planet depend on Pega to transform their business and customer experiences, and that’s where you come in. You will join our team as a Pega Infinity Platform Specialist, fully trained to be a subject-matter expert in the design and architecture of our Pega Infinity™ Platform. You will work with a group of enthusiastic, high-spirited, and hard-working individuals who encourage each other to bring out their best to build a world-class product in the low-code space. The team follows a "work hard, play hard" mindset. We are passionate about our work, creating a difference, and we feel extreme ownership to achieve meaningful outcomes that matter. Picture Yourself At Pega Pega is a low-code platform for AI-powered decisioning and workflow automation. In this role, you will spend your time building core features in the Pega Infinity Platform, hardening and patching critical existing defects, and enhancing the platform where required to keep it current with industry demands. You will be a respected engineer in your area, with complete mastery of your code base. You are someone who follows best practices to build clean, maintainable code. As a Senior Software Engineer, you will have the opportunity to be trained as a Subject Matter Expert (SME) in background-processing features like queue processors, agents, and job schedulers. You should have strong expertise in multi-threading, concurrent programming, and object-oriented programming and design, along with Java 8 features like lambda expressions and the collections framework.
What You'll Do At Pega You will start by learning about the Pega platform and use it to rapidly build a sample application. Be actively involved in feature development, hands-on coding, and debugging for your team's feature areas. You should write high-quality, efficient, and maintainable code while ensuring timely delivery. Actively collaborate with the Product Owner and team members to work on backward-compatible and targeted solutions for keeping the platform current with industry demands. Understand the design and implementation of your product area and become an expert. Contribute to decisions around scalability, performance, and maintainability. Write automated tests for code changes made to fix issues. Own the quality and maintenance of your product area and collaborate with the quality services organization (which consists of experts in quality assurance) to learn and apply the latest best practices to harden our software. Work closely with various stakeholders such as the Product Owner, architects, and quality engineers. Create and curate knowledge-base articles and improve documentation. Use various tools, languages, and libraries like Git, Gradle, Docker, Jenkins, IntelliJ, Linux, Java, JUnit, JGiven, Groovy, Spock, and cloud technologies. Who You Are You are a skilled engineer who is hands-on, curious to learn, and passionate about working on feature development. You take quality seriously and ensure high reliability and quality in the codebase. You enjoy exploring the latest developments and best practices in the software industry and applying them at work. You love to dive deep into Java and the JVM to build a deeper and broader understanding of their semantics and workings. You are a good team player who puts the team first and can collaborate with other team members. You take constructive feedback with an open mind and work to continuously improve yourself.
You Have 2-4 years of software development experience, preferably in a product development company. Bachelor’s or Master’s degree in Computer Science, Engineering, or a similar field. Strong understanding of object-oriented programming and design, and continuous integration and delivery (CI/CD). Experience in multi-threading and concurrent programming. Agile/Scrum development methodology knowledge/experience. Excellent communication skills, both written and verbal. Experience using: Java, JUnit, IntelliJ/Eclipse, Jenkins, Linux. Good to have: experience with the ELK Stack, Kubernetes, and cloud deployment. A strong interest and desire to learn and develop using the Pega Platform. What You've Accomplished Developed functional, robust, resilient, and scalable software built using Java. Worked with cross-functional teams to deliver scalable, reliable, and high-performance software solutions. Working experience in an Agile/Scrum team environment. Pega Offers You Gartner Analyst-acclaimed technology leadership across our categories of products. Continuous learning and development opportunities. An innovative, inclusive, agile, flexible, and fun work environment. A competitive global benefits program inclusive of pay + bonus incentive and employee equity in the company. Job ID: 22209
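The role above centers on background-processing features (queue processors, job schedulers) and concurrent programming. The posting's stack is Java, but the underlying worker-pool pattern is language-neutral; a minimal sketch in Python (the doubling "work" is a stand-in, invented for the example):

```python
# Language-neutral queue-processor pattern: a pool of workers draining a
# shared queue, shut down with per-worker sentinel values. Python stand-in
# for the Java-based processors the posting describes.
import queue
import threading

def process_all(items, worker_count=4):
    q = queue.Queue()
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            item = q.get()
            if item is None:           # sentinel: shut this worker down
                q.task_done()
                return
            with lock:
                results.append(item * 2)   # stand-in for real work
            q.task_done()

    threads = [threading.Thread(target=worker) for _ in range(worker_count)]
    for t in threads:
        t.start()
    for item in items:
        q.put(item)
    for _ in threads:
        q.put(None)                    # one sentinel per worker
    q.join()                           # wait until every item is processed
    for t in threads:
        t.join()
    return sorted(results)

print(process_all([1, 2, 3, 4]))  # [2, 4, 6, 8]
```

The Java equivalent would use an `ExecutorService` over a `BlockingQueue`; the sentinel-based shutdown and the queue-drain semantics are the same idea.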
Posted 23 hours ago
0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Role Description Job Summary: We are seeking an experienced Senior DevOps Engineer to lead infrastructure automation efforts, implement CI/CD pipelines, and support cloud platform integrations. The ideal candidate will possess strong technical expertise across DevOps tools and cloud environments, and will play a key role in optimizing deployment workflows and supporting data engineering initiatives. Key Responsibilities Design, implement, and optimize CI/CD pipelines using industry-standard tools. Automate infrastructure and configuration management using Ansible, Chef, Puppet, and Terraform. Work with cloud platforms such as Microsoft Azure, AWS, and container services like AKS. Develop automation scripts and tools using Python, JavaScript, PowerShell, and Java. Implement and manage API management and Apache Airflow workflows. Collaborate with developers and SRE teams to streamline deployment strategies and improve operational efficiency. Support data engineering teams by applying DevOps best practices and automation. Core Skills CI/CD tools: GitHub, Jenkins, etc. Configuration management: Ansible, Chef, Puppet. Infrastructure as code: Terraform. Cloud platforms: Azure, AWS, AKS. Scripting/programming: Python, Java, JavaScript, PowerShell. Workflow management: Apache Airflow. API management. Preferred Skills Hands-on experience with Cribl or working knowledge of log routing/observability tools. Soft Skills & Expectations Ability to quickly learn and adopt new tools and technologies. Strong collaboration and communication skills. Proactive and flexible in cross-functional team environments. Skills: DevOps, Azure AKS, Kubernetes
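The stack above includes Apache Airflow for workflow management. Airflow's core abstraction is a DAG of tasks where each task runs only after its dependencies; that ordering idea can be sketched with the Python standard library alone (the task names here are invented, and this is not Airflow's actual API):

```python
# Sketch of the DAG-ordering idea behind Airflow, using only the standard
# library. Each key maps to the set of tasks it depends on.
from graphlib import TopologicalSorter

dag = {
    "load": {"extract"},           # load depends on extract
    "transform": {"extract"},      # transform also depends on extract
    "report": {"load", "transform"},
}
order = list(TopologicalSorter(dag).static_order())
print(order)  # extract first, report last; load/transform in between
```

In real Airflow the same shape is expressed with operators and `>>` dependencies; the scheduler then executes tasks in a topological order like the one computed here.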
Posted 23 hours ago
0 years
0 Lacs
Kochi, Kerala, India
On-site
Role Description Job Title: .NET Developer (C#, ASP.NET MVC, WPF, Azure) Location: Kochi, TVM, Chennai & Bangalore Key Responsibilities Design, develop, and maintain applications using C# and .NET (Framework/Core). Build and enhance WPF-based desktop applications with a focus on rich UI/UX using XAML. Develop and deploy cloud-native applications on Microsoft Azure. Collaborate with cross-functional teams to define, design, and ship new features. Write clean, scalable, maintainable code adhering to software development best practices. Participate in code reviews, unit testing, and debugging activities. Ensure high performance, quality, and responsiveness of applications. Maintain technical documentation and support continuous improvement initiatives. Must-Have Skills Strong proficiency in C# and .NET Framework / .NET Core. Hands-on experience with the ASP.NET MVC architecture. Proficiency in WPF and XAML for desktop application development. Experience with Microsoft Azure services (e.g., App Services, Azure Functions, Blob Storage). Solid understanding of object-oriented programming (OOP) and design patterns. Experience with RESTful APIs, Entity Framework, and SQL Server. Familiarity with version control systems, especially Git. Excellent problem-solving and communication skills. Good-to-Have Skills Experience with CI/CD pipelines and DevOps practices on Azure. Familiarity with Agile/Scrum development methodologies. Knowledge of unit testing frameworks (e.g., MSTest, NUnit). Exposure to containerization tools (e.g., Docker) and Azure Kubernetes Service (AKS). Basic understanding of frontend technologies like HTML/CSS/JavaScript for full-stack collaboration. Skills: Azure, C#/.NET, MVC, WPF
Posted 23 hours ago
0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Your IT Future, Delivered. Senior Software Engineer (Java 8/11 & Google Cloud) With a global team of 5,800 IT professionals, DHL IT Services connects people and keeps the global economy running by continuously innovating and creating sustainable digital solutions. We work beyond global borders and push boundaries across all dimensions of logistics. You can leave your mark shaping the technology backbone of the world's biggest logistics company. Our offices in Cyberjaya, Prague, and Chennai have earned #GreatPlaceToWork certification, reflecting our commitment to exceptional employee experiences. Digitalization. Simply delivered. Join us at the DHL Group Digital Platforms department, where innovation thrives and technology evolves. With a presence across three countries and two continents, we're united by the drive to create a one-stop shop for all DHL Group APIs. Our startup spirit within a stable, large-scale company framework propels us to integrate Agile and DevSecOps into our secure, efficient, and flexible delivery and support operations. About the Project - Maia: Embark on a journey with Maia, an integral part of DHL's shipping-label operations. Maia is a logistics-oriented, API-based solution aiming to revolutionize and simplify our internal core business operations. We're harnessing modern cloud technologies like Google PaaS, Google Apigee, and Google Kubernetes Engine to deliver excellence, all while operating under the Scrum methodology. #DHL #DHLITServices #GreatPlace #digitalplatforms #api Grow together. As a Senior Software Engineer, you should have: Strong knowledge of Java technologies (Java 8/11, Spring Framework, JUnit, Maven, REST & SOAP APIs, Git Flow). Solid expertise in Kubernetes and Docker. Experience with cloud-based solutions (Google Cloud preferred). Hands-on experience with microservice-based architectures. Proficiency in CI/CD solutions (e.g., Jenkins). Experience with the webMethods integration platform.
Passion for learning new technologies and building new projects. Willingness to go the extra mile to ensure project success. Nice to have: Experience with scalable cloud systems and dependable distributed systems. Experience working in a Scrum development team environment. Knowledge of reactive programming frameworks (e.g., React, Netty). Experience with automated test frameworks (e.g., RestAssured, Robot Framework). Basic knowledge of NoSQL databases (e.g., Cassandra). Basic knowledge of messaging infrastructure (e.g., Kafka, IBM MQ). Proficiency in Node.js. An array of benefits for you: Hybrid work arrangements to balance in-office collaboration and home flexibility. Annual leave: 42 days off apart from public/national holidays. Medical insurance: self + spouse + 2 children, with an option to opt for voluntary parental insurance (parents/parents-in-law) at a nominal premium covering pre-existing diseases. In-house training programs: professional and technical training certifications.
Posted 23 hours ago
0.0 - 2.0 years
0 Lacs
India
Remote
Role: Junior DevOps Engineer. Duration: 12 months. Location: Remote (Bengaluru). Timings: Full time (as per company timings). Notice Period: within 15 days, or immediate joiner. Experience: 0-2 years. About The Role Must-have skills: Good with Linux / Windows. Good understanding of Azure / AWS services. Good knowledge of and experience in Ansible. Good knowledge of Docker and Kubernetes. Roles And Responsibilities You will be responsible for maintaining DevOps practices in the data centre and the cloud. As a DevOps Engineer, you will need to understand DevOps practices, IT, microservices, Kubernetes, Docker, Jenkins, and monitoring strategies. Scripting experience (e.g., Dockerfiles, CI/CD pipelines, shell, and Python scripting) is an added advantage. Assist in tasks for CI/CD tools like Jenkins, GitLab, and many others. Support automation and infrastructure-as-code (IaC) tools. Work with the Linux operating system and cloud environments. Work on ways to automate and improve development and release processes. Ensure that systems are secure against cybersecurity threats. Other Personal Characteristics Dynamic, engaging, self-reliant developer. Ability to deal with ambiguity. A collaborative and analytical approach. Self-confident and humble. Open to continuous learning. An intelligent, rigorous thinker who can operate successfully amongst bright people. Equally comfortable and capable interacting with technologists and with business executives.
Posted 23 hours ago
4.0 years
16 - 20 Lacs
Pune/Pimpri-Chinchwad Area
Remote
Experience : 4.00 + years Salary : INR 1600000-2000000 / year (based on experience) Expected Notice Period : 7 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: Northladder) (*Note: This is a requirement for one of Uplers' client - A Series B Funded Innovative Device Trade-In Company - Netherlands) What do you need for this opportunity? Must have skills required: Cloud Infrastructure, Unit Testing, Microservices, Node.js, AWS, MongoDB, TypeScript A Series B Funded Innovative Device Trade-In Company - Netherlands is Looking for: About NorthLadder NorthLadder, headquartered in Dubai, is the region’s fastest-growing digital platform enabling frictionless pre-owned electronics trade. Most of us know what it feels like to sell a pre-owned device - a smartphone, a laptop, or a tablet. The pre-owned market is in-transparent, and finding a fair price for your asset is daunting. Even if you get a fair price, meeting the buyer, negotiating the price, shipping the asset, and waiting for payment could be exhausting. And then there is the worry of figuring out what happens to the data on your device. This is why NorthLadder came to be. We are the region’s only auction-driven selling platform for pre-owned electronic devices. With our thoughtfully created service, people can sell their devices to a network of global buyers and get cash instantly, safely, dignifiedly, and hassle-free. About the role As an ideal candidate, you must be a problem solver with solid experience and knowledge in Node.js & TypeScript. You’ll be the brain behind crafting, developing, testing, going live and maintaining the system. You must be passionate in understanding the business context for features built to drive better customer experience and adoption. Our tech stack Node.js, TypeScript, MongoDB, AWS, AWS SQS, Microservices, and Kubernetes Requirements 1. 
At least 4 years of experience with Node.js & TypeScript
2. In-depth knowledge of microservices architecture and unit testing
3. A deep understanding of the Node.js Event Loop
4. Expertise in document-oriented databases, especially MongoDB
5. Experience in designing, building, and scaling back-end systems on cloud infrastructure
6. Strong commitment to improving product experience and user satisfaction
Responsibilities
1. Consistently write high-quality, efficient code
2. Develop and maintain a comprehensive suite of automated tests, including unit, integration, E2E, and functional tests
3. Perform code reviews and ensure adherence to design patterns and the organization's coding standards
4. Mentor junior developers, contributing to their technical growth
5. Collaborate with product and design teams to build user-focused solutions
6. Identify, prioritize, and execute tasks in the software development life cycle
7. Develop tools and applications by producing clean, efficient code
8. Troubleshoot, debug, and upgrade existing software
9. Recommend and execute improvements
10. Collaborate with multidisciplinary teams to understand requirements and develop new solutions
LOCATION: WORK FROM HOME BUDGET: 16-20 LPA How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload your updated Resume. Step 3: Increase your chances of getting shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.) 
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 23 hours ago
6.0 - 7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Role Description Role: Senior Developer - Node.js Experience: 6-7 Years Location: India Technical Skills (Primary / Must Have): Strong work experience in API development using Node.js. Strong knowledge of JSON request/response structures and RESTful APIs. Experience understanding Java code and upgrading it to Node.js. Proficient in JavaScript for scripting and automation tasks. Hands-on experience with Apigee API Gateway for API management and security. Solid understanding of YAML for defining infrastructure as code. Strong problem-solving and communication skills. Experience with CI/CD tools and practices such as Jenkins, GitLab CI, or CircleCI. Knowledge of containerization and orchestration tools such as Docker and Kubernetes. Proven experience as a DevOps Engineer with a focus on JSON, Kafka, JavaScript, Apigee, and YAML. Experience in creating, configuring, and managing Kafka topics for event-driven architectures. Familiarity with cloud platforms such as AWS, Azure, or Google Cloud. Good to Have: Insurance domain knowledge. Mandatory Skills: Express, Node.js
Posted 23 hours ago
11.0 - 15.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Northern Trust, a Fortune 500 company, is a globally recognized, award-winning financial institution that has been in continuous operation since 1889. Northern Trust is proud to provide innovative financial services and guidance to the world’s most successful individuals, families, and institutions by remaining true to our enduring principles of service, expertise, and integrity. With more than 130 years of financial experience and over 22,000 partners, we serve the world’s most sophisticated clients using leading technology and exceptional service. Summary: Northern Trust is looking for a Senior Lead Java Software Engineer to join its Technology Development Centre in Pune, India on the Investment Management team. This individual, as part of an agile development team, will be responsible for analysis and design of the upcoming Alternatives business platform to meet business and technical requirements. Responsibilities: Analyse and build the data model from the requirements of the Private Equity and Hedge Fund businesses for Northern Trust Asset Management's upcoming Alts data warehouse. Analyse the source data while working with upstream teams and the 50 South development team to produce the schema. Build pipelines to extract required data from upstream systems and model it for reporting to clients and downstream systems. Break down requirements into domain, model, and entity data for setup in the data warehouse. Define the Raw, Transform, and Curate layers for data consumption. Liaise with various vendor products and internal applications to refine requirements and help the technical team design solutions. Act as the first point of contact for clarifying any business gaps for the local tech team. Participate in data modeling discussions and ensure the data warehouse model meets business needs. Be a team player able to own the design and code for a given requirement. 
Communicate status (written and verbal) to the project team and management. Continuously look for ways to improve the application’s stability, scalability, and user experience. Experience: Bachelor's or equivalent degree in finance with a technology background. Technical engineer with 11-15 years of experience who can develop and maintain high-performance, reliable, and scalable Java microservice applications. Strong ability to design and implement cloud-native applications on Microsoft Azure, utilizing services like Azure App Services, Azure Functions, Azure Kubernetes Service (AKS), ADF, and Azure networking concepts. Write clean, reusable, and well-documented code. Collaborate with cross-functional teams, including UI/UX designers, QA engineers, and product managers. Ensure applications adhere to high performance, scalability, and security standards. Leverage Azure DevOps for CI/CD pipelines and automation. Monitor, troubleshoot, and optimize performance for cloud-hosted applications. Integrate data storage solutions using Azure SQL, Snowflake, or other database technologies. Stay updated with emerging technologies and cloud trends to continuously enhance systems and solutions. Required Skills: Strong expertise in Java (Java 8 and Java 17 or higher). Proficiency in frameworks like Spring Boot and microservice architecture. Experience in cloud-native development and deployment on Microsoft Azure. Hands-on experience with Azure services such as Azure App Services, Functions, Kubernetes (AKS), Azure DevOps, Blob Storage, and Service Bus. Knowledge of RESTful APIs, SOAP, and microservices architecture. Solid understanding of database technologies (e.g., Azure SQL, MySQL, Cosmos DB, PostgreSQL). Experience with version control systems like Git. Familiarity with containerization tools such as Docker and orchestration tools like Kubernetes. Strong understanding of design patterns, algorithms, and data structures. Excellent problem-solving, debugging, and analytical skills. 
Design, develop and use data structures and data marts to support reporting. Good analytical and problem-solving skills. Both attention to detail & ability to rise above details to see broader implications & recommend strategic solutions. Self-starter; Positive & adaptable in a continually changing environment. Ability to work independently and with a team. Proven interpersonal and communication skills with technical & business partners. Strong understanding of building CI/CD pipelines for change management. Preferred/ Recommended Skills: Familiarity with Change management process. Financial domain knowledge – Investment Management, portfolio construction and risk management. Worked on project streamlining the testing process by introducing automation, leveraging tools and setting goals to reduce time and effort. Experience with Azure Data Factory (ADF) for building and orchestrating data pipelines. Knowledge of messaging systems like Kafka. Certification in Microsoft Azure (e.g., Azure Developer Associate or Azure Solutions Architect). Familiarity with front-end technologies like JavaScript, Angular, or React
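The Raw, Transform, and Curate layering mentioned in the responsibilities above can be sketched as a small pipeline. This is a minimal, illustrative Python sketch with invented record shapes, not the actual warehouse design; a production pipeline would run on the Azure services listed in the posting.

```python
# Illustrative Raw -> Transform -> Curate flow for a hypothetical
# Alts data warehouse. All field names and values are invented.

def ingest_raw(rows):
    """Raw layer: land source records as-is, tagging provenance."""
    return [{**r, "_source": "upstream"} for r in rows]

def transform(raw):
    """Transform layer: enforce types and drop malformed rows."""
    out = []
    for r in raw:
        try:
            out.append({"fund": r["fund"].strip().upper(),
                        "nav": float(r["nav"])})
        except (KeyError, ValueError):
            continue  # a real pipeline would quarantine these rows
    return out

def curate(clean):
    """Curate layer: aggregate into a reporting-ready shape."""
    totals = {}
    for r in clean:
        totals[r["fund"]] = totals.get(r["fund"], 0.0) + r["nav"]
    return totals

rows = [{"fund": " pe1 ", "nav": "100.5"},
        {"fund": "pe1", "nav": "49.5"},
        {"fund": "hf1", "nav": "bad"}]
report = curate(transform(ingest_raw(rows)))
```

The point of the layering is that each stage has one job: raw preserves the source, transform enforces the schema, and curate shapes data for consumers.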
Posted 23 hours ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
Remote
Req ID: 336749 NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking an Environment Manager to join our team in Noida, Uttar Pradesh (IN-UP), India (IN). Skills Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to: Analyse and identify the linkages and interactions between the component parts of an entire system. Partner with team leadership to ensure collective ownership of quality, timelines, and deliverables. Develop skills outside your comfort zone, and encourage others to do the same. Use the review of work as an opportunity to deepen the expertise of team members. Address conflicts or issues, engaging in difficult conversations with clients, team members and other stakeholders, escalating where appropriate. Uphold and reinforce professional and technical standards, the Firm's code of conduct, and independence requirements. Job Summary A career in our consulting team will provide you with an opportunity to collaborate with a wide array of teams to help our clients implement and operate new capabilities, achieve operational efficiencies, and harness the power of technology. We empower companies to transform their approach to analytics and insights while building your skills in exciting new directions. Have a voice at our table to help design, build and operate the next generation of software and services that manage interactions across all aspects of the value chain. Responsibilities As an Environment Manager in the consulting services platform, you'll work as part of a team of problem solvers, helping to solve complex business issues from strategy through execution and beyond. 
Professional skills and responsibilities for this level include but are not limited to: Design, implement, and maintain test environments to support software testing needs. Ensure test environments mirror production systems as closely as possible. Oversee environment provisioning, configuration, and decommissioning processes. Maintain and track test environment inventory, dependencies, and configurations. Ensure the availability and stability of environments to avoid downtime for testing teams. Work closely with QA, development, DevOps, and infrastructure teams to manage environment requirements. Facilitate effective communication and scheduling for test environment usage. Act as the single point of contact for test environment issues and escalations. Monitor environment health, performance, and utilization using appropriate tools. Identify, troubleshoot, and resolve environment-related defects, deployment issues, conflicts, and bottlenecks. Implement automated monitoring and alerting for test environment health checks. Provide root cause analysis (RCA) for environment failures and incidents. Utilize and manage tools for test environment provisioning, automation, and monitoring (e.g., Terraform, Ansible, Jenkins, Docker, Kubernetes). Implement cloud-based and on-premise test environments using platforms like AWS, Azure, or GCP. Integrate with CI/CD pipelines to enable smooth deployments and test execution. Leverage monitoring and observability tools to track performance, logs, and availability. Serve as the single point of contact for test environment scheduling and maintenance. Coordinate with IT support teams for troubleshooting infrastructure or deployment issues. Implement test environment automation to enhance efficiency and reduce manual efforts. Define and enforce best practices for test environment management. Maintain and improve CI/CD pipelines for environment provisioning and deployments. 
Ensure proper version control and rollback strategies for test environments. Ensure test environments comply with security policies, regulations, and data protection standards. Manage access control and permissions for test environments. Support audits and compliance reviews related to test environment management. About NTT DATA NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com Whenever possible, we hire locally to NTT DATA offices or client sites. This ensures we can provide timely and effective support tailored to each client’s needs. While many positions offer remote or hybrid work options, these arrangements are subject to change based on client requirements. For employees near an NTT DATA office or client site, in-office attendance may be required for meetings or events, depending on business needs. At NTT DATA, we are committed to staying flexible and meeting the evolving needs of both our clients and employees. NTT DATA recruiters will never ask for payment or banking information and will only use @nttdata.com and @talent.nttdataservices.com email addresses. 
If you are requested to provide payment or disclose banking information, please submit a contact us form, https://us.nttdata.com/en/contact-us . NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us . This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here . If you'd like more information on your EEO rights under the law, please click here . For Pay Transparency information, please click here .
Posted 1 day ago
6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
🚨 Urgent Hiring: Java Spring Boot Developer – 6+ Years Experience | Gurugram (On-site) 📍 Location: Gurgaon, India Type: Full-Time (preferred) Job Summary: We are seeking a highly skilled and motivated Java Spring Boot Developer to join our engineering team. This role focuses on developing and deploying scalable, event-driven applications on OpenShift, with data ingestion from Apache Kafka and transformation logic written in Apache Camel. The ideal candidate should possess a strong understanding of enterprise integration patterns, stream processing, and protocols, and have experience with observability tools and concepts in AI-enhanced applications. Key Responsibilities: Design, develop, and deploy Java Spring Boot (must) applications on OpenShift (ready to learn Red Hat OpenShift, or existing Kubernetes experience). Build robust data pipelines with Apache Kafka (must) for high-throughput ingestion and real-time processing. Implement transformation and routing logic using Apache Camel (basic knowledge, ready to learn) and Enterprise Integration Patterns (EIPs). Develop components that interface with various protocols including HTTP, JMS, and database systems (SQL/NoSQL). Utilize Apache Flink or similar tools for complex event and stream processing where necessary. Integrate observability solutions (e.g., Prometheus, Grafana, ELK, OpenTelemetry) to ensure monitoring, logging, and alerting. Collaborate with AI/ML teams to integrate or enable AI-driven capabilities within applications. Write unit and integration tests, participate in code reviews, and support CI/CD practices. 
Troubleshoot and optimize application performance and data flows in production environments. Required Skills & Qualifications: 5+ years of hands-on experience in Java development with strong proficiency in Spring Boot. Solid experience with Apache Kafka (consumer/producer patterns, schema registry; Kafka Streams is a plus). Experience with stream processing technologies such as Apache Flink, Kafka Streams, or Spark Streaming. Proficient in Apache Camel and understanding of EIPs (routing, transformation, aggregation, etc.). Strong grasp of various protocols (HTTP, JMS, TCP) and messaging paradigms. In-depth understanding of database concepts, both relational and NoSQL. Knowledge of observability tools and techniques: logging, metrics, tracing. Exposure to AI concepts (basic understanding of ML model integration, AI-driven decisions, etc.). ⚠️ Important Notes Only candidates with a notice period of 20 days or less will be considered. A PF account is a must for joining full time. If you have already applied for this job with us, please do not submit a duplicate application. Budget is limited; the maximum CTC is based on years of experience and expertise. 📬 How to Apply Email your resume to career@strive4x.net with the subject line: Java Spring Boot Developer - Gurugram. Please include the following details: Full Name, Mobile Number, Current Location, Total Experience (in years), Relevant Experience (in years), Current Company, Current CTC, Expected CTC, Notice Period, Are you open to relocating to Gurgaon (Yes/No)?, Do you have a PF account (Yes/No)?, Do you prefer Full Time, Contract, or both? 👉 Know someone who fits the role? Tag or share this with them. #JavaJobs #SpringBoot #GurgaonJobs #Kafka #ApacheCamel #OpenShift #HiringNow #SoftwareJobs #SeniorDeveloper #Microservices #Strive4X
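The Enterprise Integration Patterns named above include the content-based router, which picks a destination channel from message content. Below is a toy, framework-free sketch of that pattern in Python; in the posting's stack this logic would live in an Apache Camel route, and the message fields and channel names here are invented.

```python
# Content-based router in the spirit of Camel's EIPs.
# Message shape and channel names are invented for illustration.

def route(message):
    """Pick a destination channel from message content."""
    if message.get("type") == "order":
        return "orders-queue"
    if message.get("priority", 0) >= 5:
        return "priority-queue"
    return "default-queue"

def process(messages):
    """Group a batch of messages by destination, like a router step."""
    channels = {}
    for m in messages:
        channels.setdefault(route(m), []).append(m)
    return channels

batch = [{"type": "order", "id": 1},
         {"type": "event", "priority": 9},
         {"type": "event", "priority": 1}]
routed = process(batch)
```

In Camel the same decision would be expressed declaratively with `choice()/when()` predicates rather than hand-written conditionals.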
Posted 1 day ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Responsibilities:
• Design, build, and maintain automated workflows using tools like n8n, Make, Zapier, and similar platforms.
• Integrate third-party applications using APIs to optimize and automate internal processes across departments.
• Develop custom scripts and logic using Python or similar languages to support more complex workflows.
• Leverage LLMs (like OpenAI/GPT) to power business use cases; apply prompt engineering best practices for optimal model performance.
• Collaborate cross-functionally to identify automation opportunities and translate them into efficient solutions.
• Implement scalable, secure, and reliable CI/CD pipelines for deploying automation scripts and services.
• Support containerized deployments (Docker, Kubernetes) where needed.
• Stay up to date with new AI tools, automation platforms, and APIs that can improve efficiency and innovation.
Requirements:
Must-Have:
• Strong experience with workflow automation platforms like n8n, Make, or Zapier.
• Solid Python scripting experience (or a similar programming language).
• Good understanding of APIs: authentication, data formats, integration patterns.
• Familiarity with LLM models and how to apply prompt engineering for real-world applications.
• Demonstrated ability to learn new tools quickly and apply creative thinking to problem-solving.
• Excellent logical reasoning and aptitude: able to dissect complex systems and build simplified, automated versions.
Nice-to-Have:
• Experience with containerization tools like Docker or Kubernetes.
• Exposure to CI/CD pipelines and version control (Git).
• Background in DevOps or systems integration.
• Familiarity with data pipelines, webhooks, or serverless frameworks.
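Workflow platforms like n8n and Make wire nodes together so each step receives the previous step's output. A minimal Python sketch of that chaining idea follows; the step functions and field names are invented, and a real workflow would call external APIs rather than return canned data.

```python
# Sketch of chaining automation steps the way workflow tools wire
# nodes: each step takes the previous step's output as input.
# All step names and payload fields are invented.

def fetch_record(_):
    """Stand-in for a node that pulls a record from some API."""
    return {"email": "  USER@Example.COM ", "score": "42"}

def clean(record):
    """Normalize fields before downstream use."""
    return {"email": record["email"].strip().lower(),
            "score": int(record["score"])}

def decide(record):
    """Apply a simple business rule, like an IF node."""
    record["qualified"] = record["score"] >= 40
    return record

def run_workflow(steps, payload=None):
    """Execute the steps in order, threading the payload through."""
    for step in steps:
        payload = step(payload)
    return payload

result = run_workflow([fetch_record, clean, decide])
```

The value of the pattern is that each node stays small and testable, and reordering or inserting steps does not require rewriting the others.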
Posted 1 day ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Introduction IBM Security Verify is placed in the Gartner Leadership Quadrant as a cloud-based Identity and Access Management (IAM) solution that helps organizations manage user identities and access to applications and resources. It provides features like multi-factor authentication, single sign-on, risk-based authentication, and adaptive access, as well as user lifecycle journeys along with associated governance, aiming to protect customer, workforce, and privileged identities. The solution also offers identity analytics to provide insights into user behavior and potential risks. Your Role And Responsibilities Contribute to backend feature development in a microservices-based application using Java or GoLang. Develop and integrate RESTful APIs, connecting backend systems to frontend or external services. Collaborate with senior engineers to understand technical requirements and implement maintainable solutions. Participate in code reviews, write unit/integration tests, and support debugging efforts. Gain hands-on experience with CI/CD pipelines and containerized deployments (Docker, basic Kubernetes exposure). Support backend operations including basic monitoring, logging, and troubleshooting under guidance. Engage in Agile development practices, including daily stand-ups, sprint planning, and retrospectives. Demonstrate a growth mindset by learning cloud technologies, tools, and coding best practices from senior team members. Preferred Education Master's Degree Required Technical And Professional Expertise 3+ years of backend development experience using Java, J2EE, and/or GoLang. Hands-on experience building or supporting RESTful APIs and integrating backend services. Foundational understanding of Postgres or other relational databases, including basic query writing and data access patterns. Exposure to microservices principles and containerization using Docker. Basic experience with CI/CD pipelines using tools like Git, GitHub Actions, or Jenkins. 
Familiarity with backend monitoring/logging tools such as ELK Stack or Grafana is a plus. Exposure to cloud platforms like AWS or Azure, and ability to deploy/test services in cloud environments under guidance. Knowledge of writing unit tests and basic use of testing tools like JUnit or RestAssured. Exposure to Agile software development processes like Scrum or Kanban. Good communication skills, strong problem solving skills and willingness to collaborate with team members and learn from senior developers. Preferred Technical And Professional Experience Exposure to microservices architecture and understanding of modular backend service design. Basic understanding of secure coding practices and awareness of common vulnerabilities (e.g., OWASP Top 10). Familiarity with API security concepts like OAuth2, JWT, or simple authentication mechanisms. Awareness of DevSecOps principles, including interest in integrating security into CI/CD workflows. Introductory knowledge of cryptographic concepts (e.g., TLS, basic encryption) and how they're applied in backend systems. Willingness to learn and work with Java security libraries and compliance-aware coding practices. Exposure to scripting with Shell, Python, or Node.js for backend automation or tooling is a plus. Enthusiasm for working on scalable systems, learning cloud-native patterns, and improving backend reliability.
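The API-security concepts listed above (OAuth2, JWT) rest on signed tokens: a payload plus an HMAC over it that the server can verify. Here is a hedged, stdlib-only Python sketch of that JWT-style signing flow; real services should use a vetted library, and the secret below is a placeholder.

```python
# JWT-style HMAC token signing and verification, standard library
# only. Illustrative: use a vetted library (e.g. PyJWT) in practice.
import base64
import hashlib
import hmac
import json

SECRET = b"demo-secret"  # placeholder; never hardcode secrets

def b64(data: bytes) -> str:
    """URL-safe base64 without padding, as JWT uses."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign(claims: dict) -> str:
    """Build header.payload.signature over the claims."""
    header = b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify(token: str) -> bool:
    """Recompute the HMAC and compare in constant time."""
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = b64(hmac.new(SECRET, signing_input,
                            hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)

token = sign({"sub": "user-1"})
```

Note the use of `hmac.compare_digest` rather than `==`, which avoids leaking signature bytes through timing differences.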
Posted 1 day ago
0 years
0 Lacs
Gurgaon, Haryana, India
On-site
We are seeking a dynamic professional with strong experience in Databricks and Machine Learning to design and implement scalable data pipelines and ML solutions. The ideal candidate will work closely with data scientists, analysts, and business teams to deliver high-performance data products and predictive models. Key Responsibilities Design, develop, and optimize data pipelines using Databricks, PySpark, and Delta Lake Build and deploy Machine Learning models at scale Perform data wrangling, feature engineering, and model tuning Collaborate with cross-functional teams for ML model integration and monitoring Implement MLflow for model versioning and tracking Ensure best practices in MLOps, code management, and automation Must-Have Skills Hands-on experience with Databricks, Spark, and SQL Strong knowledge of ML algorithms, Python (Pandas, Scikit-learn), and model deployment Familiarity with cloud platforms (Azure / AWS / GCP) Experience with CI/CD pipelines and ML lifecycle management tools Good To Have Exposure to data governance, monitoring tools, and performance optimization Knowledge of Docker/Kubernetes and REST API integration
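Feature engineering, one of the responsibilities above, often starts with scaling numeric columns into a common range before model training. A toy, pure-Python sketch of a min-max scaling step follows; on Databricks this would run over Spark DataFrames, and the row schema here is invented.

```python
# Toy feature-engineering step of the kind an ML pipeline would run
# at scale with PySpark; pure Python here, invented schema.

def min_max_scale(values):
    """Scale a numeric feature into [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]  # constant feature carries no signal
    return [(v - lo) / (hi - lo) for v in values]

def build_features(rows):
    """Derive model-ready features from raw rows."""
    amounts = min_max_scale([r["amount"] for r in rows])
    return [{"amount_scaled": a,
             "is_weekend": r["day"] in ("sat", "sun")}
            for a, r in zip(amounts, rows)]

features = build_features([{"amount": 10, "day": "mon"},
                           {"amount": 30, "day": "sat"},
                           {"amount": 20, "day": "sun"}])
```

In a production setting the scaling parameters (min and max) must be fitted on training data only and reused at inference time, which is exactly the kind of artifact MLflow tracks.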
Posted 1 day ago
3.0 - 5.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Job Description: We are looking for 3-5 years of experience in cloud platforms like AWS. Note: This is for the Ahmedabad location, work-from-office mode with rotational shifts. Job Summary: The Cloud Subject Matter Expert (SME) will be responsible for providing expertise and guidance on cloud computing technologies, ensuring the effective implementation and management of cloud infrastructure, and driving innovation through cloud solutions. The Cloud SME will work closely with IT teams, stakeholders, and external vendors to design, deploy, and maintain scalable, secure, and efficient cloud environments. Key Responsibilities: • Cloud Architecture & Design: o Develop and design hybrid cloud reference architecture and define target-state AWS and Azure architecture. • Monitoring: o Lead cloud projects, including planning, execution, and monitoring support, with knowledge of DevOps and monitoring tools like Site24x7, Datadog, FreshService, etc. • Cloud Management & Optimization: o Monitor and manage cloud infrastructure to ensure optimal performance and availability. o Implement best practices for cloud management, including cost management, security policies, and resource optimization. • Technical Support & Troubleshooting: o Provide advanced technical support and troubleshooting for cloud-related issues. o Act as a point of escalation for complex cloud problems. • Security & Compliance: o Ensure cloud solutions comply with security standards, regulatory requirements, and best practices. o Implement and maintain robust security measures to protect cloud data and applications. • Technical Skills: o In-depth knowledge of cloud architecture, services, and deployment models. o Experience with cloud platforms like AWS, Azure, and Google Cloud Platform (GCP). o Proficiency with cloud and DevOps monitoring tools like New Relic, Site24x7, Datadog, etc. o Proficiency with infrastructure as code (IaC) tools such as Terraform, CloudFormation, or ARM templates. 
o Strong understanding of networking, virtualization, and storage in cloud environments. o Familiarity with DevOps and Agile methodologies, processes, and tools such as CI/CD pipelines, Jenkins, Docker, and Kubernetes. Certifications: o Relevant cloud certifications such as AWS Certified Solutions Architect – Professional, Microsoft Certified: Azure Solutions Architect Expert, Google Cloud Certified, or similar. o Experience with DevOps Tools & Technologies (Git, Docker, Kubernetes, CI/CD Tools, Terraform) o Experience with ITSM and Monitoring Tools (FreshService, ServiceNow, Site24x7, Datadog) o Experience with scripting languages (e.g., Python, Bash) is a plus. o Terraform Certified (Good to have).
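The IaC tools listed above (Terraform, CloudFormation, ARM) share one core idea: declare the desired state as data, then compute a plan of changes against what currently exists. Here is a toy Python sketch of that plan step; the resource fields are invented, and real tools track far more state (attributes, dependencies, drift).

```python
# Toy illustration of the `terraform plan` idea: diff desired state
# against current state. Resource fields are invented.

DESIRED = [
    {"name": "web-vm", "size": "Standard_B2s", "region": "eastus"},
    {"name": "db-vm", "size": "Standard_D4s", "region": "eastus"},
]

def plan(current, desired):
    """Compute which resources to create, delete, or keep."""
    cur = {r["name"]: r for r in current}
    des = {r["name"]: r for r in desired}
    return {"create": sorted(des.keys() - cur.keys()),
            "delete": sorted(cur.keys() - des.keys()),
            "keep": sorted(cur.keys() & des.keys())}

current_state = [{"name": "web-vm", "size": "Standard_B2s",
                  "region": "eastus"}]
changes = plan(current_state, DESIRED)
```

Because the plan is computed before anything is applied, it can be reviewed in a pull request, which is what makes IaC auditable in a way ad-hoc console changes are not.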
Posted 1 day ago
0.0 - 7.0 years
0 - 1 Lacs
Noida, Uttar Pradesh
On-site
Sr. Azure DevOps Engineer (immediate joiners only) Experience: 7+ Years Salary: 85k-120k in hand Job Type: Contractual / Full Time Joining: Immediate (Note: Do not apply if you are not an immediate joiner) Location: Noida – Hybrid (once or twice a week) Candidates with strong communication skills preferred. Job Summary: We are seeking an experienced Sr. Azure DevOps Engineer with strong hands-on expertise in Docker, Kubernetes, and Jenkins within an on-premises environment. The ideal candidate will have a solid background in CI/CD pipeline development, container orchestration, and automation tools, along with scripting and core Java knowledge. Key Responsibilities: Design, deploy, and manage containerized applications using Docker and Kubernetes on on-prem infrastructure. Develop and maintain CI/CD pipelines using Jenkins, Azure DevOps, Git, Artifactory, and Docker Registry. Create and manage Docker images, and ensure secure and scalable microservices deployment. Use configuration management tools like Ansible or Chef. Support and automate application builds and deployments. Collaborate with development teams to optimize and streamline DevOps practices. Implement robust solutions for monitoring, logging, and security. Work with infrastructure components such as Active Directory, LDAP, DNS, DHCP, and firewall configurations. Leverage scripting (e.g., Bash, Python) for automation and task efficiency. Apply knowledge of core Java, J2EE, and Spring for application support and integration. Preferred Qualifications: Experience in cloud-native and microservices architecture. Familiarity with Azure DevOps, CI/CD practices, and cloud environments. Certifications like CKA (Certified Kubernetes Administrator) or CKAD (Certified Kubernetes Application Developer) are a plus. Job Types: Full-time, Contractual / Temporary. Contract length: 12 months. Pay: ₹80,000.00 - ₹120,000.00 per month. Experience: Azure DevOps: 7 years (Required). Location: Noida, Uttar Pradesh (Required). Work 
Location: In person
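The scripting requirement above (Bash, Python for automation) typically covers small glue tasks in a pipeline, such as computing the next image tag from the tags already in a registry. A hedged Python sketch follows; the `vMAJOR.MINOR.PATCH` tag format is an assumption, and a real pipeline would fetch tags from the registry rather than take a hardcoded list.

```python
# Small CI automation sketch: pick the next patch-level image tag
# from a list of existing semver tags. Tag format is an assumption.

def next_patch_tag(tags):
    """Return the next patch version after the highest existing tag."""
    def parse(tag):
        # "v1.10.3" -> (1, 10, 3); tuple comparison orders correctly
        return tuple(int(p) for p in tag.lstrip("v").split("."))
    latest = max(tags, key=parse)
    major, minor, patch = parse(latest)
    return f"v{major}.{minor}.{patch + 1}"

tag = next_patch_tag(["v1.2.0", "v1.10.3", "v1.9.9"])
```

Parsing into integer tuples matters: naive string comparison would rank "v1.9.9" above "v1.10.3".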
Posted 1 day ago
2.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Position Purpose The position of Consultant is within the TTEC Digital - Analytics team. The Analytics group is responsible for Data Science and Engineering projects that include designing and validating data models and building systems to collect, manage, and convert transactional raw data into usable data structures that generate insights for decision making. Our Data Engineers work with Data Scientists, Project Leads, and Managers on implementation, upgrade, and migration projects. Key Responsibilities Analyzing raw data. Developing and maintaining datasets. Improving data quality and efficiency. Creating solution and design documentation. Working on projects independently as well as being part of a large team. Developing internal training, processes, and best practices. Cross-training Junior Data Engineers or other team members in your area of expertise. Further developing skills both on the job and through formal learning channels. Assisting in pre-sales activities by providing accurate work estimates. Interacting closely with Project Management to deliver projects on time and on budget. Competencies Personal: Strong interpersonal skills, high energy and enthusiasm, integrity, and honesty; flexible, results oriented, resourceful, problem-solving ability, deal effectively with difficult situations, ability to prioritize. Leadership: Ability to gain credibility, motivate and provide leadership; work with a diverse customer base; maintain a positive attitude. Provide support and guidance to more junior team members, particularly for challenging and sensitive assignments. Operations: Ability to manage multiple projects and products. Perform the task at hand in a customer-friendly manner while utilizing time and resources efficiently and effectively. Utilize high-level expertise to address more difficult situations, both from a technical and customer service perspective. Technical: Ability to understand and communicate technical concepts; proficient with Microsoft Project, Visio, and Office products. 
Technical Skills Python (PyData, pandas, NumPy, PySpark). SQL (MS SQL, Oracle DB, Teradata). Azure Data Factory. Azure Databricks. Big Data (Spark, Pig, Hive, Sqoop, Kafka, etc.). DevOps (experience using tools such as GitHub Actions and Jenkins is preferred). Agile/Scrum. REST Services and API Management: Implementing API proxies through gateways using Apigee X and/or Apigee Edge; API design, development, and testing, including creating Swagger/OpenAPI specs. Education, Experience And Certification Post-Secondary Degree (or Diploma) related to Computer Science, MIS, or an IT-related field. A BA/BS in an unrelated field will also be considered depending on experience. 2-4 years in Data Engineering. Exposure to application design and development experience in a cloud environment. 2+ years of experience building and deploying containerized applications in a Kubernetes-enabled environment. 2+ years of experience coding REST services and APIs using one or more of the following: Python, C#, Node.js, Java. Certified Kubernetes Application Developer. Google Cloud Certified Apigee API Engineer. TTEC Digital and our 1,800+ employees pioneer engagement and growth solutions that fuel the exceptional customer experience (CX). Our sister company, TTEC Engage, is a 60,000+ employee service company, with customer service representatives located around the world. TTEC Holdings Inc. is the parent company for both Digital and Engage. When clients have a holistic need, they can draw from these independently managed centers of excellence, TTEC Digital and TTEC Engage. TTEC is a proud equal opportunity employer where all qualified applicants will receive consideration for employment without regard to age, race, color, religion, sex, sexual orientation, gender identity, national origin, or disability. TTEC has fully embraced and is committed to expanding our diverse and inclusive workforce. We strive to reflect the communities we serve while delivering amazing service and technology centered around humanity. 
Rarely do applicants meet all desired job qualifications, so if you feel you would succeed in the role above, please take a moment and share your qualifications.
Posted 1 day ago
3.0 - 5.0 years
13 - 15 Lacs
Pune, Maharashtra, India
On-site
About The Opportunity
We are a high-growth enterprise AI platform provider in the cloud services and SaaS sector, modernizing data pipelines and automating knowledge work for Fortune 500 clients. Our hybrid teams in Pune and Mumbai build production-grade generative AI solutions on Microsoft Azure, enabling real-time insights, intelligent agents, and scalable RAG applications with robust security and responsible-AI guardrails.
Role & Responsibilities
Architect, prototype, and deploy GenAI applications (LLMs, RAG, multimodal) on Azure OpenAI, Cognitive Search, and Kubernetes-based microservices.
Build and orchestrate agentic frameworks (LangChain, AutoGen) for multi-agent reasoning, tool calling, and end-to-end workflow automation.
Engineer low-latency, high-throughput data and prompt pipelines using Azure Data Factory, Event Hub, and Cosmos DB.
Optimize model performance and cost via fine-tuning, quantization, and scalable caching on Azure ML and AKS.
Implement production-grade CI/CD, observability (App Insights, Prometheus), security, and responsible-AI guardrails.
Collaborate cross-functionally with product, design, and customer success teams to deliver measurable business impact.
Skills & Qualifications
Must-Have
3-5 years of hands-on generative AI/LLM engineering (GPT, Llama 2, Claude) with at least one solution in production.
Proficiency with Microsoft Azure services: Azure OpenAI, Functions, Data Factory, Cosmos DB, AKS.
Strong Python and TypeScript skills, with experience in agentic frameworks (LangChain, AutoGen, Semantic Kernel) and REST/GraphQL APIs.
Solid foundation in cloud MLOps: Docker, Helm, Terraform/Bicep, GitHub Actions or Azure DevOps.
Proven ability to optimize end-to-end GenAI pipelines for performance, cost efficiency, and reliability.
Preferred
Experience scaling GenAI pipelines to >10K QPS using vector databases (Pinecone, Qdrant) and distributed caching.
Familiarity with prompt engineering, fine-tuning methodologies, and retrieval-augmented generation best practices.
Knowledge of Kubernetes operators, Dapr, and service-mesh patterns for resilient microservices.
Benefits & Culture Highlights
Competitive salary and a flexible hybrid work model across the Pune and Mumbai offices.
Rapid career growth within a pioneering AI leadership team.
Collaborative, innovation-driven culture emphasizing ethical and responsible AI.
Skills: Generative AI, Azure, Python, LLMs, Azure SQL, agentic frameworks, LangGraph, AutoGen, CI/CD, Kubernetes, Microsoft Azure, Cloud
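The retrieval-augmented generation pattern this role centers on can be sketched in a few lines. This is a toy illustration, not the company's stack: it uses bag-of-words cosine similarity where a real deployment would use an embedding model, a vector store such as Azure Cognitive Search, and an Azure OpenAI completion call on the assembled prompt. All document text and the query are made-up examples.

```python
import math
from collections import Counter

# Toy RAG sketch: retrieve the most similar document by cosine similarity
# over bag-of-words vectors, then assemble a context-grounded prompt.
# Production systems replace vectorize() with an embedding model and the
# list scan with a vector database query.

def vectorize(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, docs, k=1):
    """Return the k documents most similar to the query."""
    qv = vectorize(query)
    ranked = sorted(docs, key=lambda d: cosine(qv, vectorize(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, docs):
    """Ground the model: answer only from the retrieved context."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Invoices are processed nightly by the billing pipeline.",
    "The support portal resets passwords via email link.",
]
print(build_prompt("How are invoices processed?", docs))
```

The retrieve-then-prompt loop is the core of RAG regardless of scale; the engineering work the posting describes (low latency, caching, >10K QPS) lives in how retrieval and the model call are served, not in this control flow.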
Posted 1 day ago
6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Do you thrive in a fast-paced and dynamic startup environment? Do you like being part of a rapidly growing organization and growing with it? Privacera is a leading provider of software products that Fortune 1000 organizations use to discover, organize, catalog, protect, and govern their cloud and on-premise data assets. At Privacera, you will collaborate with talented colleagues who are passionate about delivering value for enterprise customers. Privacera has recently started operations in India.
We're looking for a Software Engineer at Privacera. In this role, you should be able to work independently with little supervision. You should have excellent organization and problem-solving skills. If you also have hands-on experience in software development and agile methodologies, we'd like to meet you.
Your responsibilities will include:
Contributing independently to software development projects.
Designing and developing high-quality, reliable software solutions.
Conducting validation and verification testing to ensure product functionality.
Collaborating with internal teams to enhance and refine existing products.
Reviewing and debugging code to maintain performance and quality.
Your experience should include:
Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
6+ years of experience in software development.
6+ years of experience in Java and related technologies.
3+ years of experience in Java multi-threading development.
3+ years of experience with RESTful API development.
2+ years of experience with Linux or Bash scripting.
2+ years of experience with SQL queries.
2+ years of experience with JUnit/Mockito for unit testing.
2+ years of experience with build tools (e.g., Maven) and version control (e.g., Git).
Nice to have:
Experience with Apache Ranger and data governance.
Experience with microservices and the Spring Framework.
Experience with Kubernetes/Docker deployment.
Experience with cloud services (Azure/AWS/GCP) and platforms such as Snowflake and Databricks.
Experience with big data technologies such as Apache Spark and EMR.
Experience with Jenkins and CI/CD pipelines.
What we offer:
Joining the Privacera team means joining a team recognized by Forbes as one of America's Best Startup Employers! You will be embarking on a journey with a passionate group of people building important technologies for the enterprise industry. Let your ideas, experience, and expertise help shape the future of data governance and security.
Company Overview:
At Privacera, we are solving deep security and governance challenges across hybrid cloud environments. Enterprises are moving data to the cloud at a rapid pace and enabling multiple cloud services to make use of their data. Enabling the right users to have access to the right data is a significant challenge in this cloud-based architecture. Privacera's solution provides a single pane of glass for enterprise-wide visibility of data and enables centralized controls over data in the cloud. Privacera is growing rapidly at the intersection of two of the hottest trends in the enterprise industry: enterprise data moving into the public cloud, and the need for security, governance, and privacy, which are becoming table stakes in every industry.
Company Culture:
Our corporate culture is built on the core values of openness, product excellence, and customer-centricity. We encourage team members to practice open communication, move fast, and break old molds in our journey to build the best products in the world. We are a transparent, customer-focused team that collaboratively gets things done.
Our Commitment to Diversity and Inclusion:
At Privacera, we are committed to building a diverse and inclusive culture where our teams can thrive. It is our priority to ensure that our hiring practices are inclusive and meet equal employment opportunity standards.
Candidates looking for employment at Privacera are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.
Posted 1 day ago
7.0 years
0 Lacs
Agra, Uttar Pradesh, India
Remote
We are #hiring a #seniorjavadeveloper
Job Title: Senior Java Developer
Location: #Remote, #India
Job Type: Full-time
Experience Level: 7+ years
About the Role:
We are seeking a Senior Java Developer with strong analytical skills and a deep understanding of Java technologies to join our development team. You will be responsible for designing, developing, and maintaining high-performance, scalable enterprise applications.
Responsibilities:
Design, develop, and maintain robust Java-based applications.
Collaborate with cross-functional teams to define, design, and deliver new features.
Write well-designed, efficient, and testable code.
Participate in code reviews and mentor junior developers.
Debug and resolve production issues.
Ensure best practices for security, scalability, and performance are followed.
Contribute to the entire software development lifecycle (SDLC).
Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
7+ years of experience in Java development.
Strong knowledge of Java 8/11+ and the Spring Framework (Spring Boot, Spring MVC, Spring Security).
Proficiency in RESTful API development and integration.
Experience with ORM frameworks such as Hibernate or JPA.
Strong understanding of databases (MySQL, PostgreSQL, or Oracle).
Familiarity with CI/CD pipelines and tools like Jenkins and Maven/Gradle.
Experience with version control systems (e.g., Git).
Knowledge of microservices architecture and containerization (Docker, Kubernetes) is a plus.
Familiarity with cloud platforms (AWS, Azure, or GCP) is an advantage.
Preferred Qualifications:
Experience with front-end technologies (JavaScript, Angular, React) is a plus.
Understanding of Agile/Scrum methodologies.
Excellent problem-solving and communication skills.
Please share profiles at shreshthi@atf-labs.com or shreshthichoudhary809@gmail.com
Posted 1 day ago