4.0 - 8.0 years
12 - 17 Lacs
Hyderabad
Work from Office
Software Developers collaborate with Business and Quality Analysts, Designers, Project Managers and more to design software solutions that will create meaningful change for our clients. They listen thoughtfully to understand the context of a business problem and write clean and iterative code to deliver a powerful end result. By balancing strong opinions with a willingness to find the right answer, Software Developers bring integrity to technology, ensuring all voices are heard. At Thoughtworks, we believe in going above and beyond the standard and are committed to delivering best-in-class solutions that exceed our clients' expectations. Our standard engineering and delivery practices reflect our commitment to quality, and our team is always looking to innovate and improve.

Job responsibilities
- You will learn and adopt best practices like writing clean and reusable code using TDD, pair programming and design patterns
- You will use continuous delivery practices as needed to deliver high-quality software and value to end customers
- You will work in collaborative, value-driven teams to build innovative customer experiences for our clients
- You will collaborate with a variety of teammates to build features, design concepts and interactive prototypes, and ensure best practices and UX specifications are embedded along the way
- You will partner with other technologists from cross-functional teams, advocating and demonstrating DevOps culture
- You will take ownership and accountability beyond individual deliverables, always pushing the envelope in order to deliver awesome results for our clients
- You will learn, digest and subsequently apply the latest technology thinking to solve client problems

Job qualifications
Technical Skills
- You have two or more years* of experience
- You have experience in Python as well as one or more other development languages (Java, Kotlin, JavaScript, TypeScript, Ruby, C#, etc.), with experience in Object-Oriented programming
- You can write clean, high-quality code in a variety of languages and are also able to spot (and improve) bad code
- You are familiar with Agile, Lean and/or Continuous Delivery
- You have a good awareness of TDD, continuous integration and continuous delivery approaches/tools
- Bonus points if you have knowledge of cloud technology such as AWS, Docker or Kubernetes

Professional Skills
- You thrive in a collaborative, non-hierarchical environment that values transparency, openness, feedback and change
- You have a passion for learning and sharing knowledge, as well as a desire to create the right solutions for business problems
- You're resilient in ambiguous situations and can approach challenges from multiple perspectives

*For candidates with less than two years of experience,
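The TDD practice the listing mentions can be sketched in miniature: write a failing test first, then the minimal code that makes it pass. The `word_count` function below is a hypothetical example chosen for illustration, not anything from the role itself.

```python
# Test first ("red"): specify the behaviour before implementing it.
def test_word_count():
    assert word_count("") == {}
    assert word_count("to be or not to be") == {"to": 2, "be": 2, "or": 1, "not": 1}

# Then the minimal implementation that turns the test "green".
def word_count(text: str) -> dict:
    """Count occurrences of each whitespace-separated word."""
    counts: dict = {}
    for word in text.split():
        counts[word] = counts.get(word, 0) + 1
    return counts

test_word_count()  # passes silently once the implementation is in place
```

The third step of the cycle, refactoring, happens with the tests kept green throughout.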
Posted 3 weeks ago
30.0 - 35.0 years
14 - 19 Lacs
Hyderabad
Work from Office
Senior Software Developers collaborate with Business and Quality Analysts, Designers, Project Managers and more to design software solutions that will create meaningful change for our clients. They listen thoughtfully to understand the context of a business problem and write clean and iterative code to deliver a powerful end result, whilst consistently advocating for better engineering practices. By balancing strong opinions with a willingness to find the right answer, Senior Software Developers bring integrity to technology, ensuring all voices are heard. For a team to thrive, it needs collaboration and room for healthy, respectful debate. Senior Developers are the technologists who cultivate this environment while driving teams toward delivering on an aspirational tech vision and acting as mentors for more junior-level consultants. You will leverage deep technical knowledge to solve complex business problems and proactively assess your team's health, code quality and nonfunctional requirements.

Job responsibilities
- You will learn and adopt best practices like writing clean and reusable code using TDD, pair programming and design patterns
- You will use and advocate for continuous delivery practices to deliver high-quality software as well as value to end customers as early as possible
- You will work in collaborative, value-driven teams to build innovative customer experiences for our clients
- You will create large-scale distributed systems out of microservices
- You will collaborate with a variety of teammates to build features, design concepts and interactive prototypes, and ensure best practices and UX specifications are embedded along the way
- You will apply the latest technology thinking to solve client problems
- You will efficiently utilize DevSecOps tools and practices to build and deploy software, advocating DevOps culture and shifting security left in development
- You will oversee or take part in the entire cycle of software consulting and delivery, from ideation to deployment and everything in between
- You will act as a mentor for less-experienced peers through both your technical knowledge and leadership skills

Job qualifications
Technical Skills
- You have experience in Golang as well as one or more other development languages (Java, Kotlin, JavaScript, TypeScript, Ruby, C#, etc.), with experience in Object-Oriented programming
- You can skillfully write high-quality, well-tested code
- You are comfortable with Agile methodologies, such as Extreme Programming (XP), Scrum and/or Kanban
- You have a good awareness of TDD, continuous integration and continuous delivery approaches/tools
- Bonus points if you have working knowledge of cloud technology such as AWS, Azure, Kubernetes and Docker

Professional Skills
- You enjoy influencing others and always advocate for technical excellence while being open to change when needed
- Presence in the external tech community: you willingly share your expertise with others via speaking engagements, contributions to open source, blogs and more
- You're resilient in ambiguous situations and can approach challenges from multiple perspectives
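Building large-scale distributed systems out of microservices, as the responsibilities above describe, typically means calling other services with retries and exponential backoff. The sketch below is a generic illustration of that pattern; the `flaky_call` "service" is invented for the demo.

```python
import random
import time

def retry(op, attempts=4, base_delay=0.1, sleep=time.sleep):
    """Retry op() with exponential backoff and jitter; re-raise on exhaustion."""
    for attempt in range(attempts):
        try:
            return op()
        except ConnectionError:
            if attempt == attempts - 1:
                raise
            # Delay doubles each attempt; jitter avoids thundering herds.
            sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))

# Simulated flaky downstream service: fails twice, then succeeds.
calls = {"n": 0}
def flaky_call():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

result = retry(flaky_call, sleep=lambda s: None)  # no real sleeping in the demo
print(result)  # -> ok, after two retried failures
```

Injecting the `sleep` function keeps the retry logic testable without real delays, which fits the well-tested-code expectation in the qualifications.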
Posted 3 weeks ago
4.0 - 9.0 years
12 - 16 Lacs
Gurugram
Work from Office
We are seeking a mid- to senior-level Azure Cloud Engineer to deliver cloud engineering services to Rackspace's Enterprise clients. The ideal candidate will have strong, hands-on technical skills, with the experience and consulting skills to understand, shape, and deliver against our customers' requirements.

Responsibilities
- Design and implement Azure cloud solutions that are secure, scalable, resilient, monitored, auditable, and cost optimized.
- Build out new customer cloud solutions using cloud-native components.
- Write Infrastructure as Code (IaC) using Terraform.
- Write application/infra deployment pipelines using Azure DevOps or other industry-standard deployment and configuration tools.
- Use cloud foundational architecture and components to build out automated cloud environments, CI/CD pipelines, and supporting services frameworks.
- Work with developers to identify necessary Azure resources and automate their provisioning.
- Document automation processes.
- Create and document a disaster recovery plan.
- Strong communication skills along with customer-facing experience.
- Respond to customer support requests within response-time SLAs.
- Troubleshoot performance degradation or loss of service as time-critical incidents.
- Take ownership of issues, including collaboration with other teams and escalation.
- Participate in a shared on-call rotation.
- Support the success and development of others in the team.

SKILLS & EXPERIENCE
- Engineer with 4-12 years of experience in the Azure cloud, along with writing Infrastructure as Code (IaC) and building application/infra deployment pipelines.
- Experienced in on-prem/AWS/GCP to Azure migration using tooling such as Azure Migrate.
- Expert-level knowledge of Azure products & services, scaling, load balancing, etc.
- Expert-level knowledge of Azure DevOps, Pipelines, and CI/CD.
- Expert-level knowledge of Terraform and scripting (Python/Shell/PowerShell).
- Working knowledge of containerization technologies like Kubernetes, AKS, etc.
- Working knowledge of Azure networking, such as VPN gateways, VNets, etc.
- Working knowledge of Windows or Linux operating systems, with experience supporting and troubleshooting stability and performance issues.
- Working knowledge of automating the management and enforcement of policies using Azure Policy or similar.
- Good understanding of other DevOps tools like Ansible, Jenkins, etc.
- Good understanding of the design of cloud-native applications, cloud application design patterns, and practices.
- Candidates certified in Azure administration, Azure DevOps, and Terraform are preferred.
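Terraform, which this role centres on, also accepts a JSON syntax (`.tf.json`) alongside HCL, which makes IaC easy to generate programmatically. The sketch below emits a minimal Azure resource-group definition; the resource and location names are illustrative placeholders, not anything from the listing.

```python
import json

# Minimal Terraform JSON-syntax config for an Azure resource group.
# azurerm_resource_group is an AzureRM-provider resource type; the
# name and location values here are made-up placeholders.
config = {
    "resource": {
        "azurerm_resource_group": {
            "example": {
                "name": "rg-demo",
                "location": "centralindia",
            }
        }
    }
}

tf_json = json.dumps(config, indent=2)
print(tf_json)  # write to main.tf.json, then `terraform init && terraform plan`
```

In practice most teams hand-write HCL; generated JSON configs are useful when resources are derived from external inventories or pipelines.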
Posted 3 weeks ago
15.0 - 20.0 years
16 - 20 Lacs
Gurugram, Bengaluru
Work from Office
Role Overview
We are seeking a highly skilled and experienced Data Manager to lead the development, governance, and utilization of enterprise data systems. This is a strategic leadership role focused on ensuring the seamless and secure flow of data across our platforms and teams, enabling timely and accurate access to actionable insights. The ideal candidate brings a strong foundation in data architecture, governance, and cloud-native systems, combined with hands-on experience managing cross-functional teams and implementing scalable, secure, and cost-efficient data solutions.

Your Objectives
- Optimize data systems and infrastructure to support business intelligence and analytics.
- Implement best-in-class data governance, quality, and security frameworks.
- Lead a team of data and software engineers to develop, scale, and maintain cloud-native platforms.
- Support data-driven decision-making across the enterprise.

Key Responsibilities
- Develop and enforce policies for effective and ethical data management.
- Design and implement secure, efficient processes for data collection, storage, analysis, and sharing.
- Monitor and enhance data quality, consistency, and lineage.
- Oversee integration of data from multiple systems and business units.
- Partner with internal stakeholders to support data needs, dashboards, and ad hoc reporting.
- Maintain compliance with regulatory frameworks such as GDPR and HIPAA.
- Troubleshoot data-related issues and implement sustainable resolutions.
- Ensure digital data systems are secure from breaches and data loss.
- Evaluate and recommend new data tools, architectures, and technologies.
- Support documentation using Atlassian tools and develop architectural diagrams.
- Automate cloud operations using infrastructure as code (e.g., Terraform) and DevOps practices.
- Facilitate inter-team communication to improve data infrastructure and eliminate silos.

Leadership & Strategic Duties
- Manage, mentor, and grow a high-performing data engineering team.
- Lead cross-functional collaboration with backend engineers, architects, and product teams.
- Facilitate partnerships with cloud providers (e.g., AWS) to leverage cutting-edge technologies.
- Conduct architecture reviews and PR reviews, and drive engineering best practices.
- Collaborate with business, product, legal, and compliance teams to align data operations with enterprise goals.

Required Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 10-15 years of experience in enterprise data architecture, governance, or data platform development.
- Expertise in SQL, data modelling, and modern data tools (e.g., Snowflake, dbt, Fivetran).
- Deep understanding of AWS cloud services (Lambda, ECS, RDS, DynamoDB, S3, SQS).
- Proficient in scripting (Python, Bash) and CI/CD pipelines.
- Demonstrated experience with ETL/ELT orchestration (e.g., Airflow, Prefect).
- Strong understanding of DevOps, Terraform, containerization, and serverless computing.
- Solid grasp of data security, compliance, and regulatory requirements.

Preferred Experience (Healthcare Focused)
- Experience working in healthcare analytics or data environments.
- Familiarity with EHR/EMR systems such as Epic, Cerner, Meditech, or Allscripts.
- Deep understanding of healthcare data privacy, patient information handling, and clinical workflows.

Soft Skills & Team Fit
- Strong leadership and mentoring mindset.
- Ability to manage ambiguity and work effectively in dynamic environments.
- Excellent verbal and written communication skills with technical and non-technical teams.
- Passionate about people development, knowledge sharing, and continuous learning.
- Resilient, empathetic, and strategically focused.
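Monitoring data quality, one of the responsibilities above, often starts with simple column-level checks such as null-rate thresholds. A minimal, library-free sketch follows; the field names, threshold, and synthetic rows are invented for illustration.

```python
def null_rate(rows, field):
    """Fraction of rows where the given field is missing or None."""
    if not rows:
        return 0.0
    missing = sum(1 for r in rows if r.get(field) is None)
    return missing / len(rows)

def check_quality(rows, field, max_null_rate=0.05):
    """Return (passed, observed_rate) for a null-rate threshold check."""
    rate = null_rate(rows, field)
    return rate <= max_null_rate, rate

# Illustrative record batch (synthetic data, not real patient information).
batch = [
    {"patient_id": "p1", "dob": "1980-01-01"},
    {"patient_id": "p2", "dob": None},
    {"patient_id": "p3", "dob": "1975-06-30"},
    {"patient_id": "p4", "dob": "1990-12-12"},
]
passed, rate = check_quality(batch, "dob", max_null_rate=0.05)
print(passed, rate)  # -> False 0.25 (1 of 4 rows missing dob)
```

In production, checks like this usually run inside an orchestrator (e.g., an Airflow task) and feed dashboards and alerts rather than a print statement.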
Posted 3 weeks ago
3.0 - 8.0 years
10 - 15 Lacs
Gurugram
Work from Office
As an L2 AWS Support Engineer, you will be responsible for providing advanced technical support for AWS-based solutions. You will troubleshoot and resolve complex technical issues, including those related to networking, security, and automation.

Key Responsibilities:
- Develop, manage, and optimize CI/CD pipelines using tools like Jenkins and Opsera.
- Automate infrastructure provisioning using Terraform and CloudFormation.
- Administer and optimize key AWS services, including EC2, S3, RDS, Lambda, and IAM.
- Strengthen security by implementing best practices for IAM, encryption, and network security (VPC, Security Groups, WAF, NACLs, etc.).
- Design, configure, and maintain AWS networking components such as VPCs, Subnets, Route53, Transit Gateway, and Security Groups.
- Advanced Troubleshooting: Investigate and resolve issues related to networking (VPC, subnets, security groups) and storage. Analyze and fix application performance issues on AWS infrastructure.
- Cluster Management: Create and manage EKS clusters using the AWS Management Console, AWS CLI, or Terraform. Manage Kubernetes resources such as namespaces, deployments, and services.
- Backup & Recovery: Configure and verify backups, snapshots, and disaster recovery plans. Perform DR drills as per defined procedures.
- Optimization: Monitor and optimize AWS resource utilization and costs. Suggest improvements for operational efficiency.
- Migration and Modernization: Assist with migrating workloads to AWS and modernizing existing infrastructure.
- Performance Optimization: Analyze AWS resource usage and identify optimization opportunities.
- Cost Optimization: Implement cost-saving measures, such as rightsizing instances and using reserved instances.

Technical Skills:
- Advanced understanding of AWS core services (EC2, S3, VPC, IAM, Lambda, etc.).
- Strong knowledge of AWS automation, scripting (Bash, Python, PowerShell), and the CLI.
- Experience with AWS CloudFormation and Terraform.
- Understanding of AWS security best practices and identity and access management.

Soft Skills:
- Excellent problem-solving and analytical skills.
- Strong communication and interpersonal skills.
- Ability to work independently and as part of a team.
- Customer-focused approach.

Certifications (Preferred):
- AWS Certified Solutions Architect - Associate
- AWS Certified DevOps Engineer - Professional
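Rightsizing recommendations of the kind mentioned above usually start from utilization metrics. A hedged, stdlib-only sketch follows: it flags candidates from average CPU figures. The fleet data is invented, and a real workflow would pull these numbers from CloudWatch metrics rather than a hard-coded list.

```python
from statistics import mean

def rightsizing_candidates(instances, cpu_threshold=20.0):
    """Flag instances whose average CPU utilization falls below the threshold."""
    flagged = []
    for inst in instances:
        avg_cpu = mean(inst["cpu_samples"])
        if avg_cpu < cpu_threshold:
            flagged.append((inst["id"], round(avg_cpu, 1)))
    return flagged

# Synthetic CPU-utilization samples; instance IDs are placeholders.
fleet = [
    {"id": "i-web-1", "cpu_samples": [55, 60, 48, 70]},
    {"id": "i-batch-2", "cpu_samples": [5, 8, 3, 4]},
    {"id": "i-db-3", "cpu_samples": [30, 25, 40, 35]},
]
print(rightsizing_candidates(fleet))  # -> [('i-batch-2', 5.0)]
```

Flagged instances would then be reviewed for a smaller instance type, a reserved-instance purchase, or a schedule-based shutdown.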
Posted 3 weeks ago
3.0 - 8.0 years
18 - 22 Lacs
Bengaluru
Work from Office
As a Software Development Engineer at Infinera you will work in a cross-functional, agile team developing embedded software products. You will work with most of the company's product portfolio, which leads to a quick and good overall system understanding.

Roles & Responsibilities
- Develop and own the L1 application (control path and data path), related device driver software and features, working closely with requirements and customer account teams with deep customer focus.
- Understand, drive and develop system-wide impact features, from architecture and design to delivery.

Details about the work
Understanding of some of the topics below is valuable, as these skills will be directly usable.
- L1 application SW area: software system design, inter-process communication, threading and other OS concepts.
- Device driver area: boot process on x86 processors with multi-OS support, U-Boot, coreboot. Some experience with BSPs and board provisioning/bring-up. PCI, PCIe, SPI, DMA and I2C protocols. BCM switch programming. Working knowledge of IP stack drivers, io-pkt driver.
- Experience with automated testing in a SW development environment.

We have the opportunity for you to become a systems engineer in the Embedded space and much more.

About the team
The team is also responsible for designing E2E solutions for communications frameworks and data-path setups spanning the Digital (packet) and Optical (channels) areas. We adopt smart, up-to-date technologies to ensure we keep pace with the technology world, devising efficient solutions. We have complete ownership of, and hence responsibility for, how a solution is devised and implemented. It could be home-grown or pulled from a 3rd-party application, finally ending up customized to suit our customers' needs. We go all the way in facing and resolving customer queries and issues, being directly involved with live customer issues. The team takes full responsibility for delivering a new feature on time with the right quality, using state-of-the-art continuous integration pipelines. We strive for fully automated test suites following TDD.

Education/Qualification
Candidates must have a Bachelor's degree or higher from premier institutions.

Expectations
- Stellar programming skills in one or more of C, C++, Golang, shell scripting, Python.
- Some work experience in software development on embedded/Linux platforms is preferable, but we are open as long as your programming skills are right up there.
- Quick learner of software architecture and module design.
- Capacity to connect the dots in complex legacy code while developing new features.
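The threading and inter-process communication concepts listed under the L1 application area can be illustrated with a minimal producer-consumer queue. This is a generic OS-concepts sketch in Python for brevity (embedded work like this would typically be in C/C++), not Infinera code.

```python
import queue
import threading

def producer(q, items):
    for item in items:
        q.put(item)          # hand work to the consumer thread
    q.put(None)              # sentinel: no more work

def consumer(q, results):
    while True:
        item = q.get()
        if item is None:     # sentinel received, stop consuming
            break
        results.append(item * 2)

q = queue.Queue()            # thread-safe FIFO, no explicit locking needed
results = []
t_prod = threading.Thread(target=producer, args=(q, [1, 2, 3]))
t_cons = threading.Thread(target=consumer, args=(q, results))
t_cons.start()
t_prod.start()
t_prod.join()
t_cons.join()
print(results)  # -> [2, 4, 6]
```

The sentinel-value shutdown and the blocking queue are the same ideas that appear in message-passing IPC between embedded processes, just at thread scale.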
Posted 3 weeks ago
6.0 - 11.0 years
18 - 22 Lacs
Hyderabad, Gurugram, Bengaluru
Work from Office
Overview
We are seeking an experienced Data Architect with extensive expertise in designing and implementing modern data architectures. This role requires strong software engineering principles, hands-on coding abilities, and experience building data engineering frameworks. The ideal candidate will have a proven track record of implementing Databricks-based solutions in the healthcare industry, with expertise in data catalog implementation and governance frameworks.

About the Role
As a Data Architect, you will be responsible for designing and implementing scalable, secure, and efficient data architectures on the Databricks platform. You will lead the technical design of data migration initiatives from legacy systems to a modern Lakehouse architecture, ensuring alignment with business requirements, industry best practices, and regulatory compliance.

Key Responsibilities
- Design and implement modern data architectures using the Databricks Lakehouse platform
- Lead the technical design of Data Warehouse/Data Lake migration initiatives from legacy systems
- Develop data engineering frameworks and reusable components to accelerate delivery
- Establish CI/CD pipelines and infrastructure-as-code practices for data solutions
- Implement data catalog solutions and governance frameworks
- Create technical specifications and architecture documentation
- Provide technical leadership to data engineering teams
- Collaborate with cross-functional teams to ensure alignment of data solutions
- Evaluate and recommend technologies, tools, and approaches for data initiatives
- Ensure data architectures meet security, compliance, and performance requirements
- Mentor junior team members on data architecture best practices
- Stay current with emerging technologies and industry trends

Qualifications
- Extensive experience in data architecture design and implementation
- Strong software engineering background with expertise in Python or Scala
- Proven experience building data engineering frameworks and reusable components
- Experience implementing CI/CD pipelines for data solutions
- Expertise in infrastructure-as-code and automation
- Experience implementing data catalog solutions and governance frameworks
- Deep understanding of the Databricks platform and Lakehouse architecture
- Experience migrating workloads from legacy systems to modern data platforms
- Strong knowledge of healthcare data requirements and regulations
- Experience with cloud platforms (AWS, Azure, GCP) and their data services
- Bachelor's degree in Computer Science, Information Systems, or a related field; advanced degree preferred

Technical Skills
- Programming languages: Python and/or Scala (required)
- Data processing frameworks: Apache Spark, Delta Lake
- CI/CD tools: Jenkins, GitHub Actions, Azure DevOps
- Infrastructure-as-code (optional): Terraform, CloudFormation, Pulumi
- Data catalog tools: Databricks Unity Catalog, Collibra, Alation
- Data governance frameworks and methodologies
- Data modeling and design patterns
- API design and development
- Cloud platforms: AWS, Azure, GCP
- Container technologies: Docker, Kubernetes
- Version control systems: Git
- SQL and NoSQL databases
- Data quality and testing frameworks

Optional - Healthcare Industry Knowledge
- Healthcare data standards (HL7, FHIR, etc.)
- Clinical and operational data models
- Healthcare interoperability requirements
- Healthcare analytics use cases
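Data modeling of the kind listed above often means dimensional (star-schema) design: fact tables joined to dimension tables for analytics. The toy example below uses stdlib sqlite3 purely as a stand-in for a warehouse SQL engine; the tables, rows, and column names are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Toy star schema: a fact table of encounters referencing a date dimension.
cur.execute("CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, month TEXT)")
cur.execute("CREATE TABLE fact_encounter (date_key INTEGER, cost REAL)")
cur.executemany("INSERT INTO dim_date VALUES (?, ?)",
                [(1, "2024-01"), (2, "2024-02")])
cur.executemany("INSERT INTO fact_encounter VALUES (?, ?)",
                [(1, 100.0), (1, 50.0), (2, 75.0)])
# Typical analytic query: aggregate facts grouped by a dimension attribute.
cur.execute("""
    SELECT d.month, SUM(f.cost)
    FROM fact_encounter f JOIN dim_date d USING (date_key)
    GROUP BY d.month ORDER BY d.month
""")
monthly_cost = cur.fetchall()
print(monthly_cost)  # -> [('2024-01', 150.0), ('2024-02', 75.0)]
```

On a Lakehouse platform the same shape of query would run over Delta tables via Spark SQL; the modeling pattern is what carries over, not the engine.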
Posted 3 weeks ago
7.0 - 12.0 years
12 - 17 Lacs
Gurugram, Bengaluru
Work from Office
Key Responsibilities
- Automate deployments utilizing custom templates and modules for customer environments on AWS.
- Architect AWS environment best practices and deployment methodologies.
- Create automation tools and processes to improve day-to-day functions.
- Educate customers on AWS and Rackspace best practices and architecture.
- Ensure the control, integrity, and accessibility of the cloud environment for the enterprise.
- Lead Workload/Workforce Management and Optimization related tasks.
- Mentor and assist Rackers across the Cloud Function.
- Quality-check development of technical training for all Rackers supporting Rackspace-supported CLOUD products.
- Provide technical expertise underpinning communications targeting a range of stakeholders, from individual contributors to leaders across the business.
- Collaborate with Account Managers and Business Development Consultants to build strong customer relationships.

Technical Expertise
- Experienced in solutioning and implementation of greenfield projects leveraging IaaS and PaaS for the primary site and DR.
- Near-expert knowledge of AWS products & services: compute, storage, security, networking, etc.
- Proficient skills in at least one of the following: Python, Linux, shell scripting.
- Proficient skills with git and git workflows.
- Excellent working knowledge of Windows or Linux operating systems, with experience supporting and troubleshooting issues and performance.
- Highly skilled in Terraform/IaC, including CI/CD practices.
- Working knowledge of Kubernetes.
- Experience in designing, building, implementing, analysing, migrating and troubleshooting highly available systems.
- Knowledge of at least one configuration management system, such as Chef, Ansible, Puppet or other such tools.
- Understanding of services and protocols, configuration, management, and troubleshooting of hosting environments, including web servers, databases, caching, and database services.
- Knowledge in the application of current and emerging network software and hardware technology and protocols.

Skills
- Passionate about technology, with a desire to constantly expand technical knowledge.
- Detail-oriented in documenting information and able to own customer issues through resolution.
- Able to handle multiple tasks and prioritize work under pressure.
- Demonstrates sound problem-solving skills coupled with a desire to take on responsibility.
- Strong written and verbal communication skills, both highly technical and non-technical.
- Ability to communicate technical issues to non-technical and technical audiences.

Education
Required: Bachelor's degree in Computer Science or an equivalent degree.

Certifications
Requires all three AWS Associate-level certifications or a Professional-level certification.

Experience
7+ years of total IT experience
Posted 3 weeks ago
5.0 - 10.0 years
19 - 25 Lacs
Hyderabad, Gurugram, Bengaluru
Work from Office
As a full-spectrum integrator, we assist hundreds of companies to realize the value, efficiency, and productivity of the cloud. We take customers on their journey to enable, operate, and innovate using cloud technologies, from migration strategy to operational excellence and immersive transformation. If you like a challenge, you'll love it here, because we're solving complex business problems every day, building and promoting great technology solutions that impact our customers' success. The best part is, we're committed to you and your growth, both professionally and personally. You will be part of a team designing, automating, and deploying services on behalf of our customers to the cloud in a way that allows these services to automatically heal themselves if things go south. We have deep experience applying cloud architecture techniques in virtually every industry. Every week is different and the problems you will be challenged to solve are constantly evolving. We build solutions using infrastructure-as-code so our customers can refine and reuse these processes again and again, all without having to come back to us for additional deployments.

Key Responsibilities
- Create well-designed, documented, and tested software features that meet customer requirements.
- Identify and address product bugs, deficiencies, and performance bottlenecks.
- Participate in an agile delivery team, helping to ensure the technical quality of the features delivered across the team, including documentation, testing strategies, and code.
- Help determine technical feasibility and solutions for business requirements.
- Remain up-to-date on emerging technologies and architecture, and propose ways to use them in current and upcoming projects.
- Leverage technical knowledge to cut scope while maintaining or achieving the overall goals of the product.
- Leverage technical knowledge to improve the quality and efficiency of product applications and tools.
- Willingness to travel to client locations and deliver professional services.

Qualifications
- Experience developing software in GCP, AWS, or Azure
- 5+ years of experience developing applications in Java
- 3+ years required with at least one other programming language, such as Scala, Python, Go, C#, TypeScript, Ruby
- Experience with relational databases, including designing complex schemas and queries
- Experience developing within distributed systems or a microservice-based architecture
- Strong verbal and written communication skills for documenting workflows, tools, or complex areas of a codebase
- Ability to thrive in a fast-paced environment and multi-task efficiently
- Strong analytical and troubleshooting skills
- 3+ years of experience as a technical specialist in customer-facing roles
- Experience with Agile development methodologies
- Experience with Continuous Integration and Continuous Delivery (CI/CD)

Preferred Qualifications
- Experience with GCP
- Building applications using container and serverless technologies
- Cloud certifications
- Good exposure to Agile software development and DevOps practices such as Infrastructure as Code (IaC), Continuous Integration and automated deployment
- Exposure to Continuous Integration (CI) tools (e.g. Jenkins)
- Strong practical application development experience on Linux and Windows-based systems
- Experience working directly with customers, partners or third-party developers

Location: Remote, Bangalore, Gurgaon, Hyderabad
Posted 3 weeks ago
5.0 - 7.0 years
50 - 80 Lacs
Bengaluru
Work from Office
Serko is a cutting-edge tech platform in global business travel & expense technology. When you join Serko, you become part of a team of passionate travellers and technologists bringing people together, using the world's leading business travel marketplace. We are proud to be an equal opportunity employer; we embrace the richness of diversity, showing up authentically to create a positive impact. There's an exciting road ahead of us, where travel needs real, impactful change. With offices in New Zealand, Australia, North America, and China, we are thrilled to be expanding our global footprint, landing our new hub in Bengaluru, India. With a rapid growth plan in place for India, we're hiring people from different backgrounds, experiences, abilities, and perspectives to help us build a world-class team and product. As a Principal Engineer based in our Bengaluru office, you'll help shape the technical direction of our products while working closely with engineering and product leaders. This is a key role for someone who enjoys solving complex problems, influencing architecture, and mentoring others—all while staying close to the code. Requirements You'll work across teams, collaborating with engineers, architects, and product stakeholders to help deliver scalable, user-focused solutions in a high-growth, collaborative environment.
What You'll Do
- Contribute to setting and evolving technical direction across product streams in partnership with senior engineers, architects, and product teams
- Champion engineering best practices—focusing on code quality, performance, maintainability, and security
- Lead by example, writing clean, efficient, and scalable code while guiding others through design and review processes
- Identify and help resolve cross-team technical challenges, keeping delivery on track and aligned with architectural goals
- Collaborate closely with Product Managers and Designers to deliver impactful features that solve real user problems
- Explore new technologies and suggest ways we can apply them to improve our platform and processes
- Mentor engineers within your team and across streams, fostering a culture of growth, ownership, and collaboration
- Stay current on industry trends, and contribute to continuous improvement in how we build, test, and deliver software

What You'll Bring
- Strong hands-on experience in modern technologies relevant to your stream (e.g. Java, Kotlin, .NET, React, AWS, or similar)
- A solid grasp of software architecture, system design, and performance considerations in production environments
- Demonstrated experience solving complex engineering challenges in a collaborative, team-based setup
- A pragmatic approach to problem-solving—balancing short-term needs with long-term scalability and maintainability
- Clear, effective communication skills and a collaborative working style
- Experience mentoring or guiding engineers through design and development
- Familiarity with agile software development and CI/CD practices
- A degree in Computer Science, Engineering, or a related field—or equivalent practical experience

Benefits
At Serko we aim to create a place where people can come and do their best work. This means you'll be operating in an environment with great tools and support to enable you to perform at the highest level of your abilities, producing high-quality work and delivering innovative and efficient results. Our people are fully engaged, continuously improving, and encouraged to make an impact. Some of the benefits of working at Serko are:
- A competitive base pay
- Medical benefits
- Discretionary incentive plan based on individual and company performance
- Focus on development: access to a learning & development platform and the opportunity to own your career pathways
- Flexible work policy

Apply
Hit the 'apply' button now, or explore more about what it's like to work at Serko and all our global opportunities at www.Serko.com.
Posted 3 weeks ago
5.0 - 8.0 years
15 - 20 Lacs
Bengaluru
Work from Office
We are looking for a skilled Oracle PCF Engineer with hands-on experience in the design, integration, deployment, and support of Policy and Charging Function (PCF) solutions in 4G/5G networks. The ideal candidate should have solid expertise in Oracle Communications PCRF/PCF, strong telecom domain knowledge, and the ability to troubleshoot complex policy-related network issues in real time.
Roles and Responsibilities
Design, configure, and deploy Oracle PCF solutions across 4G/5G network environments.
Perform integration with other core network elements such as CHF, SMF, AMF, and UDR.
Implement and test policy rules as per service-provider requirements.
Collaborate with solution architects, testers, and system integrators for end-to-end service flow validation.
Monitor and analyze PCF performance using tools and logs; perform root-cause analysis for incidents.
Provide L2/L3 support, including post-deployment troubleshooting and upgrades.
Participate in capacity planning, software patching, and lifecycle management of PCF platforms.
Maintain high availability and redundancy of the PCF nodes.
Prepare and maintain technical documentation, configurations, and operational procedures.
Work with cross-functional teams (Core, OSS/BSS, Cloud, Security) to align PCF behavior with network policies.
Primary Skills
In-depth knowledge of 3GPP standards for 4G LTE and 5G SA/NSA networks, particularly the Policy and Charging Control (PCC) architecture.
Experience with Oracle Communications Policy Management (OCPM).
Strong understanding of network functions such as SMF, AMF, CHF, and UDR, and interfaces such as N7, N15, Gx, Rx, and Sy.
Proficiency in Linux/Unix systems, shell scripting, and system-level debugging.
Familiarity with Diameter, HTTP/2, and REST-based protocols.
Experience working with Kubernetes-based deployments, VNFs, or CNFs.
Exposure to CI/CD tools, automation frameworks, and telecom service orchestration.
Good to Have
Experience with 5GC cloud-native deployments (e.g., on OCI, AWS, or OpenStack).
Familiarity with Oracle OCI or other telecom cloud platforms.
Knowledge of 5G QoS models, slice management, and dynamic policy handling.
Experience with monitoring tools such as Prometheus, Grafana, or the ELK stack.
Prior involvement in DevOps or SRE practices within telecom environments.
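Policy rules of the kind described above travel over 5G service-based interfaces such as N7 as JSON documents. As a loose illustration only, the sketch below assembles a PCC rule and its QoS decision; field names follow 3GPP TS 29.512 conventions informally, and nothing here represents a validated schema or an Oracle PCF API.

```python
import json

def build_sm_policy_decision(rule_id, precedence, qi, mbr_ul, mbr_dl):
    # Illustrative SmPolicyDecision-style structure. Field names loosely follow
    # 3GPP TS 29.512 conventions; this is NOT a validated schema or a real API.
    qos_ref = f"qos-{rule_id}"
    return {
        "pccRules": {
            rule_id: {
                "pccRuleId": rule_id,
                "precedence": precedence,   # lower value = higher priority
                "refQosData": [qos_ref],
            }
        },
        "qosDecs": {
            qos_ref: {
                "qosId": qos_ref,
                "5qi": qi,          # 5G QoS Identifier
                "maxbrUl": mbr_ul,  # maximum bitrate, uplink
                "maxbrDl": mbr_dl,  # maximum bitrate, downlink
            }
        },
    }

decision = build_sm_policy_decision("video-rule", precedence=10, qi=7,
                                    mbr_ul="5 Mbps", mbr_dl="20 Mbps")
print(json.dumps(decision, indent=2))
```

A real PCF would return a payload of this shape in response to an SMF's SM Policy request on N7.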
Posted 3 weeks ago
3.0 - 8.0 years
10 - 20 Lacs
Hyderabad, Ahmedabad, Bengaluru
Work from Office
SUMMARY
Sr. Site Reliability Engineer: Keep Planet-Scale Systems Reliable, Secure, and Fast (On-site only)
At Ajmera Infotech, we build planet-scale platforms for NYSE-listed clients, from HIPAA-compliant health systems to FDA-regulated software that simply cannot fail. Our 120+ elite engineers design, deploy, and safeguard mission-critical infrastructure trusted by millions.
Why You'll Love It
Dev-first SRE culture: automation, CI/CD, and a zero-toil mindset
TDD, monitoring, and observability baked in, not bolted on
Code-first reliability: script, ship, and scale with real ownership
Mentorship-driven growth with exposure to regulated industries (HIPAA, FDA, SOC2)
End-to-end impact: own infrastructure across Dev and Ops
Requirements
Key Responsibilities
Architect and manage scalable, secure Kubernetes clusters (k8s/k3s) in production
Develop scripts in Python, PowerShell, and Bash to automate infrastructure operations
Optimize performance, availability, and cost across cloud environments
Design and enforce CI/CD pipelines using Jenkins, Bamboo, and GitHub Actions
Implement log monitoring and proactive alerting systems
Integrate and tune observability tools like Prometheus and Grafana
Support both development and operations pipelines for continuous delivery
Manage infrastructure components including Artifactory, Nginx, Apache, and IIS
Drive compliance readiness across HIPAA, FDA, ISO, and SOC2
Must-Have Skills
3-8 years in SRE or infrastructure engineering roles
Kubernetes (k8s/k3s) production experience
Scripting: Python, PowerShell, Bash
CI/CD tools: Jenkins, Bamboo, GitHub Actions
Experience with log monitoring, alerting, and observability stacks
Cross-functional pipeline support (Dev + Ops)
Tooling: Artifactory, Nginx, Apache, IIS
Performance, availability, and cost-efficiency tuning
Nice-to-Have Skills
Background in regulated environments (HIPAA, FDA, ISO, SOC2)
Multi-OS platform experience
Integration of Prometheus, Grafana, or similar observability platforms
Benefits
What We Offer
Competitive salary package with performance-based bonuses.
Comprehensive health insurance for you and your family.
Flexible working hours and generous paid leave.
High-end workstations and access to our in-house device lab.
Sponsored learning: certifications, workshops, and tech conferences.
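The log-monitoring and proactive-alerting responsibility in this role can be sketched in a few lines of Python. This is a minimal illustration assuming timestamped plain-text log lines; a production setup would use an observability stack (e.g., Prometheus alerting rules) rather than a hand-rolled scanner.

```python
from collections import deque
from datetime import datetime, timedelta

def error_bursts(log_lines, threshold=3, window=timedelta(minutes=5)):
    """Return the timestamps at which ERROR lines within `window` reach `threshold`."""
    recent = deque()   # timestamps of recent ERROR lines
    alerts = []
    for line in log_lines:
        ts_str, level, _msg = line.split(" ", 2)
        if level != "ERROR":
            continue
        ts = datetime.fromisoformat(ts_str)
        recent.append(ts)
        # Drop errors that fell out of the sliding window.
        while recent and ts - recent[0] > window:
            recent.popleft()
        if len(recent) >= threshold:
            alerts.append(ts)
    return alerts

logs = [
    "2024-01-01T10:00:00 INFO boot",
    "2024-01-01T10:01:00 ERROR disk",
    "2024-01-01T10:02:00 ERROR disk",
    "2024-01-01T10:03:00 ERROR disk",
]
alerts = error_bursts(logs)
print(alerts)  # the third ERROR inside the 5-minute window trips the alert
```

The same threshold-over-window idea is what a Prometheus alerting rule expresses declaratively.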
Posted 3 weeks ago
3.0 - 8.0 years
10 - 20 Lacs
Hyderabad, Ahmedabad, Bengaluru
Work from Office
SUMMARY
Sr. Data Analytics Engineer (Databricks): Power mission-critical decisions with governed insights
Company: Ajmera Infotech Private Limited (AIPL)
Location: Ahmedabad, Bangalore/Bengaluru, Hyderabad (On-site)
Experience: 5-9 years
Position Type: Full-time, Permanent
Ajmera Infotech builds planet-scale software for NYSE-listed clients, driving decisions that can't afford to fail. Our 120-engineer team specializes in highly regulated domains (HIPAA, FDA, SOC 2) and delivers production-grade systems that turn data into strategic advantage.
Why You'll Love It
End-to-end impact: build full-stack analytics, from lakehouse pipelines to real-time dashboards.
Fail-safe engineering: TDD, CI/CD, DAX optimization, Unity Catalog, cluster tuning.
Modern stack: Databricks, PySpark, Delta Lake, Power BI, Airflow.
Mentorship culture: lead code reviews, share best practices, grow as a domain expert.
Mission-critical context: help enterprises migrate legacy analytics into cloud-native, governed platforms.
Compliance-first mindset: work in HIPAA-aligned environments where precision matters.
Requirements
Key Responsibilities
Build scalable pipelines using SQL, PySpark, and Delta Live Tables on Databricks.
Orchestrate workflows with Databricks Workflows or Airflow; implement SLA-backed retries and alerting.
Design dimensional models (star/snowflake) with Unity Catalog and Great Expectations validation.
Deliver robust Power BI solutions: dashboards, semantic layers, paginated reports, DAX.
Migrate legacy SSRS reports to Power BI with zero loss of logic or governance.
Optimize compute and cost through cache tuning, partitioning, and capacity monitoring.
Document everything, from pipeline logic to RLS rules, in Git-controlled formats.
Collaborate cross-functionally to convert product analytics needs into resilient BI assets.
Champion mentorship by reviewing notebooks and dashboards and sharing platform standards.
Must-Have Skills
5+ years in analytics engineering, with 3+ in production Databricks/Spark contexts.
Advanced SQL (incl. windowing); expert PySpark, Delta Lake, Unity Catalog.
Power BI mastery: DAX optimization, security rules, paginated reports.
SSRS-to-Power BI migration experience (RDL logic replication).
Strong Git and CI/CD familiarity, and cloud platform know-how (Azure/AWS).
Communication skills to bridge technical and business audiences.
Nice-to-Have Skills
Databricks Data Engineer Associate certification.
Streaming pipeline experience (Kafka, Structured Streaming).
dbt, Great Expectations, or similar data quality frameworks.
BI diversity: experience with Tableau, Looker, or similar platforms.
Cost governance familiarity (Power BI Premium capacity, Databricks chargeback).
Benefits
What We Offer
Competitive salary package with performance-based bonuses.
Comprehensive health insurance for you and your family.
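The "SLA-backed retries and alerting" responsibility above boils down to a retry wrapper that escalates once attempts are exhausted. Below is a minimal Python sketch; the alert hook is a stand-in for whatever a real deployment would call (a pager, an Airflow `on_failure_callback`, etc.).

```python
import time

def with_retries(fn, attempts=3, base_delay=0.01, on_exhausted=print):
    """Call fn, retrying with exponential backoff; fire an alert after the final failure."""
    for i in range(attempts):
        try:
            return fn()
        except Exception as exc:
            if i == attempts - 1:
                # All attempts used up: alert, then re-raise so the SLA breach is visible.
                on_exhausted(f"ALERT: {fn.__name__} failed after {attempts} attempts: {exc}")
                raise
            time.sleep(base_delay * 2 ** i)  # 0.01s, 0.02s, 0.04s, ...

# A task that fails twice, then succeeds, simulating a transient outage.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient")
    return "ok"

result = with_retries(flaky)
print(result)  # succeeds on the third attempt
```

Airflow's `retries`/`retry_exponential_backoff` task parameters express the same policy declaratively.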
Posted 3 weeks ago
3.0 - 7.0 years
3 - 7 Lacs
Hyderabad, Pune, Gurugram
Work from Office
Your role
We are looking for a technically skilled and detail-oriented Headless Content & Asset Delivery Engineer to build and maintain scalable content pipelines using Adobe Experience Manager (AEM) SaaS and Adobe Assets. This role will be instrumental in enabling modular, API-driven content delivery and real-time personalization across digital channels. In this role, you will play a key part in:
Headless Content Pipeline Development
Design and implement headless content delivery pipelines using AEM SaaS and Adobe Assets.
Ensure content is structured for reuse, scalability, and performance across multiple endpoints.
Component & Asset Architecture
Develop and maintain modular CMS components and Digital Asset Management (DAM) structures.
Establish best practices for metadata tagging, asset versioning, and content governance.
Personalization Integration
Integrate content delivery with personalization APIs to support contextual rendering based on user behavior and profile data.
Collaborate with personalization and decisioning teams to align content logic with targeting strategies.
Workflow Automation
Automate content publication workflows, including metadata enrichment, asset delivery, and approval processes.
Leverage AEM workflows and scripting to streamline operations and reduce manual effort.
Your profile
AEM as a Cloud Service (SaaS)
Digital Asset Management (DAM)
Personalization API Integration
Workflow Automation
CI/CD & DevOps for Content
What you'll love about working here
You can shape your career with us. We offer a range of career paths and internal opportunities within the Capgemini group. You will also get personalized career guidance from our leaders. You will get comprehensive wellness benefits including health checks, telemedicine, insurance with top-ups, elder care, partner coverage, and new-parent support via flexible work. At Capgemini, you can work on projects in tech and engineering with industry leaders or create solutions to overcome societal and environmental challenges.
Location - Gurgaon, Hyderabad, Pune, Kolkata (ex Calcutta), Chennai (ex Madras), Mumbai (ex Bombay), Bangalore, Gandhinagar, Noida
Posted 3 weeks ago
5.0 - 9.0 years
8 - 12 Lacs
Bengaluru
Work from Office
About The Role
5+ years of experience with DevOps tools and container orchestration platforms such as Docker and Kubernetes. Experienced in Linux environments, Jenkins, and CI/CD integrated pipelines with Python scripting concepts. Experience working with GitHub repositories, managing pull requests, branching strategies, GitHub Enterprise, and automation using GitHub APIs or the GitHub CLI.
Primary Skills
DevOps tools, CI/CD integrated pipelines, Docker, Kubernetes, Linux environments, Python scripting, and Jenkins
Posted 3 weeks ago
3.0 - 8.0 years
0 - 2 Lacs
Hyderabad, Ahmedabad, Bengaluru
Work from Office
SUMMARY
Kubernetes Engineer: Build bulletproof infrastructure for regulated industries
At Ajmera Infotech, we're building planet-scale software for NYSE-listed clients with a 120+ strong engineering team. Our work powers mission-critical systems in HIPAA, FDA, and SOC2-compliant domains where failure is not an option.
Why You'll Love It
Own production-grade Kubernetes deployments at real scale
Drive TDD-first DevOps in CI/CD environments
Work in a compliance-first org (HIPAA, FDA, SOC2) with code-first values
Collaborate with top-tier engineers in multi-cloud deployments
Career growth via mentorship, deep-tech projects, and leadership tracks
Requirements
Key Responsibilities
Design, deploy, and manage resilient Kubernetes clusters (k8s/k3s)
Automate workload orchestration using Ansible or custom scripting
Integrate Kubernetes deeply into CI/CD pipelines
Tune infrastructure for performance, scalability, and regulatory reliability
Support secure multi-tenant environments and compliance needs (e.g., HIPAA/FDA)
Must-Have Skills
3-8 years of hands-on experience in production Kubernetes environments
Expert-level knowledge of containerization with Docker
Proven experience with CI/CD integration for k8s
Automation via Ansible, shell scripting, or similar tools
Infrastructure performance tuning within Kubernetes clusters
Nice-to-Have Skills
Multi-cloud cluster management (AWS/GCP/Azure)
Helm, ArgoCD, or Flux for deployment and GitOps
Service mesh, ingress controllers, and pod security policies
Benefits
Competitive salary package with performance-based bonuses.
Comprehensive health insurance for you and your family.
Flexible working hours and generous paid leave.
High-end workstations and access to our in-house device lab.
Sponsored learning: certifications, workshops, and tech conferences.
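Automating workload orchestration, as this role describes, often starts with generating manifests programmatically. Below is a minimal Python sketch that renders a Deployment manifest as a dict (the image registry is hypothetical); in practice the dict would be serialized to YAML and applied via kubectl, Helm, or a GitOps controller.

```python
def deployment_manifest(name, image, replicas=2, labels=None):
    """Render a Kubernetes apps/v1 Deployment as a plain dict."""
    labels = {"app": name, **(labels or {})}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            # The selector must match the pod template's labels.
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

# registry.example.com is a placeholder, not a real registry.
m = deployment_manifest("api", "registry.example.com/api:1.4.2", replicas=3)
print(m["metadata"]["name"], m["spec"]["replicas"])
```

Templating tools like Helm or Jsonnet do the same job at scale, but the underlying object shape is exactly this.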
Posted 3 weeks ago
3.0 - 8.0 years
0 - 2 Lacs
Hyderabad, Ahmedabad, Bengaluru
Work from Office
SUMMARY
CI/CD Pipeline Engineer: Build mission-critical release pipelines for regulated industries (On-site only)
At Ajmera Infotech, we engineer planet-scale software with a 120-strong dev team powering NYSE-listed clients. From HIPAA-grade healthcare systems to FDA-audited workflows, our code runs where failure isn't an option.
Why You'll Love It
TDD/BDD culture: we build defensible code from day one
Code-first pipelines: GitHub Actions, Octopus, IaC principles
Mentorship-driven growth: senior engineers help level you up
End-to-end ownership: deploy what you build
Audit-readiness baked in: work in HIPAA, FDA, SOC2 landscapes
Cross-platform muscle: deploy to Linux, macOS, Windows
Requirements
Key Responsibilities
Design and maintain CI pipelines using GitHub Actions (or Jenkins/Bamboo)
Own build and release automation across dev, staging, and prod
Integrate with Octopus Deploy (or equivalent) for continuous delivery
Configure pipelines for multi-platform environments
Build compliance-resilient workflows (SOC2, HIPAA, FDA)
Manage source control (Git), Jira, Confluence, and build APIs
Implement advanced deployment strategies: canary, blue-green, rollback
Must-Have Skills
CI expertise: GitHub Actions, Jenkins, or Bamboo
Deep understanding of build/release pipelines
Cross-platform deployment: Linux, macOS, Windows
Experience with compliance-first CI/CD practices
Proficiency with Git, Jira, Confluence, and API integrations
Nice-to-Have Skills
Octopus Deploy or similar CD tools
Experience with containerized multi-stage pipelines
Familiarity with feature flagging, canary releases, and rollback tactics
Benefits
Competitive salary package with performance-based bonuses.
Comprehensive health insurance for you and your family.
Flexible working hours and generous paid leave.
High-end workstations and access to our in-house device lab.
Sponsored learning: certifications, workshops, and tech conferences.
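The canary, blue-green, and rollback strategies this role lists all hinge on a metrics-driven promotion decision. Here is a minimal, self-contained sketch of that decision logic; the tolerance and traffic step sizes are illustrative values, not defaults from any particular tool.

```python
def canary_verdict(canary_error_rate, baseline_error_rate, tolerance=0.01):
    """Promote the canary only if its error rate stays within `tolerance`
    of the stable baseline; otherwise roll back."""
    return "promote" if canary_error_rate <= baseline_error_rate + tolerance else "rollback"

def next_traffic_step(current_pct, step=10, verdict="promote"):
    """Widen the canary's traffic share stepwise on promotion; drop to zero on rollback."""
    return min(100, current_pct + step) if verdict == "promote" else 0

# Canary at 1.2% errors vs 1.0% baseline: within tolerance, so widen traffic.
v = canary_verdict(0.012, 0.010)
print(v, next_traffic_step(10, verdict=v))
```

Progressive-delivery controllers (e.g., Argo Rollouts or Flagger) automate exactly this loop: measure, compare against the baseline, then step traffic up or cut it to zero.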
Posted 3 weeks ago
3.0 - 8.0 years
0 - 2 Lacs
Hyderabad, Ahmedabad, Bengaluru
Work from Office
SUMMARY
Sr. Cloud Infrastructure Engineer: Build the Backbone of Mission-Critical Software (On-site only)
Ajmera Infotech is a planet-scale engineering firm powering NYSE-listed clients with a 120+ strong team of elite developers. We build fail-safe, compliant software systems that cannot go down, and now we're hiring a senior cloud engineer to help scale our infrastructure to the next level.
Why You'll Love It
Terraform everything: zero-click, GitOps-driven provisioning pipelines
Hardcore compliance: build infrastructure aligned with HIPAA, FDA, and SOC2
Infra across OSes: automate for Linux, macOS, and Windows environments
Own secrets & state: use Vault, Packer, and Consul like a HashiCorp champion
Team of pros: collaborate with engineers who write tests before code
Dev-first culture: code reviews, mentorship, and CI/CD by default
Real-world scale: Azure-first systems powering critical applications
Requirements
Key Responsibilities
Design and automate infrastructure as code using Terraform, Ansible, and GitOps
Implement secure secret management via HashiCorp Vault
Build CI/CD-integrated infra automation across hybrid environments
Develop scripts and tooling in PowerShell, Bash, and Python
Manage cloud infrastructure primarily on Azure, with exposure to AWS
Optimize for performance, cost, and compliance at every layer
Support infrastructure deployments using containerization tools (e.g., Docker, Kubernetes)
Must-Have Skills
3-8 years in infrastructure automation and cloud engineering
Deep expertise in Terraform (provisioning, state management)
Hands-on with HashiCorp Vault, Packer, and Consul
Strong Azure experience
Proficiency with Ansible and GitOps workflows
Cross-platform automation: Linux, macOS, Windows
CI/CD knowledge for infra pipelines
REST API usage for automation tasks
PowerShell, Python, and Bash scripting
Nice-to-Have Skills
AWS exposure
Cost-performance optimization experience in cloud environments
Containerization for infra deployments (Docker, Kubernetes)
Benefits
What We Offer
Competitive salary package with performance-based bonuses.
Comprehensive health insurance for you and your family.
Flexible working hours and generous paid leave.
High-end workstations and access to our in-house device lab.
Sponsored learning: certifications, workshops, and tech conferences.
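Secret management in provisioning pipelines like the ones above usually includes a preflight check before any resources are touched. A minimal Python sketch follows; a plain dict stands in for a Vault KV mount here, whereas real tooling would call Vault's HTTP API or the hvac client.

```python
def preflight_secrets(store, required):
    """Verify every required secret path exists and is non-empty before provisioning.

    `store` is anything with a dict-like .get(); in production this would wrap
    a Vault KV read rather than an in-memory dict.
    """
    missing = [path for path in required if not store.get(path)]
    if missing:
        # Fail fast: better to abort here than half-provision without credentials.
        raise SystemExit(f"aborting: missing secrets {missing}")
    return True

# In-memory stand-in for a Vault KV mount (values are dummies).
vault_kv = {"db/password": "s3cr3t", "tls/cert": "-----BEGIN CERTIFICATE-----"}
print(preflight_secrets(vault_kv, ["db/password", "tls/cert"]))
```

Running this as the first step of a CI/CD infra job means a missing secret fails the pipeline immediately with a clear message instead of surfacing mid-apply.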
Posted 3 weeks ago
0.0 - 1.0 years
0 Lacs
Bengaluru
Work from Office
DIATOZ SOLUTIONS PVT LTD is looking for a DevOps Engineer Intern to join our dynamic team and embark on a rewarding career journey. Responsibilities include: collaborating with coworkers to conceptualize, develop, and release software; conducting quality assurance to ensure that the software meets prescribed guidelines; rolling out fixes and upgrades to software as needed; securing software to prevent security breaches and other vulnerabilities; collecting and reviewing customers' feedback to enhance user experience; suggesting alterations to workflow to improve efficiency and success; and pitching ideas for projects based on gaps in the market and technological advancements.
Posted 3 weeks ago
4.0 - 7.0 years
10 - 15 Lacs
Pune
Hybrid
So, what's the role all about?
We are seeking a skilled Senior Data Engineer to join our Actimize Watch Data Analytics team. In this role, you will collaborate closely with the Data Science team, Business Analysts, and SMEs to monitor and optimize the performance of machine learning models. You will be responsible for running various analytics on data stored in S3 using advanced Python techniques, generating performance reports and visualizations in Excel, and showcasing model performance and stability metrics through BI tools such as Power BI and QuickSight.
How will you make an impact?
Data Integration and Management:
Design, develop, and maintain robust Python scripts to support analytics and machine learning model monitoring.
Ensure data integrity and quality across various data sources, primarily focusing on data stored in AWS S3.
Check the data integrity and correctness of new customers being onboarded to Actimize Watch.
Analytics and Reporting:
Work closely with Data Scientists, BAs, and SMEs to understand model requirements and monitoring needs.
Perform complex data analysis and visualization using Jupyter Notebooks, leveraging advanced Python libraries and techniques.
Generate comprehensive model performance and stability reports and showcase them in BI tools.
Standardize diverse analytics processes through automation and innovative approaches.
Model Performance Monitoring:
Implement monitoring solutions to track the performance and drift of machine learning models in production for various clients.
Analyze model performance over time and identify potential issues or areas for improvement.
Develop automated alerts and dashboards to provide real-time insights into model health.
Business Intelligence and Visualization:
Create and maintain dashboards in BI tools like Tableau, Power BI, and QuickSight to visualize model performance metrics.
Collaborate with stakeholders to ensure the dashboards meet business needs and provide actionable insights.
Continuously improve visualization techniques to enhance the clarity and usability of the reports.
Collaboration and Communication:
Work closely with cross-functional teams, including Data Scientists, Product Managers, Business Analysts, and SMEs, to understand requirements and deliver solutions.
Communicate findings and insights effectively to both technical and non-technical stakeholders.
Provide support and training to team members on data engineering and analytics best practices and tools.
Have you got what it takes?
5 to 7 years of experience in data engineering, with a focus on analytics, data science, and machine learning model monitoring.
Proficiency in Python and experience with Jupyter Notebooks for data analysis.
Strong experience with AWS services, particularly S3 and related data processing tools.
Expertise in Excel for reporting and data manipulation.
Hands-on experience with BI tools such as Tableau, Power BI, and QuickSight.
Solid understanding of machine learning concepts and model performance metrics.
Strong Python and SQL skills for querying and manipulating large datasets.
Excellent problem-solving and analytical skills.
Ability to work in a fast-paced, collaborative environment.
Strong communication skills with the ability to explain technical concepts to non-technical stakeholders.
Preferred Qualifications:
Experience with other AWS services such as Glue, as well as BI tools like QuickSight and Power BI.
Familiarity with CI/CD pipelines and automation tools.
Knowledge of data governance and security best practices.
What's in it for you?
Join an ever-growing, market-disrupting, global company where the teams, comprised of the best of the best, work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NiCE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations.
If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NiCEr!
Enjoy NiCE-FLEX!
At NiCE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.
Requisition ID: 7900
Reporting into: Tech Manager
Role Type: Individual Contributor
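The model-drift monitoring this role describes is often implemented with a statistic such as the Population Stability Index (PSI) computed over binned score distributions. A minimal sketch follows; the 0.1 reading below is a common rule of thumb for "no significant drift", not a universal standard.

```python
import math

def psi(expected, actual):
    """Population Stability Index between two binned distributions (proportions).

    `expected` is the baseline (e.g., training-time) bin proportions,
    `actual` the current production proportions over the same bins.
    """
    eps = 1e-6  # guard against log(0) for empty bins
    return sum((a - e) * math.log((a + eps) / (e + eps))
               for e, a in zip(expected, actual))

baseline = [0.25, 0.25, 0.25, 0.25]   # model scores evenly spread at training time
current  = [0.30, 0.25, 0.25, 0.20]   # production scores have shifted slightly
score = psi(baseline, current)
print(round(score, 4))  # small shift: PSI stays well below the 0.1 rule of thumb
```

A monitoring job would compute this per model per day and raise an alert (or a dashboard flag) once the PSI crosses the chosen threshold.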
Posted 3 weeks ago
4.0 - 7.0 years
25 - 30 Lacs
Hyderabad
Work from Office
Fusion Plus Solutions Inc is looking for a Senior DevOps Engineer to join our dynamic team and embark on a rewarding career journey. Responsibilities include: collaborating with coworkers to conceptualize, develop, and release software; conducting quality assurance to ensure that the software meets prescribed guidelines; rolling out fixes and upgrades to software as needed; securing software to prevent security breaches and other vulnerabilities; collecting and reviewing customers' feedback to enhance user experience; suggesting alterations to workflow to improve efficiency and success; and pitching ideas for projects based on gaps in the market and technological advancements.
Posted 3 weeks ago
2.0 - 5.0 years
4 - 8 Lacs
Hyderabad
Work from Office
Role Objective
The Software Engineer will collaborate with Data Scientists and AI Engineers to develop and integrate scalable backend services, tools for agents, and intuitive user interfaces that support the deployment and usability of AI-driven applications.
Technical/Domain Skills (must): Python, RESTful APIs & Integration, Cloud Platform (Azure), SQL, CI/CD
Posted 3 weeks ago
1.0 - 5.0 years
11 - 12 Lacs
Hyderabad
Work from Office
Responsibilities
Participate in the development of OnCall Analytics solutions using SAFe agile methodologies.
Design and implement ETL changes as per customer needs.
Support the design and implementation of functionalities in the OnCall Analytics Portal, Configuration Manager, and Link Analysis web applications.
Design and implement continuous integration and deployment.
Deliver features (user stories) with utmost quality.
Create cloud offerings of the BI products.
Provide ad-hoc analytics support and activities in a collaborative work environment.
Deliver code supporting different versions of MSSQL and environments.
Build and deploy custom components on ETL supporting different versions.
Education / Qualifications
B.E, B.Tech, M.Tech (CSE)
Advanced analytical and technical skills; MSSQL and SSIS mandatory.
Extensive hands-on experience building ETL from scratch.
Knowledge of SSAS and Power BI is an additional advantage.
Query optimization techniques and performance-oriented design planning.
Efficient structuring of data warehouses.
Knowledgeable in cloud, preferably Azure.
Familiarity with build and CI tools.
Strong communication skills.
Understanding of cloud concepts.
Posted 3 weeks ago
5.0 - 9.0 years
9 - 13 Lacs
Bengaluru
Work from Office
We are seeking a skilled DevOps Engineer with strong expertise in Microsoft Azure infrastructure and Azure DevOps practices. The ideal candidate will have hands-on experience building and deploying pipelines, managing virtual machine scale sets (VMSS), and working with agent pools. Proficiency in shell scripting, GitHub repository management, and excellent communication skills are essential. Experience working with large teams and handling releases will be an added advantage.
Key Expectations:
Proven experience managing Microsoft Azure infrastructure, preferably with an Azure Administrator certification, demonstrating deep knowledge of Azure services and best practices.
Hands-on expertise with Azure DevOps, including building and deploying pipeline images, managing VM Scale Sets (VMSS), and configuring agent pools to support scalable and automated deployments.
Strong shell scripting skills to automate infrastructure tasks, deployments, and monitoring processes efficiently.
Proficient in using GitHub as a code repository for version control, branching strategies, and collaboration across distributed teams.
Excellent verbal and written communication skills to effectively coordinate with development, QA, and operations teams, document processes, and report progress clearly.
Experience working with large teams to coordinate and manage software releases, ensuring smooth deployment cycles and minimizing downtime.
Ability to troubleshoot, debug, and resolve infrastructure and deployment issues promptly to maintain continuous integration and continuous delivery (CI/CD) pipelines.
Familiarity with infrastructure as code (IaC) tools and practices (e.g., ARM templates, Terraform) is a plus and will enhance automation capabilities.
#ProductEngineering
Location: IND:AP:Hyderabad / Argus Bldg 4f & 5f, Sattva, Knowledge City - Adm: Argus Building, Sattva, Knowledge City
Job ID: R-72436-1
Date posted: 07/18/2025
Posted 3 weeks ago
8.0 - 13.0 years
25 - 30 Lacs
Hyderabad
Work from Office
Job Description:
Mandatory Skills: AWS, CI/CD, Jenkins, Chef, Terraform
Good to Have Skills: Scripting skills
Experience: 8+ years only
Posted 3 weeks ago