Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
7.0 - 12.0 years
5 - 13 Lacs
Pune
Hybrid
So, what’s the role all about?
NICE APA is a comprehensive platform that combines Robotic Process Automation, Desktop Automation, Desktop Analytics, and AI and Machine Learning solutions such as Neva Discover. NICE APA is more than just RPA: it's a full platform that brings together automation, analytics, and AI to enhance both front-office and back-office operations. It’s widely used in industries like banking, insurance, telecom, healthcare, and customer service.
We are seeking a Senior/Specialist Technical Support Engineer with a strong understanding of RPA applications and exceptional troubleshooting skills. The ideal candidate will have hands-on experience in application support, the ability to inspect and analyze RPA solutions and application servers (e.g., Tomcat, authentication, certificate renewal), and a solid understanding of RPA deployments in both on-premises and cloud-based environments (such as AWS). You should be comfortable supporting hybrid RPA architectures and handling bot automation, licensing, and infrastructure configuration in various environments. Familiarity with cloud-native services used in automation (e.g., AMQ queues, storage, virtual machines, containers) is a plus. Additionally, you’ll need a working knowledge of the underlying databases and query optimization to assist with performance and integration issues. You will be responsible for diagnosing and resolving technical issues, collaborating with development and infrastructure teams, contributing to documentation and knowledge bases, and ensuring a seamless and reliable customer experience across multiple systems and platforms.
How will you make an impact?
- Interface with various R&D groups, Customer Support teams, business partners, and customers globally to address and resolve product issues.
- Maintain quality and ongoing internal and external communication throughout your investigation.
- Provide a high level of support and minimize R&D escalations.
- Prioritize daily missions/cases and manage critical issues and situations.
- Contribute to the knowledge base, document troubleshooting and problem-resolution steps, and participate in educating/mentoring other support engineers.
- Be willing to perform on-call duties as required.
- Excellent problem-solving skills with the ability to analyze complex issues and implement effective solutions.
- Good communication skills with the ability to interact with technical and non-technical stakeholders.
Have you got what it takes?
- Minimum of 8 to 12 years of experience supporting global enterprise customers.
- Monitor, troubleshoot, and maintain RPA bots in production environments.
- Monitor and troubleshoot system performance, application health, and resource usage using tools like Prometheus, Grafana, or similar.
- Data analytics: analyze trends, patterns, and anomalies in data to identify product bugs.
- Familiarity with ETL processes and data pipelines - advantage.
- Provide L1/L2/L3 support for the RPA application, ensuring timely resolution of incidents and service requests.
- Familiarity with applications running on Linux-based Kubernetes clusters; troubleshoot and resolve incidents related to pods, services, and deployments.
- Provide technical support for applications running on both Windows and Linux platforms, including troubleshooting issues, diagnosing problems, and implementing solutions to ensure optimal performance.
- Familiarity with authentication methods like WinSSO and SAML.
- Knowledge of Windows/Linux hardening, such as TLS enforcement, encryption enforcement, and certificate configuration.
- Working and troubleshooting knowledge of Apache software components such as Tomcat, Apache HTTP Server, and ActiveMQ.
- Working and troubleshooting knowledge of SVN/version-control applications.
- Knowledge of DB schemas and structure, SQL queries (DML, DDL), and troubleshooting.
- Collect and analyze logs from servers, network devices, applications, and security tools to identify environment/application issues.
- Knowledge of terminal servers (Citrix) - advantage.
- Basic understanding of AWS cloud systems.
- Network troubleshooting skills (working with different tools).
- Certification in RPA platforms and working knowledge of RPA application development/support - advantage.
- NICE certification and knowledge of RTI/RTS/APA products - advantage.
- Integrate NICE's applications with customers' on-prem and cloud-based third-party tools and applications to ingest/transform/store/validate data.
Shift: 24x7 rotational shift (includes night shifts).
Other required skills:
- Excellent verbal and written communication skills.
- Strong troubleshooting and problem-solving skills.
- Self-motivated and directed, with keen attention to detail.
- Team player: ability to work well in a team-oriented, collaborative environment.
Enjoy NICE-FLEX! At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.
Requisition ID: 7326
Reporting into: Tech Manager
Role Type: Individual Contributor
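The log-collection duty listed above (gathering logs from servers and applications to identify environment/application issues) can be sketched in Python. This is a minimal illustration only: the "LEVEL component: message" line format and the component names are hypothetical, not NICE's actual log layout, which would need its own parsing patterns.

```python
from collections import Counter
import re

def summarize_errors(log_lines):
    """Group ERROR entries by component and message prefix to spot recurring failures.

    Assumes a hypothetical 'LEVEL component: message' format; real formats
    (Tomcat, ActiveMQ, syslog) vary and need their own regexes.
    """
    pattern = re.compile(r"^ERROR\s+(\S+):\s+(.*)$")
    counts = Counter()
    for line in log_lines:
        m = pattern.match(line.strip())
        if m:
            component, message = m.groups()
            # Bucket by component plus the first few words of the message,
            # so repeated failures with varying suffixes still group together.
            key = (component, " ".join(message.split()[:4]))
            counts[key] += 1
    return counts.most_common()

logs = [
    "INFO  scheduler: heartbeat ok",
    "ERROR tomcat: connection pool exhausted after 30s",
    "ERROR tomcat: connection pool exhausted after 30s",
    "ERROR activemq: queue depth threshold exceeded",
]
print(summarize_errors(logs))
```

A summary like this is a starting point for triage; the most frequent error bucket is usually the first candidate for a root-cause investigation.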
Posted 9 hours ago
6.0 - 9.0 years
4 - 9 Lacs
Pune
Hybrid
So, what’s the role all about?
NICE APA is a comprehensive platform that combines Robotic Process Automation, Desktop Automation, Desktop Analytics, and AI and Machine Learning solutions such as Neva Discover. NICE APA is more than just RPA: it's a full platform that brings together automation, analytics, and AI to enhance both front-office and back-office operations. It’s widely used in industries like banking, insurance, telecom, healthcare, and customer service.
We are seeking a Senior/Specialist Technical Support Engineer with a strong understanding of RPA applications and exceptional troubleshooting skills. The ideal candidate will have hands-on experience in application support, the ability to inspect and analyze RPA solutions and application servers (e.g., Tomcat, authentication, certificate renewal), and a solid understanding of RPA deployments in both on-premises and cloud-based environments (such as AWS). You should be comfortable supporting hybrid RPA architectures and handling bot automation, licensing, and infrastructure configuration in various environments. Familiarity with cloud-native services used in automation (e.g., AMQ queues, storage, virtual machines, containers) is a plus. Additionally, you’ll need a working knowledge of the underlying databases and query optimization to assist with performance and integration issues. You will be responsible for diagnosing and resolving technical issues, collaborating with development and infrastructure teams, contributing to documentation and knowledge bases, and ensuring a seamless and reliable customer experience across multiple systems and platforms.
How will you make an impact?
- Interface with various R&D groups, Customer Support teams, business partners, and customers globally to address and resolve product issues.
- Maintain quality and ongoing internal and external communication throughout your investigation.
- Provide a high level of support and minimize R&D escalations.
- Prioritize daily missions/cases and manage critical issues and situations.
- Contribute to the knowledge base, document troubleshooting and problem-resolution steps, and participate in educating/mentoring other support engineers.
- Be willing to perform on-call duties as required.
- Excellent problem-solving skills with the ability to analyze complex issues and implement effective solutions.
- Good communication skills with the ability to interact with technical and non-technical stakeholders.
Have you got what it takes?
- Minimum of 5 to 7 years of experience supporting global enterprise customers.
- Monitor, troubleshoot, and maintain RPA bots in production environments.
- Monitor and troubleshoot system performance, application health, and resource usage using tools like Prometheus, Grafana, or similar.
- Data analytics: analyze trends, patterns, and anomalies in data to identify product bugs.
- Familiarity with ETL processes and data pipelines - advantage.
- Provide L1/L2/L3 support for the RPA application, ensuring timely resolution of incidents and service requests.
- Familiarity with applications running on Linux-based Kubernetes clusters; troubleshoot and resolve incidents related to pods, services, and deployments.
- Provide technical support for applications running on both Windows and Linux platforms, including troubleshooting issues, diagnosing problems, and implementing solutions to ensure optimal performance.
- Familiarity with authentication methods like WinSSO and SAML.
- Knowledge of Windows/Linux hardening, such as TLS enforcement, encryption enforcement, and certificate configuration.
- Working and troubleshooting knowledge of Apache software components such as Tomcat, Apache HTTP Server, and ActiveMQ.
- Working and troubleshooting knowledge of SVN/version-control applications.
- Knowledge of DB schemas and structure, SQL queries (DML, DDL), and troubleshooting.
- Collect and analyze logs from servers, network devices, applications, and security tools to identify environment/application issues.
- Knowledge of terminal servers (Citrix) - advantage.
- Basic understanding of AWS cloud systems.
- Network troubleshooting skills (working with different tools).
- Certification in RPA platforms and working knowledge of RPA application development/support - advantage.
- NICE certification and knowledge of RTI/RTS/APA products - advantage.
- Integrate NICE's applications with customers' on-prem and cloud-based third-party tools and applications to ingest/transform/store/validate data.
Shift: 24x7 rotational shift (includes night shifts).
Other required skills:
- Excellent verbal and written communication skills.
- Strong troubleshooting and problem-solving skills.
- Self-motivated and directed, with keen attention to detail.
- Team player: ability to work well in a team-oriented, collaborative environment.
Enjoy NICE-FLEX! At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.
Requisition ID: 7556
Reporting into: Tech Manager
Role Type: Individual Contributor
Posted 9 hours ago
4.0 - 9.0 years
6 - 14 Lacs
Hyderabad
Work from Office
Title: .NET Developer (.NET + OpenShift or Kubernetes) | 4 to 12 years | Bengaluru & Hyderabad
- Assess and understand the application implementation while working with architects and business experts.
- Analyze business and technology challenges and suggest solutions to meet strategic objectives.
- Build cloud-native applications meeting 12/15-factor principles on OpenShift or Kubernetes.
- Migrate .NET Core and/or Framework web/API/batch components deployed in PCF Cloud to OpenShift, working independently.
- Analyze and understand the code, identify bottlenecks and bugs, and devise solutions to mitigate and address these issues.
- Design and implement unit test scripts, automated with NUnit, to achieve 80% code coverage.
- Perform back-end code reviews and ensure compliance with Sonar scans, Checkmarx, and Black Duck to maintain code quality.
- Write functional automation test cases for system integration using Selenium.
- Coordinate with architects and business experts across the application to translate key requirements.
Required Qualifications:
- 4+ years of experience in .NET Core (3.1 and above) and/or Framework (4.0 and above) development (coding, unit testing, functional automation), implementing microservices, REST APIs, batch/web components, reusable libraries, etc.
- Proficiency in C# with a good knowledge of VB.NET.
- Proficiency in cloud platforms (OpenShift, AWS, Google Cloud, Azure) and hybrid/multi-cloud strategies, with at least 3 years in OpenShift.
- Familiarity with cloud-native patterns, microservices, and application modernization strategies.
- Experience with monitoring and logging tools like Splunk, Log4j, Prometheus, Grafana, ELK Stack, AppDynamics, etc.
- Familiarity with infrastructure automation tools (e.g., Ansible, Terraform) and CI/CD tools (e.g., Harness, Jenkins, uDeploy).
- Proficiency in databases such as MS SQL Server, Oracle 11g/12c, MongoDB, and DB2.
- Experience integrating front ends with back-end services.
- Experience with code-versioning workflows using Git and GitHub.
- Familiarity with job scheduling via Autosys and PCF batch jobs.
- Familiarity with scripting languages like shell, and with Helm chart modules.
Works in the area of Software Engineering, which encompasses the development, maintenance, and optimization of software solutions/applications.
1. Applies scientific methods to analyze and solve software engineering problems.
2. Is responsible for the development and application of software engineering practice and knowledge in research, design, development, and maintenance.
3. The work requires the exercise of original thought and judgement and the ability to supervise the technical and administrative work of other software engineers.
4. The software engineer builds skills and expertise in his/her software engineering discipline to reach the standard software engineer skill expectations for the applicable role, as defined in Professional Communities.
5. The software engineer collaborates and acts as a team player with other software engineers and stakeholders.
Posted 9 hours ago
12.0 - 17.0 years
14 - 19 Lacs
Mysuru
Work from Office
The Site Reliability Engineer is a critical role in cloud-based projects. An SRE works with the development squads to build platform and infrastructure management/provisioning automation and service monitoring, using the same methods used in software development to support application development. SREs create a bridge between development and operations by applying a software engineering mindset to system administration topics. They split their time between operations/on-call duties and developing systems and software that help increase site reliability and performance.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise:
- Overall 12+ years of experience required.
- Good exposure to operational aspects (monitoring, automation, remediation), including monitoring tools like New Relic, Prometheus, ELK, distributed tracing, APM, AppDynamics, etc.
- Troubleshooting, documenting root cause analyses, and automating incident handling.
- Understands the architecture, the SRE mindset, and the platform's data model.
- Platform architecture and engineering: the ability to design and architect a cloud platform that can meet client SLAs/NFRs such as availability and system performance. The SRE will define the environment-provisioning framework, identify potential performance bottlenecks, and design a cloud platform.
Preferred technical and professional experience:
- Effectively communicate with business and technical team members.
- Creative problem-solving skills and superb communication skills.
- Telecom domain experience is an added plus.
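The availability SLAs mentioned above translate directly into an error budget, a core SRE concept. A quick sketch of the arithmetic, assuming a simple percentage-uptime SLO (real contracts often exclude maintenance windows and measure per request rather than per minute):

```python
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Allowed downtime per rolling window for a given availability SLO.

    For example, a 99.9% SLO over 30 days permits roughly 43.2 minutes
    of downtime; exceed that and the error budget is exhausted.
    """
    total_minutes = window_days * 24 * 60
    return total_minutes * (1 - slo)

print(round(error_budget_minutes(0.999), 1))  # 43.2
```

Tracking spend against this budget is what lets an SRE team decide objectively when to pause feature releases in favor of reliability work.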
Posted 10 hours ago
3.0 - 8.0 years
5 Lacs
Hyderabad
Work from Office
Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must have skills: AWS Operations
Good to have skills: NA
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years full time education
Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with team members to understand project needs, developing application features, and ensuring that the applications function seamlessly within the existing infrastructure. You will also engage in troubleshooting and optimizing applications to enhance performance and user experience, while adhering to best practices in software development.
Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation/contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Assist in the documentation of application processes and workflows.
- Engage in continuous learning to stay updated with the latest technologies and methodologies.
- Quickly identify, troubleshoot, and fix failures to minimize downtime.
- Ensure that SLAs and OLAs are met within the timelines so that operational excellence is achieved.
Professional & Technical Skills:
- Must-have skills: proficiency in AWS Operations.
- Strong understanding of cloud architecture and services.
- Experience with application development frameworks and tools.
- Familiarity with DevOps practices and CI/CD pipelines.
- Ability to troubleshoot and resolve application issues efficiently.
- Strong understanding of cloud networking concepts, including VPC design, subnets, routing, and security groups, and of implementing scalable solutions using AWS Elastic Load Balancer (ALB/NLB).
- Practical experience setting up and maintaining observability tools such as Prometheus, Grafana, CloudWatch, and the ELK stack for proactive system monitoring and alerting.
- Hands-on expertise in containerizing applications using Docker and deploying/managing them in orchestrated environments such as Kubernetes or ECS.
- Proven experience designing, deploying, and managing cloud infrastructure using Terraform, including writing reusable modules and managing state across environments.
- Good problem-solving skills: the ability to quickly identify, analyze, and resolve issues is vital.
- Effective communication: strong communication skills are necessary for collaborating with cross-functional teams and documenting processes and changes.
- Time management: efficiently managing time and prioritizing tasks is vital in operations support.
- The candidate should have a minimum of 3 years of experience in AWS Operations.
Additional Information:
- This position is based at our Hyderabad office.
- 15 years of full-time education is required.
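As a rough illustration of the SLA/OLA tracking this operations role supports, the sketch below checks incidents against per-priority resolution targets. The P1/P2/P3 targets here are invented for the example; actual SLA matrices are contract-specific.

```python
from datetime import datetime, timedelta

# Hypothetical per-priority resolution targets; real SLA matrices vary by contract.
SLA_TARGETS = {
    "P1": timedelta(hours=4),
    "P2": timedelta(hours=8),
    "P3": timedelta(hours=24),
}

def sla_breached(priority, opened_at, resolved_at):
    """Return True if the incident took longer than its resolution target."""
    return (resolved_at - opened_at) > SLA_TARGETS[priority]

opened = datetime(2024, 1, 1, 9, 0)
print(sla_breached("P1", opened, opened + timedelta(hours=5)))  # True: 5h exceeds the 4h target
print(sla_breached("P3", opened, opened + timedelta(hours=5)))  # False: within the 24h target
```

In practice a check like this would run against the ticketing system's data and feed the dashboards (Grafana, CloudWatch) named in the posting.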
Posted 10 hours ago
1.0 - 4.0 years
3 - 6 Lacs
Bengaluru
Work from Office
Title: Front End Developer - React.js, TypeScript, Git, CSS, CI/CD, Kubernetes, AWS
- Develop and maintain user-friendly web applications with React.js.
- Write clean, maintainable, and efficient code using HTML, CSS, JavaScript (ES6+), and TypeScript.
- Work closely with UX/UI designers to bring mockups to life with responsive and accessible designs.
- Optimize applications for speed, scalability, and cross-browser compatibility.
- Implement and maintain front-end state management solutions such as Redux.
- Collaborate with back-end developers to integrate APIs and ensure smooth data flow.
- Debug and resolve front-end issues, improving performance and usability.
- Stay updated with the latest front-end technologies and industry trends.
Required education: Bachelor's Degree
Preferred education: Bachelor's Degree
Required technical and professional expertise:
- 1-4 years of experience in front-end development.
- Strong proficiency in React.js and ecosystem tools.
- Experience with TypeScript.
- Proficiency in modern CSS frameworks like SCSS.
- Familiarity with version control systems like Git, and with CI/CD pipelines.
- Understanding of performance optimization techniques (lazy loading, caching, etc.).
- Knowledge of testing frameworks such as Cypress or React Testing Library.
- Knowledge of monitoring tools (Prometheus) and logging frameworks.
- Experience with Agile methodologies and working in a collaborative team environment.
Preferred technical and professional experience:
- Knowledge of open-source development and working experience on open-source projects.
- Familiarity with cloud platforms (AWS, Azure, GCP) and their storage services.
- Experience with container orchestration tools such as Kubernetes.
- Ability to work effectively in a collaborative, cross-functional team environment.
Posted 11 hours ago
2.0 - 4.0 years
4 - 6 Lacs
Bengaluru
Work from Office
ZS is a place where passion changes lives. As a management consulting and technology firm focused on improving life and how we live it, our most valuable asset is our people. Here you'll work side-by-side with a powerful collective of thinkers and experts shaping life-changing solutions for patients, caregivers, and consumers worldwide. ZSers drive impact by bringing a client-first mentality to each and every engagement. We partner collaboratively with our clients to develop custom solutions and technology products that create value and deliver company results across critical areas of their business. Bring your curiosity for learning, bold ideas, courage, and passion to drive life-changing impact to ZS.
Our most valuable asset is our people. At ZS we honor the visible and invisible elements of our identities, personal experiences, and belief systems: the ones that comprise us as individuals, shape who we are, and make us unique. We believe your personal interests, identities, and desire to learn are part of your success here. Learn more about our diversity, equity, and inclusion efforts and the networks ZS supports to assist our ZSers in cultivating community spaces, obtaining the resources they need to thrive, and sharing the messages they are passionate about.
The Platform and Product team is shaping one of the key growth vector areas for ZS. Our engagements comprise clients from industries like quick-service restaurants, technology, food & beverage, hospitality, travel, insurance, consumer goods, and other such industries across the North America, Europe, and South East Asia regions. The Platform and Product India team currently has a presence across the New Delhi, Pune, and Bengaluru offices and is continuously expanding at a great pace.
The Platform and Product India team works with colleagues across clients and geographies to create and deliver real-world, pragmatic solutions leveraging AI SaaS products and platforms, generative AI applications, and other advanced analytics solutions at scale.
What You'll Do:
- Work with cloud technologies: AWS, Azure, or GCP.
- Create container images and maintain container registries.
- Create, update, and maintain production-grade applications on Kubernetes clusters and in the cloud.
- Follow a GitOps approach to maintain deployments.
- Create YAML scripts and Helm charts for Kubernetes deployments as required.
- Take part in cloud design and architecture decisions and support lead architects in building cloud-agnostic applications.
- Create and maintain infrastructure-as-code templates to automate cloud infrastructure deployment.
- Create and manage CI/CD pipelines to automate containerized deployments to the cloud and Kubernetes.
- Maintain Git repositories and establish a proper branching strategy and release management processes.
- Support and maintain source code management and build tools.
- Monitor applications on the cloud and Kubernetes using tools like ELK, Grafana, Prometheus, etc.
- Automate day-to-day activities using scripting.
- Work closely with the development team to implement new build processes and strategies to meet new product requirements.
- Troubleshooting, problem solving, root cause analysis, and documentation related to builds, releases, and deployments.
- Ensure that systems are secure and compliant with industry standards.
What You'll Bring:
- A master's or bachelor's degree in computer science or a related field from a top university.
- 2-4+ years of hands-on experience in DevOps.
- Hands-on experience designing and deploying applications to the cloud (AWS/Azure/GCP).
- Expertise in deploying and maintaining applications on Kubernetes.
- Technical expertise in release automation engineering, CI/CD, or related roles.
- Hands-on experience writing Terraform templates as IaC, Helm charts, and Kubernetes manifests.
- A strong hold on Linux commands and script automation.
- Technical understanding of development tools, source control, and continuous integration build systems, e.g., Azure DevOps, Jenkins, GitLab, TeamCity, etc.
- Knowledge of deploying LLM models and toolchains.
- Configuration management of various environments.
- Experience working in agile teams with short release cycles.
- Good to have: programming experience in Python/Go.
- Characteristics of a forward thinker and self-starter who thrives on new challenges and adapts quickly to new knowledge.
Perks & Benefits: ZS offers a comprehensive total rewards package including health and well-being, financial planning, annual leave, personal growth, and professional development. Our robust skills development programs, multiple career progression options, internal mobility paths, and collaborative culture empower you to thrive as an individual and global team member.
We are committed to giving our employees a flexible and connected way of working. A flexible and connected ZS allows us to combine work from home and on-site presence at clients/ZS offices for the majority of our week. The magic of ZS culture and innovation thrives in both planned and spontaneous face-to-face connections.
Travel: Travel is a requirement at ZS for client-facing ZSers; the business needs of your project and client are the priority. While some projects may be local, all client-facing ZSers should be prepared to travel as needed. Travel provides opportunities to strengthen client relationships, gain diverse experiences, and enhance professional growth by working in different environments and cultures.
Considering applying? At ZS, we're building a diverse and inclusive company where people bring their passions to inspire life-changing impact and deliver better outcomes for all.
We are most interested in finding the best candidate for the job and recognize the value that candidates with all backgrounds, including non-traditional ones, bring. If you are interested in joining us, we encourage you to apply even if you don't meet 100% of the requirements listed above.
To Complete Your Application: Candidates must possess or be able to obtain work authorization for their intended country of employment. An online application, including a full set of transcripts (official or unofficial), is required to be considered.
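For the Kubernetes work described in the posting above (YAML scripts and manifests for deployments), a minimal Deployment manifest can also be generated programmatically before being serialized to YAML or applied via a pipeline. The name, labels, and image registry below are illustrative only, not any real project's conventions:

```python
import json

def deployment_manifest(name, image, replicas=2):
    """Build a minimal Kubernetes Deployment object as a plain dict.

    Only the required fields are included; a production manifest would add
    resource requests/limits, probes, and an update strategy.
    """
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": {"app": name}},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

# Serialize for inspection; in a GitOps flow this would be committed as YAML.
print(json.dumps(deployment_manifest("web", "registry.example.com/web:1.0"), indent=2))
```

Templating tools like Helm generalize exactly this pattern: the same structure with the name, image, and replica count supplied as values.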
Posted 3 days ago
4.0 - 7.0 years
11 - 16 Lacs
Pune
Hybrid
So, what’s the role all about? As a Sr. Cloud Services Automation Engineer, you will be responsible for designing, developing, and maintaining robust end-to-end automation solutions that support our customer onboarding processes, from an on-prem software solution to an Azure SaaS platform, and streamline cloud operations. You will work closely with Professional Services, Cloud Operations, and Engineering teams to implement tools and frameworks that ensure seamless deployment, monitoring, and self-healing of applications running in Azure.
How will you make an impact?
- Design and develop automated workflows that orchestrate complex processes across multiple systems, databases, endpoints, and storage solutions, on-prem and in the public cloud.
- Design, develop, and maintain internal tools/utilities using C#, PowerShell, Python, and Bash to automate and optimize cloud onboarding workflows.
- Create integrations with REST APIs and other services to ingest and process external/internal data.
- Query and analyze data from various sources such as SQL databases, Elasticsearch indices, and log files (structured and unstructured).
- Develop utilities to visualize, summarize, or otherwise make data actionable for Professional Services and QA engineers.
- Work closely with test, ingestion, and configuration teams to understand bottlenecks and build self-healing mechanisms for high availability and performance.
- Build automated data pipelines with data-consistency and reconciliation checks, using tools like Power BI/Grafana to collect metrics from multiple endpoints and generate centralized, actionable dashboards.
- Automate resource provisioning across Azure services including AKS, Web Apps, and storage solutions.
- Build infrastructure-as-code (IaC) solutions using tools like Terraform, Bicep, or ARM templates.
- Develop end-to-end workflow automation across the customer onboarding journey, spanning Day 1 to Day 2, with minimal manual intervention.
Have you got what it takes?
- Bachelor’s degree in computer science, engineering, or a related field (or equivalent experience).
- Proficiency in scripting and programming languages (e.g., C#, .NET, PowerShell, Python, Bash).
- Experience working with and integrating REST APIs.
- Experience with IaC and configuration management tools (e.g., Terraform, Ansible).
- Familiarity with monitoring and logging solutions (e.g., Azure Monitor, Log Analytics, Prometheus, Grafana).
- Familiarity with modern version control systems (e.g., GitHub).
- Excellent problem-solving skills and attention to detail.
- Ability to work with development and operations teams to achieve desired results on common projects.
- Strategic thinker, capable of learning new technologies quickly.
- Good communication with peers, subordinates, and managers.
You will have an advantage if you also have:
- Experience with AKS infrastructure administration.
- Experience orchestrating automation with Azure Automation tools like Logic Apps.
- Experience working in a secure, compliance-driven environment (e.g., CJIS/PCI/SOX/ISO).
- Certifications in vendor- or industry-specific technologies.
What’s in it for you? Join an ever-growing, market-disrupting, global company where the teams, comprised of the best of the best, work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NiCE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NiCEr!
Enjoy NiCE-FLEX! At NiCE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.
Requisition ID: 7454 Reporting into: Director Role Type: Individual Contributor
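The self-healing mechanisms mentioned in the automation role above often start with a simple retry-with-backoff wrapper around flaky operations. A minimal sketch follows; the attempt count, delays, and the `flaky` operation are arbitrary placeholders, and production code would also cap delays, log each attempt, and alert on final failure:

```python
import time

def with_retries(fn, attempts=3, base_delay=0.1):
    """Call fn, retrying on any exception with exponential backoff.

    Re-raises the last exception if every attempt fails.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            # Back off exponentially: base_delay, 2*base_delay, 4*base_delay, ...
            time.sleep(base_delay * (2 ** attempt))

# Simulated transient failure: succeeds on the third call.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(with_retries(flaky))  # "ok" after two transient failures
```

The same shape applies whether the wrapped call hits a REST API, a SQL database, or an Azure resource provisioning step; only the exception types worth retrying should be narrowed from the blanket `Exception` used here.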
Posted 3 days ago
4.0 - 7.0 years
9 - 12 Lacs
Pune
Hybrid
So, what’s the role all about? At NiCE, as a senior software professional you will specialize in designing, developing, and maintaining applications and systems using the Java programming language, playing a critical role in building scalable, robust, and high-performing applications for a variety of industries, including finance, healthcare, technology, and e-commerce.
How will you make an impact?
- Working knowledge of unit testing.
- Working knowledge of user stories or use cases.
- Working knowledge of design patterns or equivalent experience.
- Working knowledge of object-oriented software design.
- Team player.
Have you got what it takes?
- Bachelor’s degree in computer science, business information systems, or a related field, or equivalent work experience, is required.
- 4+ years (SE) of experience in software development.
- Well-established technical problem-solving skills.
- Experience in Java, Spring Boot, and microservices.
- Experience with Kafka, Kinesis, KDA, and Apache Flink.
- Experience with Kubernetes operators, Grafana, and Prometheus.
- Experience with AWS technology, including EKS, EMR, S3, Kinesis, Lambda, Firehose, IAM, CloudWatch, etc.
You will have an advantage if you also have:
- Experience with Snowflake or any DWH solution.
- Excellent communication, problem-solving, and decision-making skills.
- Experience with databases.
- Experience with CI/CD, Git, GitHub Actions, and Jenkins-based pipeline deployments.
- Strong experience in SQL.
What’s in it for you? Join an ever-growing, market-disrupting, global company where the teams, comprised of the best of the best, work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NiCE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NiCEr! Enjoy NiCE-FLEX!
At NiCE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere. Requisition ID: 6965 Reporting into: Tech Manager Role Type: Individual Contributor
Posted 3 days ago
5.0 - 8.0 years
15 - 19 Lacs
Pune
Hybrid
So, what’s the role all about? We are seeking a skilled and experienced DevOps Engineer to design, produce, and test high-quality software that meets specified functional and non-functional requirements within the given time and resource constraints. How will you make an impact? Design, implement, and maintain CI/CD pipelines using Jenkins to support automated builds, testing, and deployments. Manage and optimize AWS infrastructure for scalability, reliability, and cost-effectiveness. Develop automation scripts and tools using shell scripting and other programming languages to streamline operational workflows. Collaborate with cross-functional teams (Development, QA, Operations) to ensure seamless software delivery and deployment. Monitor and troubleshoot infrastructure, build failures, and deployment issues to ensure high availability and performance. Implement and maintain robust configuration management practices and infrastructure-as-code principles. Document processes, systems, and configurations to ensure knowledge sharing and maintain operational consistency. Perform ongoing maintenance and upgrades (production and non-production). Occasional weekend or after-hours work as needed. Have you got what it takes? Experience: 5-8 years in DevOps or a similar role. Cloud expertise: proficient in AWS services such as EC2, S3, RDS, Lambda, IAM, CloudFormation, or similar. CI/CD tools: hands-on experience with Jenkins pipelines (declarative and scripted). Scripting skills: proficiency in either shell scripting or PowerShell. Programming knowledge: familiarity with at least one programming language (e.g., Python, Java, or Go). Important: scripting/programming is integral to this role and will be a key focus in the interview process. Version control: experience with Git and Git-based workflows. Monitoring tools: familiarity with tools like CloudWatch, Prometheus, or similar. Problem-solving: strong analytical and troubleshooting skills in a fast-paced environment. 
Knowledge of AWS CDK for DevOps. You will have an advantage if you also have: Prior experience in development or automation is a significant advantage. Windows system administration is a significant advantage. Experience with monitoring and log-analysis tools is an advantage. Jenkins pipeline knowledge. What’s in it for you? Join an ever-growing, market-disrupting, global company where the teams – comprised of the best of the best – work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NiCE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NiCEr! Enjoy NiCE-FLEX! At NiCE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere. Requisition ID: 7318 Reporting into: Tech Manager Role Type: Individual Contributor
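The self-healing and troubleshooting duties described in this posting usually start with retry logic around flaky deployment or infrastructure steps. A minimal Python sketch of retry with exponential backoff, with all function names hypothetical:

```python
import time

def retry(operation, attempts=3, base_delay=0.01):
    """Run `operation`, retrying with exponential backoff on failure."""
    for attempt in range(1, attempts + 1):
        try:
            return operation()
        except Exception:
            if attempt == attempts:
                raise  # out of retries: surface the failure
            time.sleep(base_delay * 2 ** (attempt - 1))

# Simulate a flaky deployment step that succeeds on the third try.
calls = {"n": 0}

def flaky_deploy():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "deployed"

result = retry(flaky_deploy)
```

The same pattern underlies most "self-healing" automation: wrap the fragile call, bound the retries, and let the final failure escalate to a human.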
Posted 3 days ago
10.0 - 15.0 years
22 - 37 Lacs
Bengaluru
Work from Office
Who We Are At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities. The Role As an ELK (Elasticsearch, Logstash & Kibana) Data Engineer, you would be responsible for developing, implementing, and maintaining ELK stack-based solutions for Kyndryl’s clients. The role involves developing efficient and effective data and log ingestion, processing, indexing, and visualization for monitoring, troubleshooting, and analysis purposes. Responsibilities: Design, implement, and maintain scalable data pipelines using the ELK Stack (Elasticsearch, Logstash, Kibana) and Beats for monitoring and analytics. Develop data processing workflows to handle real-time and batch data ingestion, transformation, and visualization. Implement techniques like grok patterns, regular expressions, and plugins to handle complex log formats and structures. Configure and optimize Elasticsearch clusters for efficient indexing, searching, and performance tuning. Collaborate with business users to understand their data integration and visualization needs and translate them into technical solutions. Create dynamic and interactive dashboards in Kibana for data visualization and insights that help detect the root cause of issues. Leverage open-source tools such as Beats and Python to integrate and process data from multiple sources. Collaborate with cross-functional teams to implement ITSM solutions integrating ELK with tools like ServiceNow and other ITSM platforms. Perform anomaly detection using Elastic ML and create alerts using Watcher functionality. Extract data via APIs using Python. Build and deploy solutions in containerized environments using Kubernetes. 
Monitor Elasticsearch clusters for health, performance, and resource utilization. Automate routine tasks and data workflows using scripting languages such as Python or shell scripting. Provide technical expertise in troubleshooting, debugging, and resolving complex data and system issues. Create and maintain technical documentation, including system diagrams, deployment procedures, and troubleshooting guides. If you're ready to embrace the power of data to transform our business and embark on an epic data adventure, then join us at Kyndryl. Together, let's redefine what's possible and unleash your potential. Your Future at Kyndryl Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won’t find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here. Who You Are You’re good at what you do and possess the required experience to prove it. However, equally important – you have a growth mindset, keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you’re open and borderless – naturally inclusive in how you work with others. Required Technical and Professional Experience: Minimum of 5 years of experience in the ELK Stack and Python programming. Graduate/postgraduate in computer science, computer engineering, or equivalent, with a minimum of 10 years of experience in the IT industry. ELK Stack: deep expertise in Elasticsearch, Logstash, Kibana, and Beats. Programming: proficiency in Python for scripting and automation. ITSM Platforms: hands-on experience with ServiceNow or similar ITSM tools. Containerization: experience with Kubernetes and containerized applications. 
Operating Systems: strong working knowledge of Windows, Linux, and AIX environments. Open-Source Tools: familiarity with various open-source data integration and monitoring tools. Knowledge of network protocols, log management, and system performance optimization. Experience in integrating ELK solutions with enterprise IT environments. Strong analytical and problem-solving skills with attention to detail. Knowledge of MySQL or NoSQL databases will be an added advantage. Fluent in English (written and spoken). Preferred Technical and Professional Experience: “Elastic Certified Analyst” or “Elastic Certified Engineer” certification is preferable. Familiarity with additional monitoring tools like Prometheus, Grafana, or Splunk. Knowledge of cloud platforms (AWS, Azure, or GCP). Experience with DevOps methodologies and tools. Being You Diversity is a whole lot more than what we look like or where we come from; it’s how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we’re not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That’s the Kyndryl Way. What You Can Expect With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees, and support you and your family through the moments that matter – wherever you are in your life journey. 
Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed. Get Referred! If you know someone who works at Kyndryl, when asked ‘How Did You Hear About Us’ during the application process, select ‘Employee Referral’ and enter your contact's Kyndryl email address.
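The grok patterns this role calls for compile down to named-group regular expressions. A minimal stdlib-only Python sketch of the same idea (the pattern and field names are illustrative, not from any Kyndryl system):

```python
import re

# A grok pattern like %{IP:client} %{WORD:method} %{URIPATH:path} compiles
# down to a named-group regex; this sketch does the same with stdlib `re`.
LOG_PATTERN = re.compile(
    r"(?P<client>\d{1,3}(?:\.\d{1,3}){3}) "
    r"(?P<method>[A-Z]+) "
    r"(?P<path>/\S*) "
    r"(?P<status>\d{3})"
)

def parse_line(line):
    """Return a dict of named fields, or None if the line doesn't match."""
    m = LOG_PATTERN.match(line)
    return m.groupdict() if m else None

event = parse_line("10.0.0.7 GET /api/health 200")
```

In Logstash the equivalent work happens in a `grok` filter; the named groups become structured fields that Elasticsearch can index and Kibana can chart.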
Posted 3 days ago
3.0 - 8.0 years
5 - 10 Lacs
Pune
Work from Office
Since its inception in 2003, driven by visionary college students transforming online rent payment, Entrata has evolved into a global leader serving property owners, managers, and residents. Honored with prestigious awards like the Utah Business Fast 50, Silicon Slopes Hall of Fame - Software Company - 2022, and the Women Tech Council Shatter List, our comprehensive software suite spans rent payments, insurance, leasing, maintenance, marketing, and communication tools, reshaping property management worldwide. Our 2200+ global team members embody intelligence and adaptability, engaging actively from top executives to part-time employees. With offices across Utah, Texas, India, Israel, and the Netherlands, Entrata blends startup innovation with established stability, evident in our transparent communication values and executive town halls. At Entrata, we passionately refine living experiences and uphold collective excellence. Job Summary Entrata Software is seeking a DevOps Engineer to join our R&D team in Pune, India. This role will focus on automating infrastructure, streamlining CI/CD pipelines, and optimizing cloud-based deployments to improve software delivery and system reliability. The ideal candidate will have expertise in Kubernetes, AWS, Terraform, and automation tools to enhance scalability, security, and observability. Success in this role requires strong problem-solving skills, collaboration with development and security teams, and a commitment to continuous improvement. If you thrive in fast-paced, Agile environments and enjoy solving complex infrastructure challenges, we encourage you to apply! Key Responsibilities Design, implement, and maintain CI/CD pipelines using Jenkins, GitHub Actions, and ArgoCD to enable seamless, automated software deployments. Deploy, manage, and optimize Kubernetes clusters in AWS, ensuring reliability, scalability, and security. 
Automate infrastructure provisioning and configuration using Terraform, CloudFormation, Ansible, and scripting languages like Bash, Python, and PHP. Monitor and enhance system observability using Prometheus, Grafana, and ELK Stack to ensure proactive issue detection and resolution. Implement DevSecOps best practices by integrating security scanning, compliance automation, and vulnerability management into CI/CD workflows. Troubleshoot and resolve cloud infrastructure, networking, and deployment issues in a timely and efficient manner. Collaborate with development, security, and IT teams to align DevOps practices with business and engineering objectives. Optimize AWS cloud resource utilization and cost while maintaining high availability and performance. Establish and maintain disaster recovery and high-availability strategies to ensure system resilience. Improve incident response and on-call processes by following SRE principles and automating issue resolution. Promote a culture of automation and continuous improvement, identifying and eliminating manual inefficiencies in development and operations. Stay up-to-date with emerging DevOps tools and trends, implementing best practices to enhance processes and technologies. Ensure compliance with security and industry standards, enforcing governance policies across cloud infrastructure. Support developer productivity by providing self-service infrastructure and deployment automation to accelerate the software development lifecycle. Document processes, best practices, and troubleshooting guides to ensure clear knowledge sharing across teams. Minimum Qualifications 3+ years of experience as a DevOps Engineer or similar role. Strong proficiency in Kubernetes, Docker, and AWS. Hands-on experience with Terraform, CloudFormation, and CI/CD tools (Jenkins, GitHub Actions, GitLab CI/CD, ArgoCD). Solid scripting and automation skills with Bash, Python, PHP, or Ansible. 
Expertise in monitoring and logging tools such as New Relic, Prometheus, Grafana, and the ELK Stack. Understanding of DevSecOps principles, security best practices, and vulnerability management. Strong problem-solving skills and the ability to troubleshoot cloud infrastructure and deployment issues effectively. Preferred Qualifications Experience with GitOps methodologies using ArgoCD or Flux. Familiarity with SRE principles and managing incident response for high-availability applications. Knowledge of serverless architectures and AWS cost optimization strategies. Hands-on experience with compliance and governance automation for cloud security. Previous experience working in Agile, fast-paced environments with a focus on DevOps transformation. Strong communication skills and the ability to mentor junior engineers on DevOps best practices. If you're passionate about automation, cloud infrastructure, and building scalable DevOps solutions, we encourage you to apply!
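A CI/CD pipeline is, at its core, a sequence of stages that short-circuits on the first failure. A toy Python sketch of that control flow (stage names are hypothetical, not from Entrata's pipelines):

```python
def run_pipeline(stages):
    """Run named stage callables in order; stop at the first failure.

    Returns (succeeded_stage_names, failed_stage_name_or_None).
    """
    done = []
    for name, stage in stages:
        try:
            stage()
        except Exception:
            return done, name  # short-circuit, like a failed CI stage
        done.append(name)
    return done, None

ok = lambda: None

def broken():
    raise RuntimeError("tests failed")

passed, failed = run_pipeline([("build", ok), ("test", broken), ("deploy", ok)])
```

Real runners (Jenkins, GitHub Actions, ArgoCD sync waves) add parallelism, artifacts, and retries, but the stop-on-failure ordering above is the invariant they all preserve.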
Posted 4 days ago
8.0 - 12.0 years
11 - 15 Lacs
Kochi
Work from Office
Job Title - Cloud Platform Engineer Associate Manager ACS Song Management Level: Level 8 Associate Manager Location: Kochi, Coimbatore, Trivandrum Must have skills: AWS, Terraform Good to have skills: Hybrid Cloud Experience: 8-12 years of experience is required Educational Qualification: Graduation Job Summary Within our Cloud Platforms & Managed Services Solution Line, we apply an agile approach to provide true on-demand cloud platforms. We implement and operate secure cloud and hybrid global infrastructures using automation techniques for our clients' business-critical application landscapes. As a Cloud Platform Engineer, you are responsible for implementing cloud and hybrid global infrastructures using infrastructure-as-code. Roles and Responsibilities Implement cloud and hybrid infrastructures using infrastructure-as-code. Automate provisioning and maintenance for streamlined operations. Design and estimate infrastructure with an emphasis on observability and security. Establish CI/CD pipelines for seamless application deployment. Ensure data integrity and security through robust mechanisms. Implement backup and recovery procedures for data protection. Build self-service systems for enhanced developer autonomy. Collaborate with development and operations teams for platform optimization. Professional and Technical Skills Customer-focused communicator adept at engaging cross-functional teams. Cloud infrastructure expert in AWS, Azure, or GCP. Proficient in infrastructure-as-code with tools like Terraform. Experienced in container orchestration (Kubernetes, OpenShift, Docker Swarm). Skilled in observability tools like Prometheus and Grafana; competent in log aggregation tools (Loki, ELK, Graylog); familiar with tracing systems such as Tempo. CI/CD and GitOps savvy, ideally with knowledge of Argo CD or Flux. Automation proficiency in Bash and high-level languages (Python, Golang). 
Linux, networking, and database knowledge for robust infrastructure management. Hybrid cloud experience is a plus. Additional Information About Our Company | Accenture Qualification Educational Qualification: Graduation
Posted 4 days ago
15.0 - 20.0 years
5 - 9 Lacs
Chennai
Work from Office
Project Role: Application Developer Project Role Description: Design, build and configure applications to meet business process and application requirements. Must have skills: DevOps Good to have skills: NA Minimum 12 year(s) of experience is required Educational Qualification: 15 years full time education Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements in a fast-paced environment, ensuring seamless integration and functionality. Roles & Responsibilities: - Expected to be an SME, collaborate, and manage the team to perform. - Responsible for team decisions. - Engage with multiple teams and contribute on key decisions. - Expected to provide solutions to problems that apply across multiple teams. - Lead the development and implementation of software solutions. - Collaborate with cross-functional teams to define, design, and ship new features. - Ensure the best possible performance, quality, and responsiveness of applications. - Identify bottlenecks and bugs, and devise solutions to mitigate and address these issues. Professional & Technical Skills: - Must-Have Skills: Proficiency in DevOps. - Strong understanding of continuous integration and continuous deployment (CI/CD) pipelines. - Experience with infrastructure-as-code (IaC) tools like Terraform or CloudFormation. - Knowledge of containerization technologies such as Docker and Kubernetes. - Hands-on experience with monitoring and logging tools like Prometheus and the ELK stack. Additional Information: - The candidate should have a minimum of 12 years of experience in DevOps. - This position is based at our Chennai office. - A 15 years full-time education is required.
Posted 4 days ago
7.0 - 10.0 years
11 - 16 Lacs
Mumbai, Hyderabad, Pune
Work from Office
Key Responsibilities: Design, build, and maintain CI/CD pipelines for ML model training, validation, and deployment. Automate and optimize ML workflows, including data ingestion, feature engineering, model training, and monitoring. Deploy, monitor, and manage LLMs and other ML models in production (on-premises and/or cloud). Implement model versioning, reproducibility, and governance best practices. Collaborate with data scientists, ML engineers, and software engineers to streamline the end-to-end ML lifecycle. Ensure security, compliance, and scalability of ML/LLM infrastructure. Troubleshoot and resolve issues related to ML model deployment and serving. Evaluate and integrate new MLOps/LLMOps tools and technologies. Mentor junior engineers and contribute to best-practices documentation. Required Skills & Qualifications: 8+ years of experience in DevOps, with at least 3 years in MLOps/LLMOps. Strong experience with cloud platforms (AWS, Azure, GCP) and container orchestration (Kubernetes, Docker). Proficient in CI/CD tools (Jenkins, GitHub Actions, GitLab CI, etc.). Hands-on experience deploying and managing different types of AI models (e.g., OpenAI, HuggingFace, custom models) used to develop solutions. Experience with model serving tools such as TGI, vLLM, BentoML, etc. Solid scripting and programming skills (Python, Bash, etc.). Familiarity with monitoring/logging tools (Prometheus, Grafana, ELK stack). Strong understanding of security and compliance in ML environments. Preferred Skills: Knowledge of model explainability, drift detection, and model monitoring. Familiarity with data engineering tools (Spark, Kafka, etc.). Knowledge of data privacy, security, and compliance in AI systems. Strong communication skills to effectively collaborate with various stakeholders. Critical thinking and problem-solving skills are essential. Proven ability to lead and manage projects with cross-functional teams.
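Model versioning and reproducibility, listed among the responsibilities above, are often implemented by content-addressing the model artifacts: identical weights and hyperparameters always yield the same version ID, so a registry can tell when a "new" model is actually unchanged. A hedged Python sketch of the idea (the scheme is illustrative, not any specific registry's API):

```python
import hashlib
import json

def model_version(weights, hyperparams):
    """Derive a reproducible version ID from model contents."""
    payload = json.dumps(
        {"weights": weights, "hyperparams": hyperparams},
        sort_keys=True,  # canonical key ordering keeps the hash stable
    ).encode()
    return hashlib.sha256(payload).hexdigest()[:12]

v1 = model_version([0.1, 0.2], {"lr": 0.01})
v2 = model_version([0.1, 0.2], {"lr": 0.01})  # same contents, same ID
v3 = model_version([0.1, 0.2], {"lr": 0.02})  # changed hyperparameter, new ID
```

Production systems hash the serialized weight files rather than in-memory lists, but the governance property is the same: the version is a function of the content, not of when or where training ran.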
Posted 4 days ago
7.0 - 10.0 years
8 - 13 Lacs
Mumbai, Hyderabad, Pune
Work from Office
Key Responsibilities: Design, build, and maintain CI/CD pipelines for ML model training, validation, and deployment. Automate and optimize ML workflows, including data ingestion, feature engineering, model training, and monitoring. Deploy, monitor, and manage LLMs and other ML models in production (on-premises and/or cloud). Implement model versioning, reproducibility, and governance best practices. Collaborate with data scientists, ML engineers, and software engineers to streamline the end-to-end ML lifecycle. Ensure security, compliance, and scalability of ML/LLM infrastructure. Troubleshoot and resolve issues related to ML model deployment and serving. Evaluate and integrate new MLOps/LLMOps tools and technologies. Mentor junior engineers and contribute to best-practices documentation. Required Skills & Qualifications: 8+ years of experience in DevOps, with at least 3 years in MLOps/LLMOps. Strong experience with cloud platforms (AWS, Azure, GCP) and container orchestration (Kubernetes, Docker). Proficient in CI/CD tools (Jenkins, GitHub Actions, GitLab CI, etc.). Hands-on experience deploying and managing different types of AI models (e.g., OpenAI, HuggingFace, custom models) used to develop solutions. Experience with model serving tools such as TGI, vLLM, BentoML, etc. Solid scripting and programming skills (Python, Bash, etc.). Familiarity with monitoring/logging tools (Prometheus, Grafana, ELK stack). Strong understanding of security and compliance in ML environments. Preferred Skills: Knowledge of model explainability, drift detection, and model monitoring. Familiarity with data engineering tools (Spark, Kafka, etc.). Knowledge of data privacy, security, and compliance in AI systems. Strong communication skills to effectively collaborate with various stakeholders. Critical thinking and problem-solving skills are essential. Proven ability to lead and manage projects with cross-functional teams.
Posted 4 days ago
4.0 - 7.0 years
11 - 16 Lacs
Pune
Hybrid
So, what’s the role all about? As a Sr. Cloud Services Automation Engineer, you will be responsible for designing, developing, and maintaining robust end-to-end automation solutions that support our customer onboarding processes from an on-prem software solution to the Azure SaaS platform and streamline cloud operations. You will work closely with Professional Services, Cloud Operations, and Engineering teams to implement tools and frameworks that ensure seamless deployment, monitoring, and self-healing of applications running in Azure. How will you make an impact? Design and develop automated workflows that orchestrate complex processes across multiple systems, databases, endpoints, and storage solutions on-prem and in the public cloud. Design, develop, and maintain internal tools and utilities using C#, PowerShell, Python, and Bash to automate and optimize cloud onboarding workflows. Create integrations with REST APIs and other services to ingest and process external and internal data. Query and analyze data from various sources such as SQL databases, Elasticsearch indices, and log files (structured and unstructured). Develop utilities to visualize, summarize, or otherwise make data actionable for Professional Services and QA engineers. Work closely with test, ingestion, and configuration teams to understand bottlenecks and build self-healing mechanisms for high availability and performance. Build automated data pipelines with data consistency and reconciliation checks, using tools like Power BI/Grafana to collect metrics from multiple endpoints and generate centralized, actionable dashboards. Automate resource provisioning across Azure services including AKS, Web Apps, and storage solutions. Experience in building infrastructure-as-code (IaC) solutions using tools like Terraform, Bicep, or ARM templates. Develop end-to-end workflow automation in the customer onboarding journey, spanning Day 1 to Day 2 with minimal manual intervention. Have you got what it takes? 
Bachelor’s degree in computer science, engineering, or a related field (or equivalent experience). Proficiency in scripting and programming languages (e.g., C#, .NET, PowerShell, Python, Bash). Experience working with and integrating REST APIs. Experience with IaC and configuration management tools (e.g., Terraform, Ansible). Familiarity with monitoring and logging solutions (e.g., Azure Monitor, Log Analytics, Prometheus, Grafana). Familiarity with modern version control systems (e.g., GitHub). Excellent problem-solving skills and attention to detail. Ability to work with development and operations teams to achieve desired results on common projects. Strategic thinker, capable of learning new technologies quickly. Good communication with peers, subordinates, and managers. You will have an advantage if you also have: Experience with AKS infrastructure administration. Experience orchestrating automation with Azure Automation tools like Logic Apps. Experience working in a secure, compliance-driven environment (e.g., CJIS/PCI/SOX/ISO). Certifications in vendor- or industry-specific technologies. What’s in it for you? Join an ever-growing, market-disrupting, global company where the teams – comprised of the best of the best – work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NICE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NICEr! Enjoy NICE-FLEX! At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere. 
Requisition ID: 7454 Reporting into: Director of Cloud Services Role Type: Individual Contributor
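The data consistency and reconciliation checks this role describes boil down to diffing two record sets by key: anything missing from the target, and anything present in both but not equal. A minimal Python sketch (record shape and field names are hypothetical):

```python
def reconcile(source, target, key="id"):
    """Compare two record sets by key; report missing and mismatched rows."""
    src = {r[key]: r for r in source}
    tgt = {r[key]: r for r in target}
    missing = sorted(src.keys() - tgt.keys())        # in source, absent in target
    mismatched = sorted(
        k for k in src.keys() & tgt.keys() if src[k] != tgt[k]
    )                                                # present in both, differ
    return {"missing": missing, "mismatched": mismatched}

report = reconcile(
    [{"id": 1, "amt": 10}, {"id": 2, "amt": 20}, {"id": 3, "amt": 30}],
    [{"id": 1, "amt": 10}, {"id": 2, "amt": 25}],
)
```

In an onboarding pipeline the two inputs would typically be row counts or checksums pulled from the on-prem database and the Azure target, with the report feeding a Power BI or Grafana dashboard.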
Posted 5 days ago
3.0 - 7.0 years
5 - 9 Lacs
Chennai
Work from Office
Overview We are looking for a Full-stack Developer and Automation Engineer with knowledge of cloud, DevOps tools, and automation, plus excellent analytical, problem-solving, and communication skills. You'll need to have: Bachelor’s degree or two or more years of work experience. Experience working with front-end and back-end technologies for building, enhancing, and managing applications. Experience working with technologies like Python, Django, Java, ReactJS, NodeJS, and Spring Boot. Experience working with client-side scripting technologies like JavaScript, jQuery, etc. Experience in advanced SQL/procedures on MySQL/MongoDB/MariaDB/Oracle. Experience using AWS cloud infrastructure services such as EC2, ALB, RDS, etc. Experience working with serverless technologies like AWS Lambda and Google/Azure Functions. Knowledge of the SDLC with DevOps tools and agile development. Even better if you have: Experience in monitoring/alerting tools and platforms such as Prometheus, Grafana, Catchpoint, New Relic, etc. Experience with agile practices and tools used in development (Jira, Confluence, Jenkins, etc.). Experience in code review, quality, and performance tuning, with problem-solving and debugging skills. Experience with unit testing frameworks like JUnit and Mockito. Good communication and interpersonal skills to clearly articulate ideas and influence stakeholders. Very good problem-solving skills.
Posted 5 days ago
6.0 - 10.0 years
8 - 12 Lacs
Pune
Remote
What You'll Do We are looking for experienced Machine Learning Engineers with a background in software development and a deep enthusiasm for solving complex problems. You will lead a dynamic team dedicated to designing and implementing a large language model framework to power diverse applications across Avalara. Your responsibilities will span the entire development lifecycle, including conceptualization, prototyping, and delivery of LLM platform features. You will build core agent infrastructure (A2A orchestration and MCP-driven tool discovery) so teams can launch secure, scalable agent workflows. You will be reporting to the Senior Manager, Machine Learning. What Your Responsibilities Will Be We are looking for engineers who can think quickly and have a background in implementation. Your responsibilities will include: Build on top of the foundational framework for supporting large language model applications at Avalara. Experience with LLMs such as GPT, Claude, Llama, and other Bedrock models. Leverage best practices in software development, including Continuous Integration/Continuous Deployment (CI/CD), with appropriate functional and unit testing in place. Promote innovation by researching and applying the latest technologies and methodologies in machine learning and software development. Write, review, and maintain high-quality code that meets industry standards, contributing to the project's success. Lead code review sessions, ensuring good code quality and documentation. Mentor junior engineers, encouraging a culture of collaboration. Proficiency in developing and debugging software, with a preference for Python, though familiarity with additional programming languages is valued and encouraged. What You'll Need to be Successful 6+ years of experience building machine learning models and deploying them in production environments as part of creating solutions to complex customer problems. 
Proficiency working in cloud computing environments (AWS, Azure, GCP), Machine Learning frameworks, and software development best practices. Experience working with technological innovations in AI & ML (especially GenAI) and applying them. Experience with design patterns and data structures. Good analytical, design, and debugging skills. Technologies you will work with: Python, LLMs, Agents, A2A, MCP, MLFlow, Docker, Kubernetes, Terraform, AWS, GitLab, Postgres, Prometheus, and Grafana.
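The MCP-driven tool discovery mentioned in this posting can be pictured as a registry that agents query before invoking a capability. A minimal sketch, assuming a hypothetical registry design (all names here are illustrative, not Avalara's actual framework):

```python
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Tool:
    """A callable capability an agent can discover and invoke."""
    name: str
    description: str
    func: Callable[..., str]


class ToolRegistry:
    """Minimal registry: agents discover available tools, then invoke by name."""

    def __init__(self) -> None:
        self._tools: Dict[str, Tool] = {}

    def register(self, tool: Tool) -> None:
        self._tools[tool.name] = tool

    def discover(self) -> List[str]:
        """Return the sorted names of all registered tools."""
        return sorted(self._tools)

    def invoke(self, name: str, **kwargs) -> str:
        if name not in self._tools:
            raise KeyError(f"unknown tool: {name}")
        return self._tools[name].func(**kwargs)


# Hypothetical tools for illustration only.
registry = ToolRegistry()
registry.register(Tool("tax_lookup", "Look up a tax rate by region",
                       lambda region: f"rate for {region}: 8.25%"))
registry.register(Tool("echo", "Echo the input text back", lambda text: text))
```

In a real MCP setup the discovery step happens over a protocol boundary rather than an in-process dictionary, but the contract (list capabilities, then invoke by name) is the same shape.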
Posted 6 days ago
5.0 - 8.0 years
6 - 9 Lacs
Pune
Remote
What You'll Do We are looking for experienced Machine Learning Engineers with a background in software development and a deep enthusiasm for solving complex problems. You will lead a dynamic team dedicated to designing and implementing a large language model framework to power diverse applications across Avalara. Your responsibilities will span the entire development lifecycle, including conceptualization, prototyping, and delivery of LLM platform features. You will bring a blend of technical skills in AI & Machine Learning, especially with LLMs, and a deep-seated understanding of software development practices; you'll work with a team to ensure our systems are scalable, performant, and accurate. You will report to the Senior Manager, AI/ML. What Your Responsibilities Will Be We are looking for engineers who can think quickly and have a background in implementation. Your responsibilities will include: Build on top of the foundational framework supporting Large Language Model applications at Avalara. Work with LLMs such as GPT, Claude, Llama, and other Bedrock models. Apply best practices in software development, including Continuous Integration/Continuous Deployment (CI/CD), with appropriate functional and unit testing in place. Inspire creativity by researching and applying the latest technologies and methodologies in machine learning and software development. Write, review, and maintain high-quality code that meets industry standards. Lead code review sessions, ensuring good code quality and documentation. Mentor junior engineers, encouraging a culture of collaboration. Demonstrate proficiency in developing and debugging software, with a preference for Python, though familiarity with additional programming languages is valued and encouraged.
What You'll Need to be Successful Bachelor's/Master's degree in computer science with 5+ years of industry experience in software development, along with experience building Machine Learning models and deploying them in production environments. Proficiency working in cloud computing environments (AWS, Azure, GCP), Machine Learning frameworks, and software development best practices. Experience working with technological innovations in AI & ML (especially GenAI). Experience with design patterns and data structures. Good analytical, design, and debugging skills. Technologies you will work with: Python, LLMs, MLFlow, Docker, Kubernetes, Terraform, AWS, GitLab, Postgres, Prometheus, and Grafana.
Posted 6 days ago
5.0 - 8.0 years
0 Lacs
Noida
Work from Office
Senior Full Stack Engineer We are seeking a Senior Full Stack Engineer to design, build and scale a portfolio of cloud-native products including real-time speech-assessment tools, GenAI content services, and analytics dashboards used by customers worldwide. You will own end-to-end delivery across React/Next.js front-ends, Node/Python micro-services, and a MongoDB-centric data layer, all orchestrated in containers on Kubernetes, while championing multi-tenant SaaS best practices and modern MLOps. Role: Product & Architecture • Design multi-tenant SaaS services with isolated data planes, usage metering, and scalable tenancy patterns. • Lead MERN-driven feature work: SSR/ISR dashboards in Next.js, REST/GraphQL APIs in Node.js or FastAPI, and event-driven pipelines for AI services. • Build and integrate AI/ML & GenAI modules (speech scoring, LLM-based content generation, predictive analytics) into customer-facing workflows. DevOps & Scale • Containerise services with Docker, automate deployment via Helm/Kubernetes, and implement blue-green or canary roll-outs in CI/CD. • Establish observability for latency, throughput, model inference time, and cost-per-tenant across micro-services and ML workloads. Leadership & Collaboration • Conduct architecture reviews, mentor engineers, and promote a culture that pairs AI-generated code with rigorous human code review. • Partner with Product and Data teams to align technical designs with measurable business KPIs for AI-driven products. 
Required Skills & Experience • Front-End React 18, Next.js 14, TypeScript, modern CSS/Tailwind • Back-End Node 20 (Express/Nest) and Python 3.11 (FastAPI) • Databases MongoDB Atlas, aggregation pipelines, TTL/compound indexes • AI / GenAI Practical ML model integration, REST/streaming inference, prompt engineering, model fine-tuning workflows • Containerisation & Cloud Docker, Kubernetes, Helm, Terraform; production experience on AWS/GCP/Azure • SaaS at Scale Multi-tenant data isolation, per-tenant metering & rate-limits, SLA design • CI/CD & Quality GitHub Actions/GitLab CI, unit + integration testing (Jest, Pytest), E2E testing (Playwright/Cypress) Preferred Candidate Profile • Production experience with speech analytics or audio ML pipelines. • Familiarity with LLMOps (vector DBs, retrieval-augmented generation). • Terraform-driven multi-cloud deployments or FinOps optimization. • OSS contributions in MERN, Kubernetes, or AI libraries. Tech Stack & Tooling - React 18 • Next.js 14 • Node 20 • FastAPI • MongoDB Atlas • Redis • Docker • Kubernetes • Helm • Terraform • GitHub Actions • Prometheus + Grafana • OpenTelemetry • Python/Rust micro-services for ML inference
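The per-tenant metering and rate-limits this role calls for are commonly implemented as a token bucket keyed by tenant. A minimal sketch, assuming hypothetical limits (not this product's actual policy):

```python
import time
from typing import Dict, Optional, Tuple


class TenantRateLimiter:
    """Token-bucket limiter per tenant: refills `rate` tokens/sec up to `capacity`."""

    def __init__(self, rate: float, capacity: float) -> None:
        self.rate = rate
        self.capacity = capacity
        # tenant_id -> (current tokens, timestamp of last refill)
        self._buckets: Dict[str, Tuple[float, float]] = {}

    def allow(self, tenant_id: str, now: Optional[float] = None) -> bool:
        """Consume one token for this tenant if available; return whether allowed."""
        now = time.monotonic() if now is None else now
        tokens, last = self._buckets.get(tenant_id, (self.capacity, now))
        # Refill proportionally to elapsed time, capped at capacity.
        tokens = min(self.capacity, tokens + (now - last) * self.rate)
        if tokens >= 1.0:
            self._buckets[tenant_id] = (tokens - 1.0, now)
            return True
        self._buckets[tenant_id] = (tokens, now)
        return False


# Illustrative limits: 1 request/sec sustained, burst of 2.
limiter = TenantRateLimiter(rate=1.0, capacity=2.0)
```

Because each tenant has its own bucket, one noisy tenant exhausting its burst does not affect others, which is the isolation property multi-tenant SaaS rate limiting is meant to provide.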
Posted 6 days ago
1.0 - 3.0 years
3 - 7 Lacs
Thane
Work from Office
Role & responsibilities : Deploy, configure, and manage infrastructure across cloud platforms like AWS, Azure, and GCP. Automate provisioning and configuration using tools such as Terraform. Design and maintain CI/CD pipelines using Jenkins, GitLab CI, or CircleCI to streamline deployments. Build, manage, and deploy containerized applications using Docker and Kubernetes. Set up and manage monitoring systems like Prometheus and Grafana to ensure performance and reliability. Write scripts in Bash or Python to automate routine tasks and improve system efficiency. Collaborate with development and operations teams to support deployments and troubleshoot issues. Investigate and resolve technical incidents, performing root cause analysis and implementing fixes. Apply security best practices across infrastructure and deployment workflows. Maintain documentation for systems, configurations, and processes to support team collaboration. Continuously explore and adopt new tools and practices to improve DevOps workflows.
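The "write scripts in Bash or Python to automate routine tasks" responsibility above can be as small as parsing `df -h`-style output and flagging filesystems near capacity. A minimal sketch with made-up sample data (threshold and output format are illustrative):

```python
from typing import List, Tuple

def filesystems_over_threshold(df_output: str, threshold: int = 80) -> List[Tuple[str, int]]:
    """Parse `df -h`-like output; return (mount point, use%) pairs at/above threshold."""
    alerts = []
    for line in df_output.strip().splitlines()[1:]:  # skip the header row
        parts = line.split()
        use_pct = int(parts[4].rstrip("%"))  # the "Use%" column
        if use_pct >= threshold:
            alerts.append((parts[5], use_pct))
    return alerts

# Hypothetical sample output for illustration.
SAMPLE = """\
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        50G   45G    5G  90% /
/dev/sdb1       200G   40G  160G  20% /data
tmpfs            16G   12G    4G  75% /tmp
"""
```

In practice the script would feed `subprocess.run(["df", "-h"], ...)` output into the parser and page an on-call channel, but the parsing and thresholding logic is the reusable core.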
Posted 6 days ago
2.0 - 5.0 years
1 - 6 Lacs
Noida, Hyderabad
Work from Office
We are currently seeking a GCP DevOps Engineer to join our team in Ban/Hyd/Chn/Gur/Noida, Karnataka (IN-KA), India (IN). Responsibilities Design, implement, and manage GCP infrastructure using Infrastructure as Code (IaC) tools. Develop and maintain CI/CD pipelines to improve development workflows. Monitor system performance and ensure high availability of cloud resources. Collaborate with development teams to streamline application deployments. Maintain security best practices and compliance across the cloud environment. Automate repetitive tasks to enhance operational efficiency. Troubleshoot and resolve infrastructure-related issues in a timely manner. Document procedures, policies, and configurations for the infrastructure. Skills Google Cloud Platform (GCP) Terraform Ansible CI/CD Kubernetes Docker Python Bash/Shell Scripting Monitoring tools (e.g., Prometheus, Grafana) Cloud Security Jenkins Git
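The CI/CD pipelines this role maintains are dependency graphs of stages, and computing a valid execution order is a topological sort. A minimal sketch using the standard library (stage names are hypothetical):

```python
from graphlib import TopologicalSorter
from typing import Dict, List, Set

def pipeline_order(stages: Dict[str, Set[str]]) -> List[str]:
    """Return a valid execution order for {stage: set of stages it depends on}."""
    return list(TopologicalSorter(stages).static_order())

# Hypothetical pipeline: test and scan both need the build artifact;
# deploy waits for both to pass.
stages = {
    "build": set(),
    "test": {"build"},
    "scan": {"build"},
    "deploy": {"test", "scan"},
}
```

`TopologicalSorter` also raises `graphlib.CycleError` on circular dependencies, which is exactly the misconfiguration a pipeline validator should catch before anything runs.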
Posted 6 days ago
4.0 - 7.0 years
5 - 9 Lacs
Noida
Work from Office
Proficiency in Go programming language (Golang). Solid understanding of RESTful API design and microservices architecture. Experience with SQL and NoSQL databases (e.g., PostgreSQL, MongoDB, Redis). Familiarity with container technologies (Docker, Kubernetes). Understanding of distributed systems and event-driven architecture. Version control with Git. Familiarity with CI/CD pipelines and cloud platforms (AWS, GCP, Azure). Experience with message brokers (Kafka, RabbitMQ). Knowledge of GraphQL. Exposure to performance tuning and profiling. Contributions to open-source projects or personal GitHub portfolio. Familiarity with monitoring tools (Prometheus, Grafana, ELK). Roles and Responsibilities Design, develop, and maintain backend services and APIs using Go (Golang). Write efficient, scalable, and reusable code. Collaborate with front-end developers, DevOps engineers, and product teams to deliver high-quality features. Optimize applications for performance and scalability. Develop unit and integration tests to ensure software quality. Implement security and data protection best practices. Troubleshoot and debug production issues. Participate in code reviews, architecture discussions, and continuous improvement processes.
Posted 6 days ago
1.0 - 3.0 years
10 - 15 Lacs
Pune, Bengaluru
Work from Office
Must have a minimum of 1 year of experience in SRE (CloudOps), Google Cloud Platform (GCP), and monitoring, APM, and alerting tools such as Prometheus, Grafana, ELK, New Relic, Pingdom, or PagerDuty. Hands-on experience with Kubernetes for orchestration and container management. Required Candidate Profile Mandatory experience working in B2C product companies. Must have experience with CI/CD tools (e.g., Jenkins, GitLab CI/CD, CircleCI, TravisCI).
Posted 6 days ago