0 years
0 Lacs
Andhra Pradesh, India
On-site
At PwC, our people in managed services focus on a variety of outsourced solutions and support clients across numerous functions. These individuals help organisations streamline their operations, reduce costs, and improve efficiency by managing key processes and functions on their behalf. They are skilled in project management, technology, and process optimization to deliver high-quality services to clients. Those in managed service management and strategy at PwC will focus on transitioning and running services, along with managing delivery teams, programmes, commercials, performance and delivery risk. Your work will involve continuous improvement and optimisation of the managed services processes, tools and services.

Focused on relationships, you are building meaningful client connections and learning how to manage and inspire others. Navigating increasingly complex situations, you are growing your personal brand, deepening technical expertise and awareness of your strengths. You are expected to anticipate the needs of your teams and clients, and to deliver quality. Embracing increased ambiguity, you are comfortable when the path forward isn't clear, you ask questions, and you use these moments as opportunities to grow.

Skills
Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to:
- Respond effectively to the diverse perspectives, needs, and feelings of others.
- Use a broad range of tools, methodologies and techniques to generate new ideas and solve problems.
- Use critical thinking to break down complex concepts.
- Understand the broader objectives of your project or role and how your work fits into the overall strategy.
- Develop a deeper understanding of the business context and how it is changing.
- Use reflection to develop self-awareness, enhance strengths and address development areas.
- Interpret data to inform insights and recommendations.
- Uphold and reinforce professional and technical standards (e.g. refer to specific PwC tax and audit guidance), the Firm's code of conduct, and independence requirements.

Role Overview
The Java Support Analyst is responsible for maintaining, troubleshooting, and optimizing enterprise Java applications. The role involves incident resolution, performance tuning, API troubleshooting, database optimization, and CI/CD deployment support. The analyst will work in an Agile, DevOps-driven environment and support legacy modernization, application enhancements, stabilization, and performance improvements for mission-critical applications in the Freight, Rail, and Logistics industries.

Required Technical Skills
🔹 Java, Spring Boot, Hibernate, JPA, REST APIs, Microservices
🔹 Database performance tuning (Oracle, MySQL, PostgreSQL, SQL Server, MongoDB)
🔹 CI/CD pipelines (Jenkins, GitHub Actions, GitLab CI/CD, Azure DevOps)
🔹 Cloud platforms (AWS, Azure, GCP) and containerized deployments (Docker, Kubernetes)
🔹 Monitoring tools (Splunk, ELK, Dynatrace, AppDynamics, New Relic)
🔹 Security frameworks (OAuth, JWT, SAML, SSL/TLS, LDAP, Active Directory)

Key Responsibilities

1️⃣ Incident & Problem Management
✅ Provide Level 2/3 support for Java applications, resolving production issues, API failures, and backend errors.
✅ Diagnose and troubleshoot Java-based application crashes, memory leaks, and performance bottlenecks.
✅ Analyze logs using Splunk, ELK Stack, Dynatrace, AppDynamics, or New Relic.
✅ Work with ITIL-based Incident, Problem, and Change Management processes.
✅ Perform root cause analysis (RCA) for recurring production issues and implement permanent fixes.

2️⃣ Java Application Debugging & Optimization
✅ Debug and analyze Java applications built on Spring Boot, Hibernate, and Microservices.
✅ Fix issues related to RESTful APIs, SOAP web services, JSON/XML parsing, and data serialization.
✅ Optimize Garbage Collection (GC), CPU, and memory utilization for Java applications.
✅ Work with Java profiling tools (JVisualVM, YourKit, JProfiler) to identify slow processes.
✅ Assist developers in resolving code-level defects and SQL performance issues.

3️⃣ API & Integration Support
✅ Troubleshoot REST APIs, SOAP services, and microservices connectivity issues.
✅ Monitor and debug API Gateway traffic (Kong, Apigee, AWS API Gateway, or Azure API Management).
✅ Handle authentication and security for APIs using OAuth 2.0, JWT, SAML, and LDAP.
✅ Work on third-party system integrations with SAP, Salesforce, ServiceNow, or Workday.

4️⃣ Database Support & SQL Performance Tuning
✅ Analyze and optimize SQL queries, stored procedures, and indexing strategies.
✅ Troubleshoot deadlocks, connection pooling, and slow DB transactions in Oracle, PostgreSQL, MySQL, or SQL Server.
✅ Work with NoSQL databases like MongoDB, Cassandra, or DynamoDB for cloud-based applications.
✅ Manage ORM (Hibernate, JPA) configurations for efficient database transactions.

5️⃣ CI/CD & Deployment Support
✅ Support CI/CD pipelines using Jenkins, GitHub Actions, GitLab CI/CD, or Azure DevOps.
✅ Work on Docker- and Kubernetes-based deployments for Java applications.
✅ Assist in automated testing and validation before production releases.
✅ Troubleshoot deployment failures, rollback strategies, and hotfix releases.

6️⃣ Cloud & DevOps Support
✅ Monitor Java applications deployed on AWS, Azure, or GCP using CloudWatch, Azure Monitor, or Stackdriver.
✅ Support containerized deployments using Kubernetes, OpenShift, or ECS.
✅ Manage logging, monitoring, and alerting for cloud-native Java applications.
✅ Assist in configuring Infrastructure as Code (Terraform, Ansible, or CloudFormation) for DevOps automation.

7️⃣ Security & Compliance Management
✅ Ensure Java applications comply with security standards (GDPR, HIPAA, SOC 2, ISO 27001).
✅ Monitor and mitigate security vulnerabilities using SonarQube, Veracode, or Fortify.
✅ Implement SSL/TLS security measures and API rate limiting to prevent abuse.

8️⃣ Collaboration & Documentation
✅ Work in Agile (Scrum/Kanban) environments for application support and bug fixes.
✅ Maintain technical documentation, troubleshooting guides, and runbooks.
✅ Conduct knowledge transfer sessions for junior support engineers.
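To make the GC-tuning and log-analysis expectations above concrete, here is a minimal, hypothetical sketch of the kind of triage script such a role involves; it assumes JVM GC logs in the unified `-Xlog:gc` format, and the regex and 200 ms threshold are illustration choices, not a prescribed tool:

```python
import re
import sys

# Matches unified JVM GC log lines (-Xlog:gc), e.g.
# "[2.345s][info][gc] GC(7) Pause Young (Normal) (G1 Evacuation Pause) 24M->12M(256M) 3.456ms"
PAUSE_RE = re.compile(r"GC\((\d+)\)\s+(Pause [A-Za-z ]+).*?(\d+\.\d+)ms")

def flag_long_pauses(path, threshold_ms=200.0):
    """Yield (gc_id, pause_kind, pause_ms) for GC pauses longer than threshold_ms."""
    with open(path) as log:
        for line in log:
            m = PAUSE_RE.search(line)
            if m and float(m.group(3)) > threshold_ms:
                yield int(m.group(1)), m.group(2).strip(), float(m.group(3))

if __name__ == "__main__":
    for gc_id, kind, ms in flag_long_pauses(sys.argv[1]):
        print(f"GC({gc_id}) {kind}: {ms:.1f} ms")
```

A pass like this narrows a memory or latency incident down to specific collections before reaching for a profiler such as JProfiler or YourKit.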
Posted 2 weeks ago
5.0 years
0 Lacs
Pune/Pimpri-Chinchwad Area
On-site
Job Description

About the Role
You'll be a key contributor on our AI Engineering team, building and maintaining the production-grade microservices and APIs that power our GenAI products: chatbots, document pipelines, retrieval endpoints, and embedding search. Your primary focus will be delivering clean, well-tested code; robust API designs; and reliable CI/CD processes.

Core Responsibilities

API & Service Development
- Design and implement RESTful (and optionally gRPC) Python services using FastAPI, Flask, or Django
- Define clear API contracts (e.g. OpenAPI/Swagger) and maintain semantic versioning

Production-Grade Code Quality
- Apply SOLID principles and clean-code practices to keep services modular and maintainable
- Perform regular refactoring to reduce technical debt and adhere to style guides (flake8, black)
- Drive thorough code reviews, enforcing best practices and design consistency

Testing & Validation
- Adopt test-driven development: write and maintain unit, integration, and end-to-end tests with pytest
- Mock external dependencies (LLM clients, vector stores) to validate error handling and edge cases
- Ensure high test coverage and set up automated quality gates in CI pipelines

CI/CD & Deployment
- Build and maintain CI/CD pipelines (GitHub Actions, Jenkins, or GitLab CI) that run tests, linting, security scans, and deployments
- Containerize services with Docker and deploy to Kubernetes (or serverless) environments
- Automate release/versioning workflows and rollback strategies for low-risk releases

Collaboration & Documentation
- Partner with MLOps, Data Science, and UX/UI teams to integrate new model capabilities
- Maintain up-to-date design docs, API specs, and "getting started" guides for engineering peers
- Contribute to sprint planning, design reviews, and process improvements

Required Qualifications
- Experience: 3–5 years building production Python services
- Frameworks: FastAPI, Flask, or Django for API development; asynchronous frameworks (AsyncIO, aiohttp) for high-concurrency endpoints
- APIs & Protocols: strong REST experience; basic gRPC or streaming is a plus
- Testing: proven TDD with pytest (unit/integration tests and mocks)
- CI/CD: hands-on with GitHub Actions, Jenkins, GitLab CI, or equivalent
- Containers & Orchestration: proficiency with Docker; experience deploying to Kubernetes or serverless

Nice-to-Have & Growth Areas
- Familiarity with vector stores (Faiss, Pinecone, Weaviate) and embedding-search integration
- Experience with WebSockets or SSE for real-time chat
- Exposure to message brokers (Kafka, RabbitMQ) for event-driven architectures
- Knowledge of feature flagging, A/B testing, or experimentation platforms
- Experience in other languages such as Java or C++

Soft Skills
- Problem solver: diagnoses and debugs complex issues across code, infrastructure, and external services
- Communicator: explains design trade-offs clearly to both technical and non-technical audiences
- Collaborator: works effectively in cross-functional teams and helps peers level up
- Learner: quickly adopts new tools and practices in the fast-moving GenAI landscape

Additional Information

Our Benefits
- Flexible working environment
- Volunteer time off
- LinkedIn Learning
- Employee Assistance Program (EAP)

About NIQ
NIQ is the world's leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights, delivered with advanced analytics through state-of-the-art platforms, NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world's population. For more information, visit NIQ.com.

Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook

Our commitment to Diversity, Equity, and Inclusion
NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
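As an illustration of the TDD-with-mocks workflow this role emphasizes, here is a minimal sketch of a FastAPI retrieval endpoint and a pytest test that stubs out the vector store; the `VectorStore` protocol, the `/search` route, and the payload shape are hypothetical, not NIQ's actual API:

```python
from typing import Protocol

from fastapi import FastAPI
from fastapi.testclient import TestClient

class VectorStore(Protocol):
    def search(self, query: str, k: int) -> list[str]: ...

def create_app(store: VectorStore) -> FastAPI:
    """Build the app with its vector-store dependency injected, so tests can pass a fake."""
    app = FastAPI()

    @app.get("/search")
    def search(q: str, k: int = 3) -> dict:
        return {"query": q, "results": store.search(q, k)}

    return app

# --- pytest test with a stubbed dependency ---
class FakeStore:
    def search(self, query: str, k: int) -> list[str]:
        return [f"doc-{i}" for i in range(k)]

def test_search_returns_k_results():
    client = TestClient(create_app(FakeStore()))
    resp = client.get("/search", params={"q": "pricing", "k": 2})
    assert resp.status_code == 200
    assert resp.json()["results"] == ["doc-0", "doc-1"]
```

Injecting the store rather than importing it globally is what makes the edge-case and error-handling tests cheap to write.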
Posted 2 weeks ago
0 years
0 Lacs
Hyderābād
On-site
Job Requirements

Architect & Lead Storage Subsystem Development:
- Design and lead implementation of the Linux-based storage stack for embedded or server platforms.
- Define architecture for storage interfaces (eMMC, UFS, NVMe, SATA, SD, USB mass storage, etc.).
- Optimize for performance, power, and reliability on the target SoC or platform.

Driver Development & Integration:
- Develop and maintain Linux kernel drivers for storage devices and controllers.
- Ensure upstream alignment with mainline Linux or maintain vendor-specific forks as needed.
- Integrate vendor storage controller IPs and firmware.

File System & Block Layer Expertise:
- Work with Linux file systems (ext4, f2fs, xfs, btrfs).
- Optimize storage stack performance using I/O schedulers, caching strategies, and tuning.

Reliability, Data Integrity & Power Resilience:
- Implement support for journaling, wear leveling (especially for flash), secure erase, and TRIM.
- Ensure data integrity during power loss (power-fail robustness).
- Work with hardware teams on power rail sequencing and power management integration.

Cross-Functional Collaboration:
- Coordinate with SoC vendors, QA, product management, and firmware/hardware teams.
- Collaborate with bootloader, security, and OTA (Over-The-Air) update teams for seamless storage handling.

Debugging & Performance Analysis:
- Use tools like blktrace, iostat, fio, perf, strace, and kernel logs for performance and issue analysis.
- Root-cause field issues (e.g., storage corruption, I/O latency) across layers.

Compliance & Validation:
- Validate storage against JEDEC/UFS/SD/USB/NVMe standards.
- Ensure support for secure boot, encrypted storage (dm-crypt, LUKS), and SELinux/AppArmor policies where needed.

Mentorship & Leadership:
- Lead and mentor a team of kernel and platform developers.
- Conduct code reviews and establish best practices for Linux storage development.

Work Experience
- Kernel Programming: strong knowledge of Linux storage subsystems (block layer, VFS, I/O stack); proficiency in C and kernel debugging techniques.
- Storage Protocols & Interfaces: hands-on with eMMC, UFS, NVMe, USB mass storage, SATA, SPI-NAND/NOR, SDIO, etc.; understanding of storage standards (SCSI, AHCI, the NVMe spec, JEDEC).
- Filesystems: deep knowledge of ext4 and f2fs, and familiarity with log-structured or flash-optimized file systems.
- Performance & Tuning: expertise in tuning I/O performance and handling flash-specific issues (latency, endurance, etc.).
- Tools: blktrace, iostat, fio, perf, gdb, crash, etc.
- Security: secure storage handling, key management, dm-verity/dm-crypt, rollback protection.
- Yocto/Build Systems (optional but useful): understanding of build flows for embedded Linux using Yocto or Buildroot.
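For a concrete taste of the performance-analysis work above, here is a small sketch that drives fio from Python and pulls latency figures out of its JSON output. The target path, job parameters, and percentile key are illustration-level assumptions; it presumes fio is installed and should only ever be run against a scratch file or test device, never a disk holding data you care about:

```python
import json
import subprocess

def run_fio_randread(target: str, runtime_s: int = 10) -> dict:
    """Run a short 4 KiB random-read job and return fio's parsed JSON result."""
    cmd = [
        "fio", "--name=randread", f"--filename={target}",
        "--rw=randread", "--bs=4k", "--iodepth=16", "--ioengine=libaio",
        "--direct=1", f"--runtime={runtime_s}", "--time_based",
        "--size=256M", "--output-format=json",
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return json.loads(out.stdout)

if __name__ == "__main__":
    job = run_fio_randread("/tmp/fio-testfile")["jobs"][0]["read"]
    print(f"IOPS: {job['iops']:.0f}")
    # fio reports completion-latency percentiles in nanoseconds.
    p99_ns = job["clat_ns"]["percentile"]["99.000000"]
    print(f"p99 completion latency: {p99_ns / 1e6:.2f} ms")
```

Tracking p99 completion latency across kernel or scheduler changes is a common way to catch regressions that average throughput numbers hide.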
Posted 2 weeks ago
0 years
0 Lacs
Hyderābād
On-site
Job Description

CI/CD Pipeline Engineer — Build mission-critical release pipelines for regulated industries

At Ajmera Infotech, we engineer planet-scale software with a 120-strong dev team powering NYSE-listed clients. From HIPAA-grade healthcare systems to FDA-audited workflows, our code runs where failure isn't an option.

Why You'll Love It
- TDD/BDD culture — we build defensible code from day one
- Code-first pipelines — GitHub Actions, Octopus, IaC principles
- Mentorship-driven growth — senior engineers help level you up
- End-to-end ownership — deploy what you build
- Audit-readiness baked in — work in HIPAA, FDA, SOC 2 landscapes
- Cross-platform muscle — deploy to Linux, macOS, Windows

Requirements

Key Responsibilities
- Design and maintain CI pipelines using GitHub Actions (or Jenkins/Bamboo)
- Own build and release automation across dev, staging, and prod
- Integrate with Octopus Deploy (or equivalent) for continuous delivery
- Configure pipelines for multi-platform environments
- Build compliance-resilient workflows (SOC 2, HIPAA, FDA)
- Manage source control (Git), Jira, Confluence, and build APIs
- Implement advanced deployment strategies: canary, blue-green, rollback

Must-Have Skills
- CI expertise: GitHub Actions, Jenkins, or Bamboo
- Deep understanding of build/release pipelines
- Cross-platform deployment: Linux, macOS, Windows
- Experience with compliance-first CI/CD practices
- Proficiency with Git, Jira, Confluence, and API integrations

Nice-to-Have Skills
- Octopus Deploy or similar CD tools
- Experience with containerized multi-stage pipelines
- Familiarity with feature flagging, canary releases, and rollback tactics

Benefits

What We Offer
- Competitive salary package with performance-based bonuses
- Comprehensive health insurance for you and your family
- Flexible working hours and generous paid leave
- High-end workstations and access to our in-house device lab
- Sponsored learning: certifications, workshops, and tech conferences
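To ground the canary/rollback strategies mentioned above, here is a minimal, hypothetical canary gate of the kind a release pipeline might call between deploy stages; the metrics URL, JSON field names, and thresholds are invented for illustration:

```python
import json
import sys
import time
import urllib.request

def error_rate(metrics_url: str) -> float:
    """Fetch a JSON metrics endpoint and compute the 5xx error rate."""
    with urllib.request.urlopen(metrics_url, timeout=5) as resp:
        m = json.load(resp)
    total = m["requests_total"] or 1
    return m["requests_5xx"] / total

def canary_gate(metrics_url: str, threshold: float = 0.01,
                checks: int = 10, interval_s: int = 30) -> bool:
    """Poll the canary for a soak period; fail fast if errors exceed the threshold."""
    for _ in range(checks):
        rate = error_rate(metrics_url)
        print(f"canary 5xx rate: {rate:.4f}")
        if rate > threshold:
            return False  # signal the pipeline to roll back
        time.sleep(interval_s)
    return True  # safe to promote to full traffic

if __name__ == "__main__":
    ok = canary_gate("http://canary.internal/metrics")
    sys.exit(0 if ok else 1)  # nonzero exit triggers the rollback stage
```

A pipeline step (GitHub Actions, Jenkins, or Octopus) would run this between the canary and full-rollout stages and branch on the exit code.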
Posted 2 weeks ago
0 years
5 - 6 Lacs
Hyderābād
On-site
About the SRE Team:
Site Reliability Engineering (SRE) is responsible for keeping all production systems running efficiently, including some bug fixing. SREs are a blend of pragmatic operators and software craftspeople who apply sound engineering principles, operational discipline, and mature automation to our operating environments and the P&G codebase. SREs specialize in systems (operating systems, storage subsystems, networking) while implementing standard processes for availability, reliability, and scalability, with multifaceted interests in algorithms and distributed systems.

In this role, you'll be constantly learning, staying up to date with industry trends and new technologies in data solutions. You'll have the chance to work with a variety of tools and technologies, including big data platforms, machine learning frameworks, and data visualization tools, to build innovative and effective solutions. So, if you're passionate about the possibilities of data and eager to make a real impact in the world of business, a career on the SRE team might be just what you're looking for. Join us and become a part of the future of digital transformation.

About P&G IT:
Digital is at the core of P&G's accelerated growth strategy. With this vision, IT at P&G is deeply embedded in every critical process across business organizations comprising 11+ category units globally, crafting impactful value through Transformation, Simplification & Innovation. IT at P&G is sub-divided into teams that engage strongly to revolutionize business processes and deliver outstanding value and growth: Digital GTM, Digital Manufacturing, Marketing Technologist, Ecommerce, Data Sciences & Analytics, Data Solutions & Engineering, and Product Supply.

Responsibilities:
As a Site Reliability Engineer (SRE) at P&G, you will play a crucial role in ensuring the reliability, availability, and performance of our production systems. The role blends software engineering principles with operational discipline to build scalable and highly available systems. You will collaborate with development and operations teams to implement automation, optimize costs, and troubleshoot issues as they arise.
- Oversee and maintain the smooth operation of production systems, ensuring high availability and reliability.
- Lead post-incident reviews to identify improvements in processes and systems.
- Develop monitoring and observability dashboards and alerts to provide actionable insights into system health.
- Design and implement automation solutions for routine operational tasks to improve efficiency and reduce manual intervention.
- Develop and maintain automated tests to ensure the quality and reliability of production systems.
- Analyze system performance and resource utilization to identify opportunities for cost optimization.
- Work with teams to implement best practices for prioritization and cost-efficient architecture.
- Participate in the change management process to facilitate flawless production deployments.
- Plan, execute, and supervise production deployments to ensure minimal downtime and service disruption.
- Collaborate with other teams to ensure accurate deployment strategies and rollback mechanisms are in place.
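To illustrate the availability-and-reliability side of the role, here is a small, hypothetical error-budget calculation of the sort an SRE dashboard or alert might encode; the 99.9% SLO and the request counts are example numbers, not P&G targets:

```python
def error_budget_report(slo: float, total_requests: int, failed_requests: int) -> dict:
    """Compare observed availability against an SLO and report budget burn."""
    allowed_failures = (1.0 - slo) * total_requests      # the error budget
    availability = 1.0 - failed_requests / total_requests
    burn = failed_requests / allowed_failures if allowed_failures else float("inf")
    return {
        "availability": f"{availability:.5%}",
        "budget_consumed": f"{burn:.1%}",   # >100% means the SLO is breached
        "failures_remaining": int(allowed_failures - failed_requests),
    }

# Example: a 99.9% SLO over 5M requests with 3,200 failures
# -> budget of 5,000 failed requests; 64.0% consumed, 1,800 failures remaining
print(error_budget_report(slo=0.999, total_requests=5_000_000, failed_requests=3_200))
```

Framing incidents as budget burn rather than raw error counts is what lets a team decide objectively when to freeze releases and when it is safe to ship.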
Posted 2 weeks ago
8.0 - 10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Location: Chennai (Work from Office)
Experience Level: 8-10 years
Tier: T2

We are seeking a highly skilled and experienced Senior Data Engineer to lead the design and development of scalable, secure, and high-performance data pipelines hosted on a cloud platform. The ideal candidate will have deep expertise in Databricks, Data Fabric, MDM, Informatica, and Unity Catalog, and a strong foundation in data modelling, software engineering, and DevOps practices. This role is critical to building a next-generation healthcare data platform that will power advanced analytics, operational efficiency, and business innovation.

Key Responsibilities

1. Data Pipeline Design & Development
- Translate business requirements into actionable technical specifications, defining application components, enhancement needs, data models, and integration workflows.
- Design, develop, and optimize end-to-end data pipelines using Databricks and related cloud-native tools.
- Create and maintain detailed technical design documentation and provide accurate estimations for storage, compute resources, cost efficiency, and operational readiness.
- Implement reusable and scalable ingestion, transformation, and orchestration patterns for structured and unstructured data sources.
- Ensure pipelines meet functional and non-functional requirements such as latency, throughput, fault tolerance, and scalability.

2. Cloud Platform Architecture
- Build and deploy data solutions on Microsoft Azure / Azure Fabric, leveraging Data Lake and Unity Catalog.
- Integrate pipelines with Data Fabric and Master Data Management (MDM) platforms for consistent and governed data delivery.
- Follow best practices in cloud security, encryption, access controls, and identity management.

3. Data Modeling & Metadata Management
- Design robust and extensible data models supporting analytics, AI/ML, and operational reporting.
- Ensure metadata is cataloged, documented, and accessible through Unity Catalog and MDM frameworks.
- Collaborate with data architects and analysts to ensure alignment with business requirements.

4. DevOps, CI/CD & Automation
- Adopt DevOps best practices for data pipelines, including automated testing, deployment, monitoring, and rollback strategies.
- Work closely with platform engineers to manage infrastructure as code, containerization, and CI/CD pipelines.
- Ensure compliance with enterprise SDLC, security, and data governance policies.

5. Collaboration & Continuous Improvement
- Partner with data analysts and product teams to understand data needs and translate them into technical solutions.
- Continuously evaluate and integrate new tools, frameworks, and patterns to improve pipeline performance and maintainability.

Key Skills & Technologies

Required:
- Databricks (Delta Lake, Spark, Unity Catalog)
- Azure Data Platform (Data Factory, Data Lake, Azure Functions, Azure Fabric)
- Unity Catalog for metadata and data governance
- Strong programming skills in Python and SQL
- Experience with data modeling, data warehousing, and star/snowflake schema design
- Proficiency in DevOps tools (Git, Azure DevOps, Jenkins, Terraform, Docker)

Preferred:
- Experience with healthcare or regulated-industry data environments
- Familiarity with data security standards (e.g., HIPAA, GDPR)
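For flavor, a minimal sketch of the Databricks/Delta Lake pipeline pattern described above, written against the open-source PySpark and Delta APIs; the paths, table layout, and column names are hypothetical, and an incremental load would use a merge/upsert instead of a plain append:

```python
from pyspark.sql import SparkSession, functions as F

# Assumes a Databricks-style runtime where Delta Lake support is available.
spark = SparkSession.builder.appName("claims-ingest").getOrCreate()

# Ingest raw, semi-structured claims data (hypothetical landing path).
raw = spark.read.json("/mnt/raw/claims/2025-07/*.json")

# A small, idempotent transform step: normalize types and drop bad rows.
clean = (
    raw.withColumn("claim_amount", F.col("claim_amount").cast("decimal(12,2)"))
       .withColumn("ingested_at", F.current_timestamp())
       .filter(F.col("claim_id").isNotNull())
)

# Write a partitioned Delta table so downstream consumers get ACID
# guarantees, schema enforcement, and time travel for rollback.
(clean.write.format("delta")
      .mode("append")
      .partitionBy("claim_date")
      .save("/mnt/curated/claims"))
```

In a governed setup the write would target a Unity Catalog table name rather than a raw path, so lineage and access control come along for free.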
Posted 2 weeks ago
0 years
3 - 7 Lacs
Ahmedabad
On-site
Job Description

CI/CD Pipeline Engineer — Build mission-critical release pipelines for regulated industries

At Ajmera Infotech, we engineer planet-scale software with a 120-strong dev team powering NYSE-listed clients. From HIPAA-grade healthcare systems to FDA-audited workflows, our code runs where failure isn't an option.

Why You'll Love It
- TDD/BDD culture — we build defensible code from day one
- Code-first pipelines — GitHub Actions, Octopus, IaC principles
- Mentorship-driven growth — senior engineers help level you up
- End-to-end ownership — deploy what you build
- Audit-readiness baked in — work in HIPAA, FDA, SOC 2 landscapes
- Cross-platform muscle — deploy to Linux, macOS, Windows

Requirements

Key Responsibilities
- Design and maintain CI pipelines using GitHub Actions (or Jenkins/Bamboo)
- Own build and release automation across dev, staging, and prod
- Integrate with Octopus Deploy (or equivalent) for continuous delivery
- Configure pipelines for multi-platform environments
- Build compliance-resilient workflows (SOC 2, HIPAA, FDA)
- Manage source control (Git), Jira, Confluence, and build APIs
- Implement advanced deployment strategies: canary, blue-green, rollback

Must-Have Skills
- CI expertise: GitHub Actions, Jenkins, or Bamboo
- Deep understanding of build/release pipelines
- Cross-platform deployment: Linux, macOS, Windows
- Experience with compliance-first CI/CD practices
- Proficiency with Git, Jira, Confluence, and API integrations

Nice-to-Have Skills
- Octopus Deploy or similar CD tools
- Experience with containerized multi-stage pipelines
- Familiarity with feature flagging, canary releases, and rollback tactics

Benefits
- Competitive salary package with performance-based bonuses
- Comprehensive health insurance for you and your family
- Flexible working hours and generous paid leave
- High-end workstations and access to our in-house device lab
- Sponsored learning: certifications, workshops, and tech conferences
Posted 2 weeks ago
7.0 years
3 - 7 Lacs
Bhopal
On-site
Central Square Foundation:
Founded in 2012, Central Square Foundation (CSF) is a non-profit philanthropic foundation working on the vision of ensuring quality school education for all children in India. We are driven by our mission to transform the school education system with a focus on improving children's learning outcomes, especially in low-income communities. As an organization, we aim to reduce learning poverty in India by working towards improving the outcomes of children in FLN. This involves:
- Working with national and state governments to prioritize FLN as the most critical focus area for reform.
- Establishing project management units at the national and state levels to work on critical workstreams that impact classroom practice, teacher capacity building, state monitoring systems, and system assessments.
- Investing in strategic EdTech interventions to improve FLN both at home and in school.
- Developing public goods that can be adopted by any state government for free.
- Nurturing an FLN partner ecosystem in India to foster collective action in support of national and state governments.
- Continuously exploring solutions to improve governance, such as phone-based assessments and district project management units.
To learn more about us and our work, please visit our website at www.centralsquarefoundation.org.

"FLN State Reform" in Madhya Pradesh:
CSF has developed a comprehensive playbook to support state governments in undertaking large-scale reform initiatives. This playbook involves close collaboration with the State Project Directors' offices and the development of a 5-to-7-year roadmap to fundamentally transform critical workstreams. You can find more information on this initiative by following the link provided here: Critical Workstreams for FLN.

We have been working closely with the education departments of Uttar Pradesh, Madhya Pradesh, and Haryana since 2019. A version of our initial playbook in these states was also adopted by the Ministry of Education when it launched the NIPUN Bharat mission. As of December 2024, we have achieved the following in all three states:
- Funded and operated a unique coalition in all three states, involving partner organizations such as The Education Alliance, Language and Learning Foundation, Room to Read, and Vikramshilla. Each state has members from some of these organizations working closely with CSF teams to support the state in all academic and administrative initiatives to improve FLN. Our coalition teams are well established in the three states and have worked with the state for over three academic cycles as the primary FLN partners.
- Improved all FLN materials used in all classrooms in all three states. Our efforts have introduced structured-pedagogy-based teacher's guides, student workbooks, and additional FLN materials such as big books, reading charts, and math kits in all classrooms. This was achieved by working closely with state SCERTs and receiving academic design support from our partners in the coalition.
- Introduced assessment-informed instruction in all three states. This involves setting up formative and summative assessments with high-quality assessment items and a tracking mechanism to support teachers.
- Established comprehensive continuous teacher training programs in all three states. This includes 5-7 days of face-to-face training and approximately 20-30 hours of digital training for all FLN teachers in the state.
- Established a cadre of cluster-level mentors to visit schools monthly and provide instructional support to teachers and headmasters through classroom observations and spot tests. This support is facilitated using apps in all three states, which also enables the collection of valuable data to understand implementation across all schools.
- Developed a comprehensive foundational learning monitoring system in the three states. This involves multiple apps used by mentors, teachers, and other administrators such as BEOs and DIETs, as well as a dashboard where all critical KPIs are visualized for the entire state.
- Set up a monthly review structure where the state and districts review progress in FLN using the dashboard and take actions based on the data available from all classrooms. This review structure has been established in all three states.
- Supported all three states in conducting regular sample annual endline student FLN achievement surveys, coupled with monthly dipstick sample surveys. The survey results are used to set district-level FLN goals and communicate them across the state delivery channels to drive accountability.

We believe that significant progress has been made in all three states, as all major inputs have been implemented with strong reforms known to work. In the next few years, CSF is committed to raising resources and operating in these states with a focus on stabilizing the aforementioned inputs and then guiding all districts towards a situation where the majority of students achieve FLN competence by the time they cross grade 3. This would involve the following initiatives:
- Strengthening the project management units in each state and adopting districts to drive implementation through district project management units established by the government.
- Working closely with the Mission Director to continuously iterate the design of all academic inputs based on insights from the field, and influencing the state leadership to prevent any rollback of design changes already achieved.
- Collaborating with the Mission Director to improve the quality of data collected by mentors and other stakeholders regarding classroom observations and student assessments from all districts.
- Supporting all districts in understanding their progress with respect to the mission implementation and helping them develop action plans to achieve the mission's goals.
- Working closely with the State Mission Director to strengthen the district PMUs by running FLN fellowships or placing CSF teams in select districts.

Project Lead Role:
FLN reforms in MP are managed by a State Reform Team within CSF. This team works from Bhopal, Lucknow, Delhi, and Panchkula. As we move into the deeper end of the NIPUN Bharat mission in these states, we are deepening our presence and action in districts. The Project Lead plays a vital role in driving state reform initiatives and oversees a significant portfolio of work, encompassing state-specific and central components that align with CSF's reform objectives across all three states. For example:
- Working with SCERT to develop all academic materials through material-creation workshops, reviewing quality, finalizing print-ready materials, and managing the printing and delivery process so that materials are printed and delivered to all schools in the state on time.
- Managing the system-led assessment workstream for the state. This involves developing a framework for annual sample-based baselines, spot assessments, and school-based assessments; developing relevant assessment items and tech tools to conduct the assessments at scale; sampling and selecting the schools/students; onboarding assessors from the state, training them with tools, and monitoring the execution of data collection in all districts; and finally analyzing the data and creating reports to be used by districts to evaluate the progress they are making.

Typically, a Project Lead will manage 4-5 such workstreams and will be supported by 4-6 project managers and, in some workstreams, a full-stack team from a partner organization. The Project Lead works closely with the state project lead, leaders from other coalition partners, and the State Mission Director to co-create end states and execution plans for each workstream. Based on these, the Project Lead delivers the workstream for the state.

What would make you a good fit for the role:
The Project Lead role in Bhopal is a perfect opportunity to understand how large-scale educational reform takes shape. If you are keen to participate in and influence a state's journey, this is the role for you. It offers the chance to take on a complex project and set up a high-performing team, with a span of 4-6 project managers, to drive outcomes in the context of government reform. If you are transitioning from being an individual performer to a team lead, this could be a great role. The following skills are necessary:
- Bachelor's degree from a reputed university required; Master's degree preferred.
- Prior experience working with government stakeholders preferred.
- 6 to 9 years of post-qualification work experience, preferably with a government entity, with a superb project delivery and management track record.
- Ability to analyze complex problems and craft possible solutions and recommendations.
- Action-biased, with strong planning skills and the ability to set priorities, plan, and meet timelines.
- Excellent oral and written communication skills in both English and Hindi.
- Ability to build and maintain positive, collaborative relationships with government stakeholders.
- Ability to lead a team of young professionals and drive them to achieve outcomes.
- Prior exposure to the education sector, public/development sector, or consulting preferred; people with corporate experience and an interest in the education sector are also encouraged to apply.
- Mission-driven, optimistic, and enthusiastic, believing in achieving transformational change.
- Willingness to be based at the state site, closer to the stakeholders and team.
- Openness to regular travel to Delhi and different districts in UP, MP, and Haryana.

Compensation:
Remuneration will be competitive with Indian philanthropy pay scales and will depend upon the candidate's experience level.
Posted 2 weeks ago
0.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Job Information
- Date Opened: 07/16/2025
- Industry: Technology
- Salary: 24-30 LPA
- Job Type: Full time
- State/Province: Karnataka
- Zip/Postal Code: 560048
- City: Bangalore
- Country: India

About Us
At Innover, we endeavor to see our clients become connected, insight-driven businesses. Our integrated Digital Experiences, Data & Insights, and Digital Operations studios help clients embrace digital transformation and drive unique outstanding experiences that apply to the entire customer lifecycle. Our connected studios work in tandem to reimagine the convergence of innovation, technology, people, and business agility to deliver impressive returns on investments. We help organizations capitalize on current trends and game-changing technologies, molding them into future-ready enterprises. Take a look at how each of our studios represents deep pockets of expertise and delivers on the promise of data-driven, connected enterprises.

Job Description

Role Overview:
We are looking for a hands-on Lead DevOps Engineer who can take ownership of designing, implementing, and managing scalable CI/CD and cloud infrastructure in Azure. This role demands strong technical expertise combined with leadership capabilities to drive DevOps initiatives, mentor team members, and enforce best practices across projects.

Key Responsibilities:
- Lead the design and implementation of robust CI/CD pipelines using Azure DevOps and GitHub Actions.
- Drive and manage containerized deployments leveraging Azure Kubernetes Service (AKS).
- Develop and maintain Infrastructure as Code (IaC) using Bicep and ARM templates for scalable and secure infrastructure provisioning.
- Monitor pipeline health, troubleshoot failures, and implement automated rollback and recovery strategies.
- Collaborate closely with development, QA, and architecture teams to optimize release workflows.
- Act as a technical leader and mentor for the DevOps team, ensuring adherence to industry best practices and security standards.
- Proactively identify areas for automation, improvement, and cost optimization within the cloud infrastructure.

Mandatory Skills and Experience:
- Proven hands-on experience in building and managing CI/CD pipelines using Azure DevOps and GitHub Actions.
- Strong experience in Azure Kubernetes Service (AKS) and container orchestration.
- Deep expertise in Infrastructure as Code (IaC) using Bicep and ARM templates.
- Ability to troubleshoot complex deployment and infrastructure issues across environments.
- Solid understanding of cloud security, scalability, and DevOps governance.
- Experience in leading DevOps teams, setting standards, and driving adoption of DevOps culture.
- Excellent communication, collaboration, and leadership skills.
Posted 2 weeks ago
3.0 - 5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About The Role
We are seeking a hands-on and experienced AWS DevOps Engineer with 3-5 years of cloud infrastructure and automation expertise to join our team and take ownership of DevOps for our EasyWebinar product, a live webinar platform serving a global user base. You'll be responsible for the management, security, scalability, cost optimization, and automation of our AWS infrastructure. This includes containerized applications on ECS, serverless functions on Lambda, Terraform-based automation, Docker, and full CI/CD lifecycle management. The ideal candidate has strong knowledge of Linux, AWS services, and modern DevOps practices.

Key Responsibilities

AWS Infrastructure Management & Cost Optimization
- Design, provision, and manage cloud infrastructure on AWS using ECS (Fargate/EC2), Lambda, RDS, EC2, S3, CloudFront, ELB, Route 53, and WAF
- Build secure and scalable VPC architectures, including NAT gateways, subnets, security groups, NACLs, and route tables
- Monitor usage and implement cost optimization strategies using AWS Trusted Advisor, Cost Explorer, and automation for unused resources
- Ensure high availability, disaster recovery, backup scheduling, and performance tuning for all infrastructure components

Serverless & Lambda Management
- Deploy and maintain AWS Lambda functions; optimize performance (cold starts, timeouts, memory) and troubleshoot failures
- Integrate Lambda with event-driven services such as S3, SQS, SNS, and CloudWatch Events
- Apply secure IAM roles, environment isolation, and versioning to Lambda-based workloads

Infrastructure as Code (IaC)
- Create and manage Terraform modules for provisioning and configuring all AWS services
- Maintain Git-based version control of infrastructure and enable automated promotion across environments (dev, staging, prod)

CI/CD & Automation
- Implement and manage CI/CD pipelines using GitHub Actions, GitLab CI, or Azure Pipelines
- Automate build, test, and deployment for ECS microservices, Lambda functions, and S3-based static content
- Ensure smooth and rollback-safe deployments with appropriate approvals and validations

Docker & Containerization
- Build and optimize Docker images for containerized applications
- Manage and troubleshoot container deployments in ECS (both EC2 and Fargate)
- Apply container lifecycle best practices, including multi-stage builds and secure image registries

Security & Compliance
- Apply least-privilege access policies using IAM roles, policies, and MFA
- Enforce encryption, secure key storage, and API protection with WAF and HTTPS via CloudFront
- Regularly patch systems and monitor for vulnerabilities or misconfigurations

Monitoring, Logging & Incident Management
- Use Amazon CloudWatch as the primary tool for metrics, logs, alarms, and dashboards
- Continuously improve logging standards and observability, including structured logging across Lambda, ECS, and EC2
- Set up custom metrics and log-based alerts to detect errors, bottlenecks, and anomalies
- Build and maintain operational dashboards to monitor system health
- Participate in incident response, perform root cause analysis, and implement preventive actions

Troubleshooting & Production Support
- Proactively monitor and investigate production issues across AWS, Docker, Lambda, ECS, RDS, and Linux-based systems
- Quickly identify the root cause of live issues and provide short-term mitigation and long-term fixes
- Collaborate with developers to debug application-layer and infrastructure-level problems
- Own and continuously improve incident response, resolution time, and reliability processes

Linux System Administration
- Administer and troubleshoot Linux-based EC2 instances (Amazon Linux, Ubuntu)
- Handle OS-level performance tuning, networking, and patch management
- Create system-level scripts and automation for routine DevOps tasks

Required Skills & Experience
3-5 years of experience in a DevOps or Cloud Engineering role focused on AWS, with strong expertise in:
- AWS Lambda, including deployment, optimization, and event integration
- Core AWS services: ECS, EC2, RDS, S3, CloudFront, ELB, Route 53, WAF
- Terraform for infrastructure provisioning and automation
- CI/CD pipeline design using GitHub Actions, GitLab CI, or Azure Pipelines
- Docker: image creation, deployment, and ECS orchestration
- VPC networking: NAT gateways, subnets, route tables, SGs, and NACLs
- Linux administration and shell scripting
- AWS cost analysis and optimization
- Production troubleshooting, log analysis, and root cause identification

Nice to Have
- AWS certification (e.g., DevOps Engineer - Associate, Solutions Architect - Associate)
- Experience with event-driven architectures using SNS, SQS, or Step Functions
- Exposure to real-time streaming or webinar platforms
- Familiarity with observability tools like Datadog, ELK Stack, or Prometheus
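As a small example of the "automation for unused resources" work described above, here is a hedged boto3 sketch that lists unattached EBS volumes, a common source of silent AWS spend; the region default is an assumption, and whether to delete rather than merely report is deliberately left to a real cleanup policy:

```python
import boto3

def find_unattached_volumes(region: str = "us-east-1") -> list[dict]:
    """Return id, size, and creation info for EBS volumes not attached to any instance."""
    ec2 = boto3.client("ec2", region_name=region)
    volumes = []
    paginator = ec2.get_paginator("describe_volumes")
    for page in paginator.paginate(
        Filters=[{"Name": "status", "Values": ["available"]}]  # 'available' = unattached
    ):
        for v in page["Volumes"]:
            volumes.append({
                "id": v["VolumeId"],
                "size_gib": v["Size"],
                "created": v["CreateTime"].isoformat(),
            })
    return volumes

if __name__ == "__main__":
    for vol in find_unattached_volumes():
        # Report only; a real job would check tags and snapshots before deleting.
        print(f"{vol['id']}: {vol['size_gib']} GiB, created {vol['created']}")
```

Scheduled as a Lambda or cron job, a report like this feeds the cost-optimization loop alongside Trusted Advisor and Cost Explorer.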
Posted 2 weeks ago
5.0 years
4 - 9 Lacs
Noida
On-site
Posted On: 14 Jul 2025
Location: Noida, UP, India
Company: Iris Software

Why Join Us?
Are you inspired to grow your career at one of India's Top 25 Best Workplaces in the IT industry? Do you want to do the best work of your life at one of the fastest-growing IT services companies? Do you aspire to thrive in an award-winning work culture that values your talent and career aspirations? It's happening right here at Iris Software.

About Iris Software
At Iris Software, our vision is to be our clients' most trusted technology partner, and the first choice for the industry's top professionals to realize their full potential. With over 4,300 associates across India, the U.S.A., and Canada, we help our enterprise clients thrive with technology-enabled transformation across financial services, healthcare, transportation & logistics, and professional services. Our work covers complex, mission-critical applications with the latest technologies, such as high-value complex Application & Product Engineering, Data & Analytics, Cloud, DevOps, Data & MLOps, Quality Engineering, and Business Automation.

Working at Iris
Be valued, be inspired, be your best. At Iris Software, we invest in and create a culture where colleagues feel valued, can explore their potential, and have opportunities to grow. Our employee value proposition (EVP) is about "Being Your Best", as a professional and person. It is about being challenged by work that inspires us, being empowered to excel and grow in your career, and being part of a culture where talent is valued. We're a place where everyone can discover and be their best version.

Job Description
We are looking for a skilled AI/ML Ops Engineer to join our team and bridge the gap between data science and production systems. You will be responsible for deploying, monitoring, and maintaining machine learning models and data pipelines at scale. This role involves close collaboration with data scientists, engineers, and DevOps to ensure that ML solutions are robust, scalable, and reliable.

Key Responsibilities:
- Design and implement ML pipelines for model training, validation, testing, and deployment.
- Automate ML workflows using tools such as MLflow, Kubeflow, Airflow, or similar.
- Deploy machine learning models to production (cloud) environments.
- Monitor model performance, drift, and data quality in production.
- Collaborate with data scientists to improve model robustness and deployment readiness.
- Ensure CI/CD practices for ML models using tools like Jenkins, GitHub Actions, or GitLab CI.
- Optimize compute resources and manage model versioning, reproducibility, and rollback strategies.
- Work with cloud platforms (AWS) and containerization tools like Kubernetes (AKS).
- Ensure compliance with data privacy and security standards (e.g., GDPR, HIPAA).

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5+ years of experience in DevOps, Data Engineering, or ML Engineering roles.
- Strong programming skills in Python; familiarity with R, Scala, or Java is a plus.
- Experience automating ML workflows using tools such as MLflow, Kubeflow, Airflow, or similar.
- Experience with ML frameworks like TensorFlow, PyTorch, Scikit-learn, or XGBoost.
- Experience with ML model monitoring and alerting frameworks (e.g., Evidently, Prometheus, Grafana).
- Familiarity with data orchestration and ETL/ELT tools (Airflow, dbt, Prefect).

Preferred Qualifications:
- Experience with large-scale data systems (Spark, Hadoop).
- Knowledge of feature stores (Feast, Tecton).
- Experience with streaming data (Kafka, Flink).
- Experience working in regulated environments (finance, healthcare, etc.).
- Certifications in cloud platforms or ML tools.

Soft Skills:
- Strong problem-solving and debugging skills.
- Excellent communication and collaboration with cross-functional teams.
- Adaptable and eager to learn new technologies.

Mandatory Competencies:
- Data Science and Machine Learning: AI/ML; Gen AI (LLMs, agentic AI, Gen-AI-enabled tools like GitHub Copilot); Python
- Database: SQL programming
- Cloud (AWS): TensorFlow on AWS, AWS Glue, AWS EMR, Amazon Data Pipeline, AWS Redshift
- Development Tools and Management: CI/CD
- DevOps/Configuration Management: Jenkins; GitLab, GitHub, Bitbucket
- Programming Languages: Scala
- Big Data: Hadoop; Spark
- Behavioral: communication and collaboration

Perks and Benefits for Irisians
At Iris Software, we offer world-class benefits designed to support the financial, health, and well-being needs of our associates and to help achieve harmony between their professional and personal growth. From comprehensive health insurance and competitive salaries to flexible work arrangements and ongoing learning opportunities, we're committed to providing a supportive and rewarding work environment. Join us and experience the difference of working at a company that values its employees' success and happiness.
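To make the MLflow and versioning items concrete, here is a minimal sketch of logging and registering a model run with standard MLflow and scikit-learn APIs; the experiment name, metric, and registry name are placeholders, not this team's actual setup:

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

mlflow.set_experiment("churn-model")  # placeholder experiment name

X, y = make_classification(n_samples=1_000, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))

    # Track params and metrics so runs are comparable and reproducible.
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", acc)

    # Log and register the model; a deploy job can later pull a pinned
    # version from the registry, which is what makes rollback cheap.
    mlflow.sklearn.log_model(model, "model", registered_model_name="churn-model")
```

Each registered version in the registry maps to an immutable artifact, so promotion and rollback become a pointer change rather than a retrain.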
Posted 2 weeks ago
0 years
0 Lacs
India
Remote
We’re looking for a Senior JavaScript Engineer with deep experience in vanilla JS and A/B testing platforms like VWO or Optimizely. You’ll work closely with our marketing, product, and analytics teams to design and deploy high-impact experiments that improve performance and conversion.

Location: Remote (Time zone: CST)

🔧 Key Skills We’re Looking For
• Expert-level JavaScript (ES6+), especially DOM manipulation without relying on frameworks
• Experience with VWO, Optimizely, or similar A/B testing tools
• Ability to write, inject, and debug dynamic front-end scripts
• Strong grasp of event delegation, asynchronous operations (AJAX, fetch), and browser rendering
• Proficiency with browser dev tools and script debugging
• Deep understanding of CSS selectors, cross-browser compatibility, and browser quirks

💼 What You’ll Do
• Write and maintain JavaScript snippets to modify website behaviour through VWO
• Debug and fine-tune scripts to avoid conflicts and ensure performance
• Collaborate with marketing/analytics to implement and monitor A/B tests
• Ensure seamless rollouts and effective rollback mechanisms for all experiments

🌟 Nice to Have
• Familiarity with Tag Managers (GTM, Tealium)
• Understanding of UX principles and testing methodologies
• Exposure to CRO (Conversion Rate Optimization) techniques

📢 Why Join Us?
You’ll be part of a fast-paced, experimentation-driven team that values innovation, speed, and data-backed decisions. If you're passionate about clean code and love tweaking real-world UX through clever scripts, this is your playground.
Posted 2 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Role Designation: Technology Lead
Job Location: Bangalore

Key Responsibilities
Work as a techno-functional SME to provide technical troubleshooting and product support for customers using our products.
Take ownership of user problems and be proactive when dealing with user issues.
Lead day-to-day production support for applications running in GCP (or any other cloud service) and Kubernetes environments.
Monitor application health, respond to incidents, and ensure timely resolution within SLAs.
Act as the primary escalation point for high-severity incidents and coordinate with engineering and cloud teams.
Drive root cause analysis (RCA), post-incident reviews, and long-term problem resolution.
Oversee implementation and maintenance of monitoring, alerting, and logging tools.
Maintain operational documentation, runbooks, and knowledge base for support teams.
Implement automation tools to streamline operations and reduce the frequency of errors.
Mentor and guide L1/L2 support teams and ensure effective knowledge transfer.

Technical Skills
1. Java & Application Layer
Strong knowledge of Java/J2EE applications and microservices architecture
Familiarity with REST APIs
Experience with application performance troubleshooting and profiling
2. Kubernetes (K8s)
Understanding of Kubernetes objects: Deployments, Pods, Services, ConfigMaps, Secrets, etc.
Hands-on with kubectl, Kubernetes troubleshooting, and log analysis
3. GCP (Google Cloud Platform)
Experience with key GCP services: GKE (Google Kubernetes Engine), Cloud Logging, Cloud Monitoring, Cloud SQL
4. Monitoring & Observability
Tools: Datadog or any other similar tool
Proficient in setting up alerts, dashboards, and log aggregation
Root cause analysis from logs and metrics
5. Incident & Problem Management
Strong skills in triaging production incidents and leading RCA efforts
Familiar with ITIL practices (especially Incident, Change, and Problem Management)
Tools: Jira or any other similar tool

Soft Skills & Leadership
1. Team Leadership
Leading L1/L2/L3 support teams
Incident bridge management and escalations handling
Mentoring and upskilling junior team members
2. Stakeholder Communication
Effective communication with developers, QA, DevOps, and business stakeholders
Post-incident reporting and communication
3. Operational Excellence
SLAs, SLOs, error budgets, service reliability
Defining support processes, runbooks, and knowledge bases
4. Change Management
Coordinating deployments, patches, and hotfixes
Managing go-live and rollback plans

Nice to Have:
Knowledge of SRE practices
Familiarity with security & compliance in cloud environments (e.g., vulnerability scanning, IAM best practices)
Automation using Python, Bash, or other scripting languages
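As a hedged sketch of the kubectl-style triage this role describes, the snippet below uses the official Python kubernetes client to list unhealthy pods in a namespace and pull recent logs for root-cause analysis; the namespace name is an assumption:

from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() when running inside the cluster
v1 = client.CoreV1Api()

namespace = "production"  # illustrative namespace
for pod in v1.list_namespaced_pod(namespace).items:
    phase = pod.status.phase
    if phase not in ("Running", "Succeeded"):
        print(f"{pod.metadata.name}: {phase}")
        # Tail the last 50 log lines of the pod for first-pass triage
        logs = v1.read_namespaced_pod_log(
            name=pod.metadata.name, namespace=namespace, tail_lines=50
        )
        print(logs)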
Posted 2 weeks ago
0 years
0 Lacs
India
On-site
Job description

Company Description
Evallo is a leading provider of a comprehensive SaaS platform for tutors and tutoring businesses, revolutionizing education management. With features like advanced CRM, profile management, standardized test prep, automatic grading, and insightful dashboards, we empower educators to focus on teaching. We're dedicated to pushing the boundaries of ed-tech and redefining efficient education management.

Why this role matters
Evallo is scaling from a focused tutoring platform to a modular operating system for all service businesses that bill by the hour. As we add payroll, proposals, white-boarding, and AI tooling, we need a Solution Architect who can translate product vision into a robust, extensible technical blueprint. You'll be the critical bridge between product, engineering, and customers, owning architecture decisions that keep us reliable at 5k+ concurrent users and cost-efficient at 100k+ total users.

Outcomes we expect
Map the current backend and frontend, flag structural debt, and publish an Architecture Gap Report.
Define naming and layering conventions, linter/formatter rules, and a lightweight ADR process.
Ship reference architectures for new modules.
Lead cross-team design reviews; no major feature ships without architecture sign-off.
The eventual goal is to have Evallo run in fully observable, autoscaling environments with <10% infra cost waste. Monitoring dashboards should trigger fewer than 5 false positives per month.

Day-to-day
Solution Design: Break down product epics into service contracts, data flows, and sequence diagrams. Choose the right patterns (monolith vs. microservice, event vs. REST, cache vs. DB index) based on cost, team maturity, and scale targets.
Platform-wide Standards: Codify review checklists (security, performance, observability) and enforce them via GitHub templates and CI gates. Champion a shift-left testing mindset; critical paths reach 80% automated coverage before QA touches them.
Scalability & Cost Optimization: Design load-testing scenarios that mimic 5k concurrent tutoring sessions (see the sketch below); guide DevOps on autoscaling policies and CDN strategy. Audit infra spend monthly; recommend serverless, queuing, or data-tier changes to cut waste.
Release & Environment Strategy: Maintain clear promotion paths (local → sandbox → staging → prod) with one-click rollback. Own schema-migration playbooks; zero-downtime releases are the default, not the exception.
Technical Mentorship: Run fortnightly architecture clinics; level up engineers on domain-driven design and performance profiling. Act as tie-breaker on competing technical proposals, keeping debates respectful and evidence-based.

Qualifications
5+ years of engineering experience, with 2+ years in a dedicated architecture or staff-level role on a high-traffic SaaS product.
Proven track record designing multi-tenant systems that scaled beyond 50k users or 1k RPM.
Deep knowledge of Node.js/TypeScript (our core stack) and MongoDB or similar NoSQL, plus comfort with event brokers (Kafka, NATS, or RabbitMQ).
Fluency in AWS (preferred) or GCP primitives: EKS, Lambda, RDS, CloudFront, IAM.
Hands-on with observability stacks (Datadog, New Relic, Sentry, or OpenTelemetry).
Excellent written communication; you can distill technical trade-offs into one page for execs and one diagram for engineers.
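As one possible sketch of the "5k concurrent tutoring sessions" load test referenced above, a minimal Locust scenario could look like the following; the endpoints and payload are hypothetical, and the user count is supplied at runtime (e.g. locust -f loadtest.py --users 5000 --spawn-rate 100 --host https://staging.example.com):

from locust import HttpUser, task, between

class TutoringSession(HttpUser):
    # Simulated think time between user actions, in seconds
    wait_time = between(1, 5)

    @task(3)
    def view_dashboard(self):
        self.client.get("/api/dashboard")  # hypothetical endpoint

    @task(1)
    def join_session(self):
        self.client.post("/api/sessions/join", json={"sessionId": "demo"})  # hypothetical endpoint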
Posted 2 weeks ago
0 years
0 Lacs
India
Remote
Why Join Aberdeen?
At Aberdeen Broadcast Services, we are committed to providing accessible content for all through high-quality captioning, subtitling, and translation services. As a growing company in the broadcast and media space, we offer exciting opportunities to work on diverse projects while helping make media content inclusive for a global audience.

Be Part of a Mission-Driven Team: Our work ensures that content reaches audiences who rely on accessible solutions.
Professional Growth Opportunities: Gain hands-on experience with industry-standard software, workflows, and captioning guidelines while expanding your skill set.
Work on Meaningful Content: From Christian-based and educational programs to corporate training and entertainment, your work will have a positive impact on a variety of audiences.
Live Our Values: We believe in being Team Players, holding ourselves Accountable, and being Solution-Driven. Our culture encourages collaboration, responsibility, and innovation.
Collaborative and Supportive Environment: Join a team that values accuracy, quality, and continuous improvement while supporting your professional journey.

Role Overview
We are hiring a Salesforce Developer/Engineer to join a high-performance, fast-paced product team of 4. This is not just another dev job: you'll be part of a bold initiative to transform accessibility in education using Salesforce, AI, and real-world empathy. You'll build deeply integrated systems and deliver user-centric solutions with real impact. The ideal candidate is not just experienced; they are proactive, self-directed, and actively use AI tools (like Agentforce, GitHub Copilot, Cody, etc.) to accelerate development, boost productivity, and improve quality. You'll be responsible for full-stack Salesforce development, integration, and packaging, as well as deployment, testing, release management, and configuration, while collaborating tightly with product managers, designers, and other developers.
Key Responsibilities

Salesforce Development & Architecture
Develop robust, scalable, and secure components using Apex, LWC, SOQL, and Flows
Implement Salesforce Connected Apps integrated with ReactJS, NodeJS, and ExpressJS
Design and manage custom objects, record types, layouts, validation rules, permission sets, and sharing models
Build and extend a scalable Salesforce Managed Package for AppExchange distribution
Build and customize Communities/Experience Cloud/Chatter sites for external stakeholders, with branded themes, secure access, and mobile responsiveness

AI-Augmented Engineering
Actively leverage AI tools like Agentforce, GitHub Copilot, or CodeWhisperer to streamline development, refactoring, and test generation
Participate in experimentation with agent-driven automation workflows inside Salesforce

DevOps, Deployment, and CI/CD
Use Salesforce DX for scratch orgs, packaging, and modular code organization
Manage version control and branching strategies with Git
Implement CI/CD pipelines using tools like GitHub Actions, Bitbucket Pipelines, or Azure DevOps
Handle deployments between sandboxes and production using change sets, unlocked packages, or CLI-based automation
Own the release lifecycle, ensuring smooth rollouts and rollback plans

Testing & Quality Assurance
Write and maintain robust test classes with high code coverage (95%+)
Implement unit tests, negative tests, and integration tests
Perform peer code reviews and participate in test case reviews
Troubleshoot and resolve bugs and deployment failures quickly

Integration & API Management
Design and implement integrations with external systems via REST APIs (see the sketch below)
Build RESTful APIs in Salesforce for external systems to access
Work with middleware platforms like MuleSoft, Heroku, or custom webhooks
Ensure data integrity and system sync across platforms (Salesforce, AWS, React apps)

Collaboration & Communication
Work closely with a small, agile team of developers in a remote-first environment
Contribute to architecture decisions, story breakdown, and technical planning
Participate in daily stand-ups, sprint planning, retrospectives, and design sessions
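As a hedged illustration of the external-system integration work above, the sketch below queries the standard Salesforce REST API with an OAuth bearer token using plain requests; the instance URL and token are placeholders that would be issued via a Connected App OAuth flow:

import requests

INSTANCE_URL = "https://example.my.salesforce.com"  # placeholder org URL
ACCESS_TOKEN = "<oauth-access-token>"  # placeholder; obtained via a Connected App

headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

# Run a SOQL query through the standard REST query endpoint
response = requests.get(
    f"{INSTANCE_URL}/services/data/v60.0/query",
    headers=headers,
    params={"q": "SELECT Id, Name FROM Account LIMIT 5"},
)
response.raise_for_status()
for record in response.json()["records"]:
    print(record["Id"], record["Name"])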
Posted 2 weeks ago
5.0 - 10.0 years
0 Lacs
India
Remote
Job Title: HCL Digital Experience Workflow Developer
Location: Remote (Work From Home)
Experience Required: 5 to 10 Years

Role Overview
We are looking for an HCL DX Workflow Developer to architect, configure, and implement complex content and business workflows in HCL Digital Experience. You will be responsible for building robust approval chains, content lifecycle processes, and operational automation across portal solutions.

Key Responsibilities
Design and configure HCL DX workflows for content publishing and governance
Implement role-based approval processes with versioning and rollback
Work with WCM to align workflow structures with content models
Customize workflow steps using Java, JSP, and XML
Integrate workflow tasks with notifications, API triggers, and business events
Maintain workflow templates and documentation
Collaborate with QA and content teams to test and refine workflow behavior

Must-Have Skills
Strong hands-on experience with HCL DX Workflow and WCM modules
Java/J2EE and JSP development knowledge
Understanding of workflow modeling, lifecycle states, and approvals
Familiarity with WAS, LDAP, and security configurations

Preferred Skills
Experience with BPM platforms or process orchestration tools
Understanding of WCAG and content compliance automation

Soft Skills
Analytical approach to process design
Good communicator across technical and non-technical teams
Organized and proactive in managing workflow versions

Why Join
Lead automation of enterprise content workflows
Empower editorial teams with efficient publishing systems
Enhance compliance and governance with scalable workflow solutions

Skills: J2EE, workflow modeling, XML, HCL DX Workflow, API, workflow, Java, automation, JSP, lifecycle states, approvals, WCM modules, LDAP, WAS
Posted 2 weeks ago
7.0 years
0 Lacs
Greater Kolkata Area
Remote
Azure Engineer (Remote | 7+ Years)

Description:
We are looking for an experienced and proactive Azure Engineer to join our dynamic team on a 6-month remote contract. The ideal candidate will possess a deep understanding of Microsoft Azure services, cloud-based application architecture, and modern development practices using Python and C#. You will be instrumental in designing and developing robust, scalable, and secure Azure-based solutions that meet high-performance and availability standards. This role demands not just technical expertise, but also strong communication, ownership, and a passion for continuous learning. If you're a cloud technologist who thrives in a fast-paced, collaborative environment and has hands-on experience in designing Azure-based systems with automation and CI/CD practices, this opportunity is for you.

Key Responsibilities

Cloud Architecture & Design:
Design, implement, and manage scalable cloud-native applications using Azure Functions, App Services, and other Azure offerings. Collaborate with solution architects and business stakeholders to develop robust cloud architectures aligned with enterprise standards and best practices. Translate business requirements into scalable and cost-effective Azure solutions, ensuring optimal performance, security, and availability.

Development & Integration:
Develop and maintain microservices and APIs using Python and C#. Integrate services with Azure components like Azure Blob Storage, Azure Logic Apps, Azure Event Grid, Azure Service Bus, etc. Perform code reviews, refactor code for scalability, and follow clean-code practices.

Automation & CI/CD:
Design and implement CI/CD pipelines using GitHub Actions (or similar tools) to enable continuous integration and continuous deployment across environments. Automate testing, build, deployment, and infrastructure provisioning where possible.

Performance & Monitoring:
Monitor the health, performance, and availability of applications using Azure Monitor, Application Insights, and custom logging. Troubleshoot and resolve issues across application, infrastructure, and data layers. Participate in regular release cycles and ensure deployments are executed smoothly with rollback strategies in place.

Collaboration & Documentation:
Work closely with product managers, DevOps engineers, QA, and cross-functional teams to deliver high-quality cloud solutions. Create and maintain clear and detailed technical documentation, including architecture diagrams, runbooks, and deployment guides. Provide knowledge transfer and training sessions when needed.

Qualifications & Experience:
7+ years of overall IT experience with a strong focus on cloud engineering and development.
3+ years of hands-on experience in Azure architecture and engineering, specifically with services like Azure Functions, Azure App Services, Azure Storage, and Logic Apps.
3+ years of software development experience using Python and C#, especially in cloud-native and serverless environments.
1+ year of experience in building and maintaining CI/CD pipelines, preferably using GitHub Actions.
Strong understanding of RESTful APIs, serverless computing, and modern application development patterns.
Experience working in Agile/Scrum environments with version control (Git) and task tracking (JIRA or similar).
Solid knowledge of security best practices for cloud-based applications (authentication, authorization, data encryption).
Strong problem-solving and analytical skills with a focus on performance tuning and debugging.
Excellent written and verbal communication skills.

Preferred Qualifications:
Microsoft Azure certifications (AZ-204, AZ-305, AZ-400, or equivalent).
Experience with Infrastructure-as-Code (IaC) tools such as ARM templates, Bicep, or Terraform.
Exposure to containerized environments (Docker, Azure Container Instances, or AKS).
Knowledge of cost estimation, budget optimization, and governance in Azure environments.
Familiarity with test automation frameworks and DevOps culture.
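For a concrete, if minimal, feel for the Azure Functions work this role involves, here is a hedged sketch of an HTTP-triggered function in the Python v2 programming model; the route and response are illustrative:

import azure.functions as func

app = func.FunctionApp(http_auth_level=func.AuthLevel.FUNCTION)

@app.route(route="health")
def health(req: func.HttpRequest) -> func.HttpResponse:
    # Trivial health probe; a real handler would call downstream services,
    # emit telemetry to Application Insights, and return structured JSON.
    return func.HttpResponse("OK", status_code=200)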
Posted 2 weeks ago
0 years
0 Lacs
Vijayawada, Andhra Pradesh, India
On-site
About Us
SBI Card is a leading pure-play credit card issuer in India, offering a wide range of credit cards to cater to diverse customer needs. We are constantly innovating to meet the evolving financial needs of our customers, empowering them with digital currency for a seamless payment experience and rewarding benefits. At SBI Card, the motto 'Make Life Simple' inspires every initiative, ensuring that customer convenience is at the forefront of all that we do. We are committed to building an environment where people can thrive and create a better future for everyone.

SBI Card is proud to be an equal opportunity and inclusive employer and welcomes employees without any discrimination on the grounds of race, color, gender, religion, creed, disability, sexual orientation, gender identity, marital status, caste, etc. SBI Card is committed to fostering an inclusive and diverse workplace where all employees are treated equally with dignity and respect, which makes it a promising place to work. Join us to shape the future of digital payments in India and unlock your full potential.

What's In It For YOU
SBI Card truly lives by the work-life balance philosophy. We offer a robust wellness and wellbeing program to support the mental and physical health of our employees.
Admirable work deserves to be rewarded. We have a well-curated bouquet of rewards and recognition programs for employees.
Dynamic, inclusive and diverse team culture.
Gender-neutral policy.
Inclusive health benefits for all: medical insurance, personal accident cover, group term life insurance, annual health check-up, and dental and OPD benefits.
Commitment to the overall development of employees through a comprehensive learning & development framework.

Role Purpose
Responsible for the management of all collections processes for the allocated portfolio in the assigned CD/Area, based on targets set for resolution, normalization, rollback/absolute recovery, and ROR.
Role Accountability
Conduct timely allocation of the portfolio to aligned vendors/NFTEs and conduct ongoing reviews to drive performance against business targets through an extended team of field executives and callers
Formulate tactical short-term incentive plans for NFTEs to increase productivity and drive DRR
Ensure the critical segments defined by the business are reviewed and performance is driven on them
Ensure judicious use of hardship tools and adherence to settlement waivers, both on rate and value
Conduct ongoing field visits on critical accounts and ensure proper documentation in the Collect24 system of all field visits and telephone calls to customers
Raise red flags in a timely manner based on deterioration in portfolio health indicators/frauds, and raise timely alarms on critical incidents as per the compliance guidelines
Ensure all guidelines mentioned in the SVCL are adhered to and that process hygiene is maintained at aligned agencies
Ensure 100% data security using secured data transfer modes and data purging as per policy
Ensure all customer complaints received are closed within the time frame
Conduct thorough due diligence while onboarding/offboarding/renewing a vendor, and ensure all necessary formalities are completed prior to allocating
Ensure agencies raise invoices on time
Monitor NFTE ACR CAPE as per the collection strategy

Measures of Success
Portfolio coverage
Resolution rate
Normalization/rollback rate
Settlement waiver rate
Absolute recovery
Rupees collected
NFTE CAPE
DRA certification of NFTEs
Absolute customer complaints
Absolute audit observations
Process adherence as per MOU

Technical Skills / Experience / Certifications
Credit card knowledge along with a good understanding of collection processes

Competencies critical to the role
Analytical ability
Stakeholder management
Problem solving
Result orientation
Process orientation

Qualification
Post-Graduate / Graduate in any discipline

Preferred Industry
FSI
Posted 2 weeks ago
2.0 years
0 Lacs
India
Remote
About ProCogia:
We're a diverse, close-knit team with a common pursuit of providing top-class, end-to-end data solutions for our clients. In return for your talent and expertise, you will be rewarded with a competitive salary and generous benefits, along with ample opportunity for personal development. A 'growth mindset' is something we seek in all our new hires and has helped drive much of our recent growth across North America. Our distinct approach is to push the limits of the value derived from data. Working within ProCogia's thriving environment will allow you to unleash your full career potential. The core of our culture is maintaining a high level of cultural equality throughout the company. Our diversity and differences allow us to create innovative and effective data solutions for our clients.

Our Core Values: Trust, Growth, Innovation, Excellence, and Ownership

Location: India (Remote)
Time Zone: 12pm to 9pm IST

Job Description:
We are seeking a Senior MLOps Engineer with deep expertise in AWS CDK, MLOps, and data engineering tools to join a high-impact team focused on building reusable, scalable deployment pipelines for Amazon SageMaker workloads. This role combines hands-on engineering, automation, and infrastructure expertise with strong stakeholder engagement skills. You will work closely with Data Scientists, ML Engineers, and platform teams to accelerate ML productization using best-in-class DevOps practices.

Key Responsibilities:
Design, implement, and maintain reusable CI/CD pipelines for SageMaker-based ML workflows.
Develop Infrastructure as Code using AWS CDK for scalable and secure cloud deployments (see the sketch below).
Build and manage integrations with AWS Lambda, Glue, Step Functions, and open table formats (Apache Iceberg, Parquet, etc.).
Support the MLOps lifecycle: model packaging, deployment, versioning, monitoring, and rollback strategies.
Use GitLab to manage repositories, pipelines, and infrastructure automation.
Enable logging, monitoring, and cost-effective scaling of SageMaker instances and jobs.
Collaborate closely with stakeholders across Data Science, Cloud Platform, and Product teams to gather requirements, communicate progress, and iterate on infrastructure designs.
Ensure operational excellence through well-tested, reliable, and observable deployments.

Required Skills:
2+ years of experience in MLOps, with 4+ years of experience in DevOps or Cloud Engineering, ideally with a focus on machine learning workloads.
Hands-on experience with GitLab CI pipelines, artifact scanning, vulnerability checks, and API management.
Experience in Continuous Development, Continuous Integration (CI/CD), and Test-Driven Development (TDD).
Experience in building microservices and API architectures using FastAPI, GraphQL, and Pydantic.
Proficiency in Python 3.6 or higher and experience with Python frameworks such as Pytest.
Strong experience with AWS CDK (TypeScript or Python) for IaC.
Hands-on experience with Amazon SageMaker, including pipeline creation and model deployment.
Solid command of AWS Lambda, AWS Glue, open table formats (like Iceberg/Parquet), and event-driven architectures.
Practical knowledge of MLOps best practices: reproducibility, metadata management, model drift, etc.
Experience deploying production-grade data and ML systems.
Comfortable working in a consulting/client-facing environment, with strong stakeholder management and communication skills.

Preferred Qualifications:
Experience with feature stores, ML model registries, or custom SageMaker containers.
Familiarity with data lineage, cost optimization, and cloud security best practices.
Background in ML frameworks (TensorFlow, PyTorch, etc.).

Education:
Bachelor's or master's degree in any of the following: statistics, data science, computer science, or another mathematically intensive field.

ProCogia is proud to be an equal-opportunity employer. We are committed to creating a diverse and inclusive workplace. All qualified applicants will receive consideration for employment without regard to race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status.
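As a hedged sketch of the AWS CDK work described above, the following minimal CDK v2 (Python) app defines one stack with a Lambda function of the kind a SageMaker or Step Functions workflow might invoke; the stack, function, and asset names are assumptions:

from aws_cdk import App, Duration, Stack
from aws_cdk import aws_lambda as lambda_
from constructs import Construct

class MlDeployStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # A preprocessing step a pipeline could call; handler code is
        # assumed to live in a local ./lambda directory.
        lambda_.Function(
            self,
            "PreprocessFn",
            runtime=lambda_.Runtime.PYTHON_3_11,
            handler="index.handler",
            code=lambda_.Code.from_asset("lambda"),
            timeout=Duration.minutes(5),
        )

app = App()
MlDeployStack(app, "MlDeployStack")
app.synth()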
Posted 2 weeks ago
2.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Your Future Evolves Here
Evolent partners with health plans and providers to achieve better outcomes for people with the most complex and costly health conditions. Working across specialties and primary care, we seek to connect the pieces of a fragmented health care system and ensure people get the same level of care and compassion we would want for our loved ones.

Evolent employees enjoy work/life balance, the flexibility to suit their work to their lives, and the autonomy they need to get things done. We believe that people do their best work when they're supported to live their best lives, and when they feel welcome to bring their whole selves to work. That's one reason why diversity and inclusion are core to our business.

Join Evolent for the mission. Stay for the culture.

What You'll Be Doing: MLOps Engineer
We are seeking a highly capable MLOps Engineer to join our growing AI/ML team. You will bridge the gap between data science and operations, ensuring that machine learning models are efficiently tested, deployed, monitored, and maintained in production environments. You will work closely with data scientists, software engineers, infrastructure, and development teams to build scalable and reliable ML infrastructure. You will be instrumental in supporting clinical decision-making, operational efficiency, quality outcomes, and patient care.

What You Will Be Doing:

Model Deployment and Infrastructure
Design, build, and maintain scalable, secure ML pipelines for model training, validation, deployment, and monitoring
Automate deployment workflows using CI/CD pipelines and infrastructure-as-code tools
Partner with infrastructure teams to manage (Azure) cloud-based ML infrastructure, ensuring compliance with InfoSec and AI policies
Ensure applications run at peak efficiency

Model Testing, Monitoring, and Validation
Develop rigorous testing frameworks for ML models, including clinical validation, traditional model performance measures, population segmentation, and edge-case analysis
Build monitoring systems to detect model drift, overfitting, data anomalies, and performance degradation in real time (see the sketch below)
Continuously analyze model performance metrics and operational logs to identify improvement opportunities
Translate monitoring insights into actionable recommendations for data scientists to improve model precision, recall, fairness, and efficiency

Model Transparency & Governance
Maintain detailed audit trails, logs, and metadata for all model versions, training datasets, and configurations to ensure full traceability and support internal audits
Ensure models meet transparency and explainability standards using tools like SHAP, LIME, or integrated explainability APIs
Collaborate with data scientists and clinical teams to ensure models are interpretable, actionable, and aligned with practical applications
Support corporate Compliance and AI Governance policies
Advocate for best practices in ML engineering, including reproducibility, version control, and ethical AI
Develop product guides, model documentation, and model cards for internal and external stakeholders

Required Qualifications:
Bachelor's degree in Computer Science, Machine Learning, Data Science, or a related field
2+ years of experience in MLOps, DevOps, or ML engineering
Proficiency in Python and ML frameworks such as Keras, PyTorch, Scikit-Learn, TensorFlow, and XGBoost
Experience with containerization (Docker), orchestration (Kubernetes), and CI/CD tools
Familiarity with healthcare datasets and privacy regulations
Strong analytical skills to interpret model performance data and identify optimization opportunities
Proven ability to optimize application performance, including improving code efficiency, right-sizing infrastructure usage, and reducing system latency
Experience implementing rollback strategies, including version control, rollback triggers, and safe deployment practices across lower and upper environments
2+ years of experience developing in a cloud environment (AWS, GCS, Azure)
2+ years of experience with GitHub, GitHub Actions, CI/CD, and source control
2+ years working within an Agile environment

Preferred Qualifications:
Experience with MLOps platforms like MLflow, TFX, or Kubeflow
Healthcare experience, particularly using administrative and prior authorization data
Proven experience developing and deploying ML systems into production environments
Experience working with Product, Engineering, Infrastructure, and Architecture teams
Proficiency using Azure cloud-based services and infrastructure such as Azure MLOps
Experience with feature flagging tools and strategies

To comply with HIPAA security standards (45 C.F.R. sec. 164.308(a)(3)), identity verification may be required as part of the application process. This is collected for compliance and security purposes and only reviewed if an applicant advances to the final interview stage. Reasonable accommodations are available upon request.

Technical Requirements:
We require that all employees have the following technical capability at their home: high-speed internet over 10 Mbps and, specifically for all call center employees, the ability to plug in directly to the home internet router. These at-home technical requirements are subject to change with any scheduled re-opening of our office locations.

Evolent is an equal opportunity employer and considers all qualified applicants equally without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, or disability status. If you need reasonable accommodation to access the information provided on this website, please contact recruiting@evolent.com for further assistance.

The expected base salary/wage range for this position is $. This position is also eligible for a bonus component that would be dependent on pre-defined performance factors. As part of our total compensation package, Evolent is proud to offer comprehensive benefits (including health insurance benefits) to qualifying employees. All compensation determinations are based on the skills and experience required for the position and commensurate with the experience of selected individuals, which may vary above and below the stated amounts.
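As a hedged sketch of the drift-monitoring responsibility above, the snippet below compares a feature's training distribution against live traffic with a two-sample Kolmogorov-Smirnov test; the synthetic data and the 0.05 threshold are illustrative choices, not Evolent's method:

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)  # stand-in for training data
live_feature = rng.normal(loc=0.4, scale=1.0, size=1000)   # stand-in for production traffic

statistic, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.05:  # illustrative alerting threshold
    print(f"Drift suspected: KS={statistic:.3f}, p={p_value:.4f} -> alert and consider retraining")
else:
    print(f"No significant drift: KS={statistic:.3f}, p={p_value:.4f}")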
Posted 2 weeks ago
5.0 years
3 - 8 Lacs
Hyderābād
On-site
Role: Senior DevOps Engineer
Experience: 5+ years
Location: Hyderabad / Coimbatore / Gurgaon

Key Responsibilities:
Maintain and evolve Terraform modules across core infrastructure services.
Enhance GitHub Actions and GitLab CI pipelines with policy-as-code integrations.
Automate Kubernetes secret management; transition from shared init containers to native mechanisms (see the sketch below).
Review, deploy, and manage Helm charts for service releases; own rollback reliability.
Track and resolve environment drift; automate consistency checks across staging/production.
Drive incident response tooling using Datadog and PagerDuty; actively participate in post-incident reviews.
Assist with cost-optimization initiatives through ongoing resource-sizing reviews.
Implement and monitor SLA/SLO targets for critical services to ensure operational excellence.

Skill Requirements:
We encourage candidates with a strong foundational understanding and a willingness to grow, even if not all skills are met.

Must-Have:
Minimum 5 years of hands-on experience in DevOps or Platform Engineering roles.
Deep expertise in Terraform, Terraform Cloud, and modular infrastructure design.
Production experience managing Kubernetes clusters, preferably on Google Kubernetes Engine (GKE).
Strong knowledge of CI/CD automation using GitHub Actions, ArgoCD, and Helm.
Experience securing cloud-native environments using Google Secret Manager or HashiCorp Vault.
Hands-on expertise in observability tooling (especially Datadog).
Solid grasp of GCP networking, container workload security, and service configurations.
Demonstrated ability to lead infrastructure initiatives and work cross-functionally on roadmap delivery.

Desirable:
Experience with GitOps and automated infrastructure policy enforcement.
Familiarity with service mesh, workload identity, and multi-cluster deployments.
Background in building DevOps functions or maturing legacy cloud/on-prem environments.

Tools & Expectations:
IaC - Terraform / Terraform Cloud: maintain reusable infra components; handle drift/versioning across workspaces.
CI/CD - GitHub / GitLab / GitHub Actions: build secure pipelines, create reusable workflows, integrate scanning tools.
App Packaging - Helm: manage structured app packaging; configure upgrades and rollback strategies.
Kubernetes - GKE: operate core workloads; enforce RBAC and quotas; monitor pod lifecycles.
Secrets - Google Secret Manager / Kubernetes Secrets: automate sync, monitor access, enforce namespace boundaries.
Observability - Datadog / PagerDuty: implement alerting; support incident response and escalation mapping.
Ingress & DNS - Cloudflare / DNS / WAF: manage exposure policies and ingress routing via IaC.
Security & Quality - Snyk / SonarQube / Wiz: define thresholds; enforce secure, high-quality deployments.
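As a hedged sketch of the secret-management automation above, the snippet below reads a secret from Google Secret Manager and mirrors it into a Kubernetes Secret via the official client libraries; the project, secret, and namespace names are placeholders:

import base64

from google.cloud import secretmanager
from kubernetes import client, config
from kubernetes.client.rest import ApiException

sm = secretmanager.SecretManagerServiceClient()
name = "projects/my-project/secrets/db-password/versions/latest"  # placeholder resource name
payload = sm.access_secret_version(request={"name": name}).payload.data

config.load_kube_config()
v1 = client.CoreV1Api()
secret = client.V1Secret(
    metadata=client.V1ObjectMeta(name="db-password", namespace="staging"),
    data={"password": base64.b64encode(payload).decode()},  # k8s Secret data must be base64
)

# Create-or-replace keeps the sync idempotent across runs
try:
    v1.create_namespaced_secret("staging", secret)
except ApiException as exc:
    if exc.status == 409:  # secret already exists
        v1.replace_namespaced_secret("db-password", "staging", secret)
    else:
        raise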
Posted 2 weeks ago
35.0 years
0 Lacs
Bengaluru
On-site
Company Description
Eurofins Scientific is an international life sciences company, providing a unique range of analytical testing services to clients across multiple industries, to make life and the environment safer, healthier and more sustainable. From the food you eat to the medicines you rely on, Eurofins works with the biggest companies in the world to ensure the products they supply are safe, their ingredients are authentic and labelling is accurate.

Eurofins is a global leader in food, environmental, pharmaceutical and cosmetic product testing and in agroscience CRO services. It is also one of the global independent market leaders in certain testing and laboratory services for genomics, discovery pharmacology, forensics, CDMO, advanced material sciences and in the support of clinical studies. In just over 35 years, Eurofins has grown from one laboratory in Nantes, France to 62,000 staff across a network of over 1,000 independent companies in 61 countries, operating 900 laboratories. Performing over 450 million tests every year, Eurofins offers a portfolio of over 200,000 analytical methods to evaluate the safety, identity, composition, authenticity, origin, traceability and purity of biological substances and products, as well as providing innovative clinical diagnostic testing services, as one of the leading global emerging players in specialised clinical diagnostics testing. Eurofins is one of the fastest growing listed European companies, with a listing on the French stock exchange since 1997.

Eurofins IT Solutions India Pvt Ltd (EITSI) is a fully owned subsidiary of Eurofins and functions as a Global Software Delivery Center exclusively catering to Eurofins' global IT business needs. The code shipped out of EITSI impacts the global network of Eurofins labs and services. The primary focus at EITSI is to develop the next-generation LIMS (Lab Information Management System), customer portals, e-commerce solutions, ERP/CRM systems, mobile apps and other B2B platforms for various Eurofins laboratories and businesses. Young and dynamic, we have a rich culture and we offer fulfilling careers.

Job Description
Senior Software Engineer
Eurofins IT Solutions, Bengaluru, Karnataka, India

With 54 facilities worldwide, Eurofins BioPharma Product Testing (BPT) is the largest network of bio/pharmaceutical GMP product testing laboratories, providing comprehensive laboratory services for the world's largest pharmaceutical, biopharmaceutical, and medical device companies. BPT is enabled by global engineering teams working on next-generation applications and Laboratory Information Management Systems (LIMS).

As Senior Software Engineer, you will be a crucial part of our delivery team, ensuring that production operations for the Eurofins Electronic Notebook application (part of BPT's application lab suite and one of its significant applications) are adequately supported with quick turnaround times, thereby reducing the impact on the business of application-related requests and issues. As a technology leader, BPT wants to give you the opportunity not just to accept new challenges and opportunities but to impress with your ingenuity, focus, attention to detail and collaboration with a global team of professionals. This role reports to a Deputy Manager.

Required Experience and Qualification
Experience: 4 to 7 years of experience developing end-to-end web applications using the Microsoft stack of technologies.
Strong working knowledge of web application development using .NET Core (6/7/8), C#, ASP.NET Core, MVC, WebAPI, and Postman.
Strong working knowledge of Angular 7 or above, JavaScript, TypeScript, jQuery, HTML5 and CSS3.
Good working knowledge of Cosmos DB, Elastic Search, Redis, Azure Functions, Azure DevOps, CI/CD, event-driven architecture, domain-driven architecture, microservices, MSSQL/SQL, etc.
Experience with usage of Azure DevOps.
Familiarity with UI testing and unit testing (MS Test / Jasmine / MOQ / NUnit / Karma, etc.).
Good understanding of object-oriented programming (OOP).
Able to provide technical recommendations and solve technical problems.
Working knowledge of code reviews: raising a code review, resolving review comments, and closing code reviews.
Aware of best practices in programming.
Knows how to troubleshoot complex and performance-related issues, and how to write efficient code and queries.
Working knowledge of authentication and authorization (plus OAuth2, OpenID Connect, etc.).
Should have worked on at least one SOA (Service-Oriented Architecture) project.
Should have worked in an Agile practice methodology (preferably Scrum).

Personal Skills:
Excellent analytical and problem-solving skills.
Excellent verbal and written communication skills.
Ability to articulate and present different points of view on various topics related to the project and otherwise.
Successful teamwork experience and demonstrated leadership abilities are required.
Eager to learn and continuously develop personal and technical capabilities.

Responsibilities

Advanced Troubleshooting & Issue Resolution
Investigate and resolve complex issues (or issues escalated from Level 1 and Level 2 support).
Analyze logs, application behavior, and system performance to identify root causes.
Handle incidents involving application crashes, data inconsistencies, or integration failures.

Root Cause Analysis (RCA) & Permanent Fixes
Conduct detailed RCA for recurring or high-impact issues.
Collaborate with development teams to implement long-term fixes or enhancements.

Application Monitoring & Performance Tuning
Use monitoring tools (e.g., App Insights, Grafana, Kibana) to proactively detect anomalies.
Optimize application performance and scalability.

Deployment & Release Support
Support production deployments, hotfixes, and rollback procedures.
Validate post-deployment stability and performance.

Compliance & Validation
Ensure support activities align with GxP and any other applicable regulatory requirements.
Maintain audit trails and documentation for all changes and incidents.

Collaboration & Communication
Work closely with DevOps, QA, and product teams to resolve issues.
Communicate technical findings to non-technical stakeholders when needed.

Knowledge Management
Document solutions, workarounds, and known issues in a knowledge base.
Provide guidance and training to L1/L2 teams.

Additional Information
Required Qualifications: MCA or Bachelor's in Engineering, Computer Science or equivalent.

PERFORMANCE APPRAISAL CRITERIA:
Eurofins has a strong focus on its performance management system, which includes quarterly calibrations, half-yearly reviews and annual reviews.
The KPIs will be set per project and may vary slightly between projects; they will be clearly communicated and documented during your first 30 days.
Posted 2 weeks ago