6.0 years
2 - 6 Lacs
Noida
On-site
About Foxit
Foxit is a global software company reshaping how the world interacts with documents. With over 700 million users worldwide, we offer cutting-edge PDF, collaboration, and e-signature solutions across desktop, mobile, and cloud platforms. As we expand our SaaS and cloud-native capabilities, we're seeking a technical leader who thrives in distributed environments and can bridge the gap between development and operations at global scale.
Role Overview
As a Senior Development Support Engineer, you will serve as a key technical liaison between Foxit’s global production environments and our China-based development teams. Your mission is to ensure seamless cross-border collaboration by investigating complex issues, facilitating secure and compliant debugging workflows, and enabling efficient delivery through modern DevOps and cloud infrastructure practices. This is a hands-on, hybrid role requiring deep expertise in application development, cloud operations, and diagnostic tooling. You'll work across production environments to maintain business continuity, support rapid issue resolution, and empower teams working under data access and sovereignty constraints.
Key Responsibilities
Cross-Border Development Support: Investigate complex, high-priority production issues inaccessible to China-based developers. Build sanitized diagnostic packages and test environments to enable effective offshore debugging. Lead root cause analysis for customer-impacting issues across our Java and PHP-based application stack. Document recurring patterns and technical solutions to improve incident response efficiency. Partner closely with China-based developers to maintain architectural alignment and system understanding.
Cloud Infrastructure & DevOps: Manage containerized workloads (Docker/Kubernetes) in AWS and Azure; optimize performance and cost. Support deployment strategies (blue-green, canary, rolling) and troubleshoot CI/CD pipelines (Jenkins, GitHub Actions, GitLab CI). Implement and manage Infrastructure as Code using Terraform (multi-cloud), with CloudFormation or ARM Templates as a plus. Support observability through tools like New Relic, CloudWatch, Azure Monitor, and log aggregation systems. Automate environment provisioning, monitoring, and diagnostics using Python, Bash, and PowerShell.
Collaboration & Communication: Translate production symptoms into actionable debugging tasks for teams without access to global environments. Work closely with database, QA, and SRE teams to resolve infrastructure or architectural issues. Ensure alignment with global data compliance policies (SOC 2, NSD-104, GDPR) when sharing data across borders. Communicate technical issues and resolutions clearly to both technical and non-technical stakeholders.
Qualifications
Technical Skills: Languages: Advanced in Java and PHP (Spring Boot, Yii); familiarity with JavaScript a plus. Architecture: Experience designing and optimizing backend microservices and APIs. Cloud Platforms: Hands-on with AWS (EC2, Lambda, RDS) and Azure (VMs, Functions, SQL DB). Containerization: Docker & Kubernetes (EKS/AKS); Helm experience a plus. IaC & Automation: Proficient in Terraform; scripting with Python/Bash. DevOps: Familiar with modern CI/CD pipelines; automated testing (Cypress, Playwright). Databases & Messaging: MySQL, MongoDB, Redis, RabbitMQ.
Professional Experience: 6+ years of full-stack or backend development experience in high-concurrency systems. Strong understanding of system design, cloud infrastructure, and global software deployment practices. Experience working in global, distributed engineering teams with data privacy or access restrictions.
Preferred: Exposure to compliance frameworks (SOC 2, GDPR, NSD-104, ISO 27001, HIPAA). Familiarity with cloud networking, CDN configuration, and cost optimization strategies. Tools experience with Postman, REST Assured, or security testing frameworks. Language: Fluency in English; Mandarin Chinese is a strong plus.
Why Foxit?
Work at the intersection of development and operations on a global scale. Be a trusted technical enabler for distributed teams facing real-world constraints. Join a high-impact team modernizing cloud infrastructure for enterprise-grade document solutions. Competitive compensation, professional development programs, and a collaborative culture.
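Since this role centers on building sanitized diagnostic packages for offshore debugging, here is a minimal illustrative sketch of that kind of tooling. It is not Foxit's actual process; the redaction patterns and file names are assumptions.

```python
import re
from pathlib import Path

# Hypothetical redaction rules; a real compliance workflow (SOC 2 / GDPR / NSD-104)
# would use a vetted, audited rule set rather than this minimal example.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),             # email addresses
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "<IP>"),            # IPv4 addresses
    (re.compile(r"(?i)(authorization: bearer )\S+"), r"\1<TOKEN>"),  # auth tokens
]

def sanitize(src: Path, dst: Path) -> int:
    """Copy a log file, replacing sensitive matches; return the redaction count."""
    count = 0
    with src.open() as fin, dst.open("w") as fout:
        for line in fin:
            for pattern, repl in REDACTIONS:
                line, n = pattern.subn(repl, line)
                count += n
            fout.write(line)
    return count

if __name__ == "__main__":
    redacted = sanitize(Path("app.log"), Path("app.sanitized.log"))
    print(f"redacted {redacted} sensitive values")
```

A production workflow would pair a rule set like this with human review and audit steps before any cross-border transfer.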
Posted 4 hours ago
3.0 - 5.0 years
5 - 8 Lacs
Noida
On-site
Position: DevOps Developer (NV35FCT RM 3313)
Job Description: Design, deploy, and manage cloud infrastructure using AWS (EC2, VPC, ECS, Load Balancers, Auto Scaling Groups, EBS, EFS, FSx, S3, Transit Gateway, Lambda, API Gateway, CloudFront, WAF, IAM, CloudWatch, Route 53, AWS Transfer Family, OpenSearch). Drive AWS cost optimization initiatives, including resource right-sizing, reserved instance planning, and cloud usage analysis. Build and manage containerized applications using Docker and ECS. Automate infrastructure provisioning and configuration using Terraform and Ansible. Develop scripts and tools in Python and Shell to automate operational tasks. Implement and maintain CI/CD pipelines using Jenkins, GitHub Actions, and Git. Manage and troubleshoot Linux systems (RHEL, Ubuntu, Amazon Linux) and Windows environments. Work with Active Directory (AD) for user and access management, integrating with cloud infrastructure. Monitor system performance, availability, and security using AWS native tools and best practices. Collaborate with cross-functional teams to support development, testing, and production environments.
Job Category: Digital_Cloud_Web Technologies Job Type: Full Time Job Location: Noida Experience: 3 - 5 years Notice period: 0-30 days
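Cost-optimization duties like the right-sizing work above are commonly scripted with boto3. The following is a hedged sketch that flags running EC2 instances whose daily average CPU stayed low for two weeks; the threshold, lookback window, and the premise that low CPU alone marks a right-sizing candidate are all simplifying assumptions.

```python
from datetime import datetime, timedelta, timezone

import boto3

CPU_THRESHOLD = 10.0   # % average CPU below which we flag an instance (assumption)
LOOKBACK_DAYS = 14

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")
end = datetime.now(timezone.utc)
start = end - timedelta(days=LOOKBACK_DAYS)

paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
):
    for reservation in page["Reservations"]:
        for inst in reservation["Instances"]:
            stats = cloudwatch.get_metric_statistics(
                Namespace="AWS/EC2",
                MetricName="CPUUtilization",
                Dimensions=[{"Name": "InstanceId", "Value": inst["InstanceId"]}],
                StartTime=start, EndTime=end,
                Period=86400, Statistics=["Average"],
            )
            points = [p["Average"] for p in stats["Datapoints"]]
            if points and max(points) < CPU_THRESHOLD:
                print(f"{inst['InstanceId']} ({inst['InstanceType']}): "
                      f"peak daily avg CPU {max(points):.1f}%, right-sizing candidate")
```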
Posted 4 hours ago
10.0 years
8 - 10 Lacs
Lucknow
On-site
Job Title: Linux System Engineer (Tomcat/Apache/Patch Management)
Location: Lucknow Work Mode: Onsite (work from office), government project (CMMI Level 3 company) Experience: 10+ years
Key Responsibilities: Administer, monitor, and troubleshoot Linux servers (RHEL/CentOS/Ubuntu) in production and staging environments. Configure, deploy, and manage Apache HTTP Server and Apache Tomcat applications. Perform regular patching, upgrades, and vulnerability remediation across Linux systems to maintain security compliance. Ensure availability, reliability, and performance of all server components. Maintain server hardening and compliance based on organization and industry standards. Automate routine tasks using shell scripting (Bash, Python preferred). Monitor system health using tools like Nagios, Zabbix, or similar. Collaborate with DevOps and Development teams for deployment and release planning. Support CI/CD pipelines and infrastructure provisioning (exposure to Jenkins, Ansible, Docker, Git, etc.). Document system configurations, procedures, and policies.
Required Skills & Qualifications: 8-10 years of hands-on experience in Linux Systems Administration. Strong expertise in Apache and Tomcat setup, tuning, and management. Experience with patch management tools (e.g., YUM, APT, Satellite, WSUS). Proficient in shell scripting (Bash, Python preferred). Familiarity with DevOps tools like Jenkins, Ansible, Git, Docker, etc. Experience in infrastructure monitoring and alerting tools. Strong troubleshooting and problem-solving skills. Understanding of basic networking and firewalls. Bachelor's degree in Computer Science, Information Technology, or related field.
Preferred: Exposure to cloud platforms (AWS, Azure, GCP). Certification in Red Hat (RHCE/RHCSA) or Linux Foundation. Experience with infrastructure-as-code (Terraform, CloudFormation good to have).
Job Types: Full-time, Permanent Pay: ₹800,000.00 - ₹1,000,000.00 per year Schedule: Day shift Work Location: In person Speak with the employer +91 9509902875
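Routine patch auditing like the responsibilities above is often wrapped in a small script. This is an illustrative sketch, not a prescribed tool: the hostnames are hypothetical, and it assumes RHEL-family hosts where `dnf check-update` uses its documented exit-code convention (100 when updates are available, 0 when none, 1 on error).

```python
import subprocess

HOSTS = ["web01.example.local", "app02.example.local"]  # hypothetical inventory

def pending_security_updates(host: str) -> bool:
    """Return True if the host reports pending security updates over SSH."""
    result = subprocess.run(
        ["ssh", host, "sudo dnf -q check-update --security"],
        capture_output=True, text=True,
    )
    if result.returncode == 100:   # updates available
        return True
    if result.returncode == 0:     # nothing pending
        return False
    raise RuntimeError(f"{host}: check failed: {result.stderr.strip()}")

for host in HOSTS:
    status = "PATCH NEEDED" if pending_security_updates(host) else "up to date"
    print(f"{host}: {status}")
```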
Posted 4 hours ago
3.0 years
0 Lacs
India
Remote
Ready to be pushed beyond what you think you’re capable of? At Coinbase, our mission is to increase economic freedom in the world. It’s a massive, ambitious opportunity that demands the best of us, every day, as we build the emerging onchain platform — and with it, the future global financial system. To achieve our mission, we’re seeking a very specific candidate. We want someone who is passionate about our mission and who believes in the power of crypto and blockchain technology to update the financial system. We want someone who is eager to leave their mark on the world, who relishes the pressure and privilege of working with high caliber colleagues, and who actively seeks feedback to keep leveling up. We want someone who will run towards, not away from, solving the company’s hardest problems. Our work culture is intense and isn’t for everyone. But if you want to build the future alongside others who excel in their disciplines and expect the same from you, there’s no better place to be. While many roles at Coinbase are remote-first, we are not remote-only. In-person participation is required throughout the year. Team and company-wide offsites are held multiple times annually to foster collaboration, connection, and alignment. Attendance is expected and fully supported. Team Coinbase is seeking a software engineer to join our India pod to drive the launch and growth of Coinbase in India. You will solve unique, large-scale, highly complex technical problems. You will help build the next generation of systems to make cryptocurrency accessible to everyone across multiple platforms (web, iOS, Android), operating real-time applications with high frequency and low latency updates, keeping the platform safe from fraud, enabling delightful experiences, and managing the most secure, containerized infrastructure running in the cloud. What you’ll be doing (i.e., job duties): Build high-performance services using Golang and gRPC, creating seamless integrations that elevate Coinbase's customer experience. Adopt, learn, and drive best practices in design techniques, coding, testing, documentation, monitoring, and alerting. Demonstrate a keen awareness of Coinbase’s platform, development practices, and various technical domains, and build upon them to efficiently deliver improvements across multiple teams. Add positive energy in every meeting and make your coworkers feel included in every interaction. Communicate across the company to both technical and non-technical leaders with ease. Deliver top-quality services in a tight timeframe by navigating seamlessly through uncertainties. Work with teams and teammates across multiple time zones. What we look for in you (i.e., job requirements): 3+ years of experience as a software engineer and 1+ years building backend services using Golang and gRPC. A self-starter capable of executing complex solutions with minimal guidance while ensuring efficiency and scalability. Proven experience integrating at least two third-party applications using Golang. Hands-on experience with AWS, Kubernetes, Terraform, Buildkite, or similar cloud infrastructure tools. Working knowledge of event-driven architectures (Kafka, MQ, etc.) and hands-on experience with SQL or NoSQL databases. Good understanding of gRPC, GraphQL, ETL pipelines, and modern development practices. Nice to haves: SaaS platform experience (Salesforce, Amazon Connect, Sprinklr). Experience with AWS, Kubernetes, Terraform, GitHub Actions, or similar tools. 
Familiarity with rate limiters, caching, metrics, logging, and debugging. Req ID - GCBE04IN Please be advised that each candidate may submit a maximum of four applications within any 30-day period. We encourage you to carefully evaluate how your skills and interests align with Coinbase's roles before applying. Commitment to Equal Opportunity Coinbase is committed to diversity in its workforce and is proud to be an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, creed, gender, national origin, age, disability, veteran status, sex, gender expression or identity, sexual orientation or any other basis protected by applicable law. Coinbase will also consider for employment qualified applicants with criminal histories in a manner consistent with applicable federal, state and local law. For US applicants, you may view the Know Your Rights notice here. Additionally, Coinbase participates in the E-Verify program in certain locations, as required by law. Coinbase is also committed to providing reasonable accommodations to individuals with disabilities. If you need a reasonable accommodation because of a disability for any part of the employment process, please contact us at accommodations[at]coinbase.com to let us know the nature of your request and your contact information. For quick access to screen reading technology compatible with this site click here to download a free compatible screen reader (free step by step tutorial can be found here). Global Data Privacy Notice for Job Candidates and Applicants Depending on your location, the General Data Protection Regulation (GDPR) and California Consumer Privacy Act (CCPA) may regulate the way we manage the data of job applicants. Our full notice outlining how data will be processed as part of the application procedure for applicable locations is available here. By submitting your application, you are agreeing to our use and processing of your data as required. For US applicants only, by submitting your application you are agreeing to arbitration of disputes as outlined here.
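The posting's stack is Golang/gRPC; purely as a language-neutral illustration of the rate-limiter concept listed in the nice-to-haves, here is a minimal token-bucket sketch in Python. The rate and capacity values are arbitrary.

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: `rate` tokens/sec, bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.updated = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)   # ~5 requests/sec, bursts of 10
allowed = sum(bucket.allow() for _ in range(20))
print(f"{allowed} of 20 burst requests admitted")
```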
Posted 4 hours ago
5.0 years
8 - 9 Lacs
Calcutta
On-site
Description Summary: We are looking for a DevOps Engineer to join a globally distributed Development Department following an Agile Software Development and Release Methodology. This position works with many technologies such as Azure Cloud Services, Windows Administration, Azure Networking, Azure Firewall, microservice infrastructure, and Docker. The ideal candidate will be an energetic learner and enjoy sharing knowledge within the team via training sessions or documentation creation (preferably well versed in .md and .yml files).
Role: Design, develop, maintain, and support high-quality in-house software build systems for enterprise-class software. Participate in SRE practice working sessions and adopt and implement best practices in the respective fields. Develop and maintain IaC through Terraform, PowerShell, and Linux shell scripting. Define the networking and firewall rules for achieving the business goals. Define strategy for source code control through GitHub and build-and-deploy pipelines through GitHub Actions; understanding the GitHub auth model would be a plus. Work with containerization (e.g., Docker, AKS). Work with Azure PaaS (e.g., Azure App Service, Azure Blob Storage, Cosmos DB, Azure Functions, etc.). Ensure systems can accommodate growth in our delivery needs by understanding the project requirements during the SDLC process, and monitor applications for high availability. Define monitoring and alerting best practices based on Site Reliability Engineering. Proficient in Azure Log Analytics and App Insights handling through KQL queries. Analyze application and server logs for troubleshooting C#-based applications. Should be well versed in the RBAC model of Azure services. Manage security certificates/keystores, tracking and updating certificates based on the established process. Availability via email, telephone, or any device that may be assigned in order to be part of a pager-duty rotation, which might extend over weekends as well.
Qualifications Requirements: BE, BTech, or MCA as educational qualification. 5+ years' experience in DevOps/SRE concepts. Experience in Agile software development process. Should possess good hands-on expertise in Terraform, PowerShell, and Linux shell scripting. Should be hands-on with GitHub and GitHub Actions for building different pipelines; understanding the GitHub auth model would be a plus. Working with containerization (e.g., Docker, AKS). Should be well versed in the RBAC model of Azure services. Proficient in Azure Log Analytics and App Insights handling through KQL queries.
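For the KQL-through-Log-Analytics proficiency this posting asks for, here is a hedged sketch of running a KQL query from Python with the azure-monitor-query SDK. The workspace ID is a placeholder, and the `AppExceptions` table is an assumption (it applies to workspace-based Application Insights).

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<log-analytics-workspace-id>"  # placeholder

# KQL: count exceptions per hour over the last day.
QUERY = """
AppExceptions
| summarize exceptions = count() by bin(TimeGenerated, 1h)
| order by TimeGenerated desc
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(WORKSPACE_ID, QUERY, timespan=timedelta(days=1))

for table in response.tables:
    for row in table.rows:
        print(dict(zip(table.columns, row)))
```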
Posted 4 hours ago
5.0 years
0 Lacs
Dharmapuri, Tamil Nadu, India
Remote
As a Senior Help Desk Technician at Lightcast, you will be a critical part of our IT support team, providing technical assistance and support to employees. This career-level role is designed for an experienced IT professional with a deep understanding of IT systems, excellent problem-solving skills, and a passion for delivering exceptional customer service. You will lead technical initiatives and mentor junior team members. Major Responsibilities: Technical Support: Resolve complex hardware, software, and system issues for end-users across platforms (Windows, macOS, Linux). Incident Management: Lead incident response, ensuring timely resolution and escalation when necessary. Knowledge Base: Contribute to and maintain documentation of known issues, best practices, and troubleshooting guides. Problem Ownership: Take initiative in resolving challenging technical problems, collaborating across IT teams as needed. Documentation: Accurately record all support interactions and resolutions in the helpdesk system. Security Compliance: Enforce and support company-wide IT security policies and compliance standards. Procurement & Licensing: Manage purchases of hardware/software, license renewals, and subscription tracking. Asset Management: Oversee inventory and lifecycle management of all IT assets. Skills/Abilities: 5+ years of hands-on IT support experience with a focus on troubleshooting and issue resolution. Strong knowledge of Windows OS and Microsoft Office; familiarity with macOS and Linux environments. Proven problem-solving abilities with strong attention to detail. Excellent communication and interpersonal skills. Experience with asset management and support tools (e.g., ticketing systems, remote support tools). Familiarity with cloud environments (AWS preferred) and infrastructure-as-code tools (e.g., Terraform, Pulumi). Knowledge of ITIL, ISO 27001, and accessibility standards (e.g., WCAG) is a plus. Proficiency in automation scripting (e.g., Python, PowerShell, JavaScript) is highly desirable. Education and Experience: Bachelor’s degree in IT, Computer Science, or a related field. IT certifications (e.g., CompTIA A+, Network+, Microsoft Certified) strongly preferred. 5+ years of experience in IT support, with a strong background in troubleshooting hardware and software issues. Lightcast is a global leader in labor market insights with headquarters in Moscow (ID) and Boston (MA) and offices in the United Kingdom, Europe, and India. We work with partners across six continents to help drive economic prosperity and mobility by providing the insights needed to build and develop our people, our institutions and companies, and our communities. Lightcast is proud to be an equal opportunity workplace and is committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. Lightcast has always been, and always will be, committed to diversity, equity and inclusion. We seek dynamic professionals from all backgrounds to join our teams, and we encourage our employees to bring their authentic, original, and best selves to work.
Posted 4 hours ago
1.0 years
0 - 0 Lacs
Indore
On-site
Responsibilities: Develop and maintain infrastructure as code (IaC) to support scalable and secure infrastructure. Collaborate with the development team to streamline and optimize the continuous integration and deployment pipeline. Manage and administer Linux systems, ensuring reliability and security. Configure and provision cloud resources on AWS, Google Cloud, or Azure as required. Implement and maintain containerized environments using Docker and orchestration with Kubernetes. Monitor system performance and troubleshoot issues to ensure optimal application uptime. Stay updated with industry best practices, tools, and DevOps methodologies. Enhance software development processes through automation and continuous improvement initiatives. Requirements: Degree(s): B.Tech/BE (CS, IT, EC, EI) or MCA. Eligibility: Open to 2021, 2022, and 2023 graduates and postgraduates only. Expertise in Infrastructure as Code (IaC) with tools like Terraform and CloudFormation. Proficiency in software development using languages such as Python, Bash, and Go. Experience in Continuous Integration with tools such as Jenkins, Travis CI, and CircleCI. Strong Linux system administration skills. Experience in provisioning, configuring, and managing cloud resources (AWS, Google Cloud Platform, or Azure). Excellent verbal and written communication skills. Experience with containerization and orchestration tools such as Docker and Kubernetes. Job Type: Full-time Pay: ₹45,509.47 - ₹85,958.92 per month Benefits: Health insurance Schedule: Day shift Ability to commute/relocate: Indore, Madhya Pradesh: Reliably commute or planning to relocate before starting work (Preferred) Education: Bachelor's (Preferred) Experience: Python: 1 year (Preferred) AI/ML: 1 year (Preferred) Location: Indore, Madhya Pradesh (Preferred) Work Location: In person
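IaC duties like those listed above are often wrapped in small drift-check scripts. This illustrative sketch (the module path is an assumption) uses Terraform's documented `-detailed-exitcode` convention: exit 0 means no changes, 2 means pending changes, anything else is an error.

```python
import subprocess
import sys

def plan_has_drift(workdir: str) -> bool:
    """Run `terraform plan` and report drift via the -detailed-exitcode convention."""
    result = subprocess.run(
        ["terraform", "plan", "-detailed-exitcode", "-input=false", "-no-color"],
        cwd=workdir, capture_output=True, text=True,
    )
    if result.returncode == 0:
        return False          # infrastructure matches the configuration
    if result.returncode == 2:
        return True           # changes are pending
    sys.exit(f"terraform plan failed:\n{result.stderr}")

if plan_has_drift("./infra/prod"):   # hypothetical module path
    print("Drift detected: review the plan before the next release")
```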
Posted 4 hours ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
FICO (NYSE: FICO) is a leading global analytics software company, helping businesses in 100+ countries make better decisions. Join our world-class team today and fulfill your career potential! The Opportunity “A DevOps role at FICO is an opportunity to work with cutting edge cloud technologies with a team focused on delivery of secure cloud solutions and products to enterprise customers.” - VP, DevOps Engineering What You’ll Contribute Design, implement, and maintain Kubernetes clusters in AWS environments. Develop and manage CI/CD pipelines using Tekton, ArgoCD, Flux or similar tools. Implement and maintain observability solutions (monitoring, logging, tracing) for Kubernetes-based applications. Collaborate with development teams to optimize application deployments and performance on Kubernetes. Automate infrastructure provisioning and configuration management using AWS services and tools. Ensure security and compliance in the cloud infrastructure. What We’re Seeking Proficiency in Kubernetes administration and deployment, particularly in AWS (EKS). Experience with AWS services such as EC2, S3, IAM, ACM, Route 53, ECR. Experience with Tekton for building CI/CD pipelines. Strong understanding of observability tools like Prometheus, Grafana or similar. Scripting and automation skills (e.g., Bash, GitHub workflows). Knowledge of cloud platforms and container orchestration. Experience with infrastructure as code tools (Terraform, CloudFormation). Knowledge of Helm. Understanding of security best practices in cloud and Kubernetes environments. Proven experience in delivering microservices and Kubernetes-based systems. Our Offer to You An inclusive culture strongly reflecting our core values: Act Like an Owner, Delight Our Customers and Earn the Respect of Others. The opportunity to make an impact and develop professionally by leveraging your unique strengths and participating in valuable learning experiences. Highly competitive compensation, benefits and rewards programs that encourage you to bring your best every day and be recognized for doing so. An engaging, people-first work environment offering work/life balance, employee resource groups, and social events to promote interaction and camaraderie. Why Make a Move to FICO? At FICO, you can develop your career with a leading organization in one of the fastest-growing fields in technology today – Big Data analytics. You’ll play a part in our commitment to help businesses use data to improve every choice they make, using advances in artificial intelligence, machine learning, optimization, and much more. FICO makes a real difference in the way businesses operate worldwide: Credit Scoring — FICO® Scores are used by 90 of the top 100 US lenders. Fraud Detection and Security — 4 billion payment cards globally are protected by FICO fraud systems. Lending — 3/4 of US mortgages are approved using the FICO Score. Global trends toward digital transformation have created tremendous demand for FICO’s solutions, placing us among the world’s top 100 software companies by revenue. We help many of the world’s largest banks, insurers, retailers, telecommunications providers and other firms reach a new level of success. Our success is dependent on really talented people – just like you – who thrive on the collaboration and innovation that’s nurtured by a diverse and inclusive environment. We’ll provide the support you need, while ensuring you have the freedom to develop your skills and grow your career. Join FICO and help change the way business thinks! 
Learn more about how you can fulfil your potential at www.fico.com/Careers FICO promotes a culture of inclusion and seeks to attract a diverse set of candidates for each job opportunity. We are an equal employment opportunity employer and we’re proud to offer employment and advancement opportunities to all candidates without regard to race, color, ancestry, religion, sex, national origin, pregnancy, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. Research has shown that women and candidates from underrepresented communities may not apply for an opportunity if they don’t meet all stated qualifications. While our qualifications are clearly related to role success, each candidate’s profile is unique and strengths in certain skill and/or experience areas can be equally effective. If you believe you have many, but not necessarily all, of the stated qualifications we encourage you to apply. Information submitted with your application is subject to the FICO Privacy policy at https://www.fico.com/en/privacy-policy
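Given the observability stack this FICO posting names (Prometheus, Grafana), here is a minimal hedged sketch of exposing custom metrics from a Python service with the prometheus_client library; the metric names, port, and simulated workload are arbitrary.

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Example metrics; a real service would instrument actual request paths.
REQUESTS = Counter("app_requests_total", "Total requests handled", ["status"])
LATENCY = Histogram("app_request_seconds", "Request latency in seconds")

def handle_request() -> None:
    with LATENCY.time():
        time.sleep(random.uniform(0.01, 0.1))   # simulated work
    REQUESTS.labels(status="200").inc()

if __name__ == "__main__":
    start_http_server(8000)   # Prometheus scrapes http://localhost:8000/metrics
    while True:
        handle_request()
```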
Posted 4 hours ago
14.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
DevOps Manager
Location: Ahmedabad/Hyderabad Exp: 14+ years
Experience Required: 14+ years total experience, with 4–5 years in managerial roles.
Technical Knowledge and Skills:
Mandatory: Cloud: GCP (complete stack, from IAM to GKE). CI/CD: End-to-end pipeline ownership (GitHub Actions, Jenkins, Argo CD). IaC: Terraform, Helm. Containers: Docker, Kubernetes. DevSecOps: Vault, Trivy, OWASP.
Nice to Have: FinOps exposure for cost optimization. Big Data tools familiarity (BigQuery, Dataflow). Familiarity with Kong, Anthos, Istio.
Scope: Lead DevOps team across multiple pods and products. Define roadmap for automation, security, and CI/CD. Ensure operational stability of deployment pipelines.
Roles and Responsibilities: Architect and guide implementation of enterprise-grade CI/CD pipelines that support multi-environment deployments, microservices architecture, and zero-downtime delivery practices. Oversee Infrastructure-as-Code initiatives to establish consistent and compliant cloud provisioning using Terraform, Helm, and policy-as-code integrations. Champion DevSecOps practices by embedding security controls throughout the pipeline, ensuring image scanning, secrets encryption, policy checks, and runtime security enforcement. Lead and manage a geographically distributed DevOps team, setting performance expectations, development plans, and engagement strategies. Drive cross-functional collaboration with engineering, QA, product, and SRE teams to establish integrated DevOps governance practices. Develop a framework for release readiness, rollback automation, change control, and environment reconciliation processes. Monitor deployment health, release velocity, lead time to recovery, and infrastructure cost optimization through actionable DevOps metrics dashboards. Serve as the primary point of contact for C-level stakeholders during major infrastructure changes, incident escalations, or audits. Own the budgeting and cost management strategy for DevOps tooling, cloud consumption, and external consulting partnerships. Identify, evaluate, and onboard emerging DevOps technologies, ensuring team readiness through structured onboarding, POCs, and knowledge sessions. Foster a culture of continuous learning, innovation, and ownership, driving internal tech talks, hackathons, and community engagement.
Posted 4 hours ago
4.0 - 8.0 years
5 - 15 Lacs
Bengaluru
Work from Office
Job Summary: We are seeking a highly skilled MuleSoft Developer with 5-7 years of hands-on experience in designing, developing, and managing APIs and integrations using MuleSoft Anypoint Platform. The ideal candidate should have a strong background in enterprise integration patterns, API-led connectivity, and cloud-native architecture. You will work closely with cross-functional teams to deliver scalable integration solutions that meet the organization's strategic goals and technical standards.
Key Responsibilities: Design, build, and maintain APIs using MuleSoft Anypoint Platform. Develop and deploy MuleSoft applications in on-prem, cloud, or hybrid environments. Create RAML-based API specifications, perform data transformations using DataWeave, and configure third-party system connectors. Collaborate with business analysts, architects, and QA teams to define and implement integration solutions. Develop reusable components and frameworks to support scalable integration architecture. Participate in code reviews, unit testing, and CI/CD pipeline integration. Implement error handling, logging, and monitoring. Troubleshoot and optimize deployed MuleSoft applications. Maintain documentation and follow best practices and security policies.
Required Skills & Qualifications: 5-7 years of total development experience, with at least 3+ years of hands-on MuleSoft experience. Proficiency with Mule 4.x (and knowledge of Mule 3.x). Experience in designing and implementing REST/SOAP APIs using RAML, JSON, and XML. Strong knowledge of DataWeave, MUnit, Maven, and Anypoint Studio. Familiarity with OAuth 2.0, JWT, and API security best practices including threat protection, rate limiting, and encryption. Experience with Git, Jenkins, and CI/CD processes. MuleSoft Certification (Developer Level 1 or higher) is highly desirable. Excellent problem-solving abilities, attention to detail, and commitment to quality.
Nice to Have: Experience with Salesforce, SAP, Workday, or other enterprise systems. Familiarity with Kafka, RabbitMQ, or similar messaging systems. Exposure to cloud platforms like AWS, Azure, or GCP. Understanding of DevOps practices and infrastructure-as-code tools like Terraform.
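Since the role emphasizes OAuth 2.0-secured APIs, here is a language-neutral illustrative sketch (Python with `requests`, not DataWeave) of a client-credentials token fetch followed by an authenticated call. The endpoints and credentials are placeholders, not MuleSoft-specific values.

```python
import requests

TOKEN_URL = "https://auth.example.com/oauth/token"   # placeholder
API_URL = "https://api.example.com/v1/orders"        # placeholder

def get_token(client_id: str, client_secret: str) -> str:
    """Standard OAuth 2.0 client-credentials grant."""
    resp = requests.post(
        TOKEN_URL,
        data={"grant_type": "client_credentials"},
        auth=(client_id, client_secret),
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

token = get_token("my-client-id", "my-client-secret")   # placeholder credentials
orders = requests.get(
    API_URL, headers={"Authorization": f"Bearer {token}"}, timeout=10
)
orders.raise_for_status()
print(orders.json())
```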
Posted 4 hours ago
13.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Release Manager – Tools & Infrastructure Location: Hyderabad Experience Level: 13 years+ Department: Engineering / DevOps Reporting To: Head of DevOps / Engineering Director About the Role: We are seeking a hands-on Release Manager with strong DevOps and Infrastructure knowledge to oversee software release pipelines, tooling, and automation processes across distributed systems. The ideal candidate will be responsible for managing releases, ensuring environment readiness, coordinating with engineering, SRE, and QA teams, and driving tooling upgrades and ecosystem health. This is a critical role that bridges the gap between development and operations—ensuring timely, stable, and secure delivery of applications across environments. Key Responsibilities: Release & Environment Management: Manage release schedules, timelines, and coordination with multiple delivery streams. Own the setup and consistency of lower environments and production cutover readiness. Ensure effective version control, build validation, and artifact management across CI/CD pipelines. Oversee rollback strategies, patch releases, and post-deployment validations. Toolchain Ownership: Manage and maintain DevOps tools such as Jenkins, GitHub Actions, Bitbucket, SonarQube, JFrog, Argo CD, and Terraform. Govern container orchestration through Kubernetes and Helm. Maintain secrets and credential hygiene through HashiCorp Vault and related tools. Infrastructure & Automation: Work closely with Cloud, DevOps, and SRE teams to ensure automated and secure deployments. Leverage GCP (VPC, Compute Engine, GKE, Load Balancer, IAM, VPN, GCS) for scalable infrastructure. Ensure adherence to infrastructure-as-code (IaC) standards using Terraform and Helm charts. Monitoring, Logging & Stability: Implement and manage observability tools such as Prometheus, Grafana, ELK, and Datadog. Monitor release impact, track service health post-deployment, and lead incident response if required. Drive continuous improvement for faster and safer releases by implementing lessons from RCAs. Compliance, Documentation & Coordination: Use Jira, Confluence, and ServiceNow for release planning, documentation, and service tickets. Implement basic security standards (OWASP, WAF, GCP Cloud Armor) in release practices. Conduct cross-team coordination with QA, Dev, CloudOps, and Security for aligned delivery. Show more Show less
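Post-deployment validation of the kind this release role describes is often a small script that gates promotion or rollback. This is a hedged sketch: the service list, endpoints, and the 200-only health criterion are assumptions.

```python
import sys
import urllib.request

# Hypothetical health endpoints checked after a release; a real list would come
# from the release manifest or service catalog.
SERVICES = {
    "orders": "https://orders.internal.example/healthz",
    "payments": "https://payments.internal.example/healthz",
}

def healthy(url: str, timeout: float = 5.0) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:   # covers URLError/HTTPError and network failures
        return False

failures = [name for name, url in SERVICES.items() if not healthy(url)]
if failures:
    # A real pipeline would trigger the documented rollback strategy here.
    sys.exit(f"post-deployment check failed for: {', '.join(failures)}")
print("all services healthy; release can be promoted")
```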
Posted 4 hours ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Overview Seeking an Associate Manager, Data Operations, to support our growing data organization. In this role, you will assist in maintaining data pipelines and corresponding platforms (on-prem and cloud) while working closely with global teams on DataOps initiatives. Support the day-to-day operations of data pipelines, ensuring data governance, reliability, and performance optimization on Microsoft Azure. Hands-on experience with Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and real-time streaming architectures is preferred. Assist in ensuring the availability, scalability, automation, and governance of enterprise data pipelines supporting analytics, AI/ML, and business intelligence. Contribute to DataOps programs, aligning with business objectives, data governance standards, and enterprise data strategy. Help implement real-time data observability, monitoring, and automation frameworks to improve data reliability, quality, and operational efficiency. Support the development of governance models and execution roadmaps to enhance efficiency across Azure, AWS, GCP, and on-prem environments. Work on CI/CD integration, data pipeline automation, and self-healing capabilities to improve enterprise-wide DataOps processes. Collaborate with cross-functional teams to support and maintain next-generation Data & Analytics platforms while promoting an agile and high-performing DataOps culture. Assist in the adoption of Data & Analytics technology transformations, ensuring automation for proactive issue identification and resolution. Partner with cross-functional teams to support process improvements, best practices, and operational efficiencies within DataOps. Responsibilities Assist in the implementation and optimization of enterprise-scale data pipelines using Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and Azure Stream Analytics. Support data ingestion, transformation, orchestration, and storage workflows, ensuring data reliability, integrity, and availability. Help ensure seamless batch, real-time, and streaming data processing, focusing on high availability and fault tolerance. Contribute to DataOps automation efforts, including CI/CD for data pipelines, automated testing, and version control using Azure DevOps and Terraform. Collaborate with Data Engineering, Analytics, AI/ML, CloudOps, and Business Intelligence teams to support data-driven decision-making. Assist in aligning DataOps practices with regulatory and security requirements by working with IT, data stewards, and compliance teams. Support data operations and sustainment activities, including testing and monitoring processes for global products and projects. Participate in data capture, storage, integration, governance, and analytics efforts, working alongside cross-functional teams. Assist in managing day-to-day DataOps activities, ensuring adherence to service-level agreements (SLAs) and business requirements. Engage with SMEs and business stakeholders to ensure data platform capabilities align with business needs. Contribute to Agile work intake and execution processes, helping to maintain efficiency in data platform teams. Help troubleshoot and resolve issues related to cloud infrastructure and data services in collaboration with technical teams. Support the development and automation of operational policies and procedures, improving efficiency and resilience. Assist in incident response and root cause analysis, contributing to self-healing mechanisms and mitigation strategies. 
Foster a customer-centric approach, advocating for operational excellence and continuous improvement in service delivery. Help build a collaborative, high-performing team culture, promoting automation and efficiency within DataOps. Adapt to shifting priorities and support cross-functional teams in maintaining productivity and achieving business goals. Utilize technical expertise in cloud and data operations to support service reliability and scalability. Qualifications 5+ years of technology work experience in a large-scale global organization, with CPG industry experience preferred. 5+ years of experience in Data & Analytics roles, with hands-on expertise in data operations and governance. 2+ years of experience working within a cross-functional IT organization, collaborating with multiple teams. Experience in a lead or senior support role, with a focus on DataOps execution and delivery. Strong communication skills, with the ability to collaborate with stakeholders and articulate technical concepts to non-technical audiences. Analytical and problem-solving abilities, with a focus on prioritizing customer needs and operational improvements. Customer-focused mindset, ensuring high-quality service delivery and operational efficiency. Growth mindset, with a willingness to learn and adapt to new technologies and methodologies in a fast-paced environment. Experience supporting data operations in a Microsoft Azure environment, including data pipeline automation. Familiarity with Site Reliability Engineering (SRE) principles, such as monitoring, automated issue remediation, and scalability improvements. Understanding of operational excellence in complex, high-availability data environments. Ability to collaborate across teams, building strong relationships with business and IT stakeholders. Basic understanding of data management concepts, including master data management, data governance, and analytics. Knowledge of data acquisition, data catalogs, data standards, and data management tools. Strong execution and organizational skills, with the ability to follow through on operational plans and drive measurable results. Adaptability in a dynamic, fast-paced environment, with the ability to shift priorities while maintaining productivity. Show more Show less
Posted 4 hours ago
8.0 - 15.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
TCS Hiring for Azure Cloud Architect (Platform)_PAN India
Experience: 8 to 15 Years Only Job Location: PAN India
Required Technical Skill Set: Proven experience as a Solution Architect with a focus on Microsoft Azure. Good knowledge of application development and migration. Knowledge of Java or .NET. Strong knowledge of Azure services: Azure Kubernetes Service (AKS), Azure App Services, Azure Functions, Azure Storage, and Azure DevOps. Experience in cloud-native application development and containerization (Docker, Kubernetes). Proficiency in Infrastructure as Code (IaC) tools (e.g., Terraform, ARM templates, Bicep). Strong knowledge of Azure Active Directory, identity management, and security best practices. Hands-on experience with CI/CD processes and DevOps practices. Knowledge of networking concepts in Azure (VNets, Load Balancers, Firewalls). Excellent communication and stakeholder management skills.
Key Responsibilities: Design end-to-end cloud solutions leveraging Microsoft Azure services. Develop architecture and solution blueprints that align with business objectives. Lead cloud adoption and migration strategies. Collaborate with development, operations, and security teams to implement best practices. Ensure solutions meet performance, scalability, availability, and security requirements. Optimize cloud cost and performance. Oversee the deployment of workloads on Azure using IaaS, PaaS, and SaaS services. Implement CI/CD pipelines, automation, and infrastructure as code (IaC). Stay updated on emerging Azure technologies and provide recommendations.
Kind Regards, Priyankha M
Posted 4 hours ago
8.0 years
0 Lacs
India
Remote
Position: Fullstack Developer (AI+React) Experience: 8+ years Work Mode: Remote Shift timings: 8 am-5 pm Notice Period: Immediate Experience with AI/ML: 3+ years
Tech Stack: React, Next.js, TypeScript, FastAPI, Python, PostgreSQL, MongoDB, GPT-4, LangChain, Terraform, AWS
Must-Haves: Expert-level proficiency in React, TypeScript, and Next.js, including SSR and SSG. Strong backend experience using Python and FastAPI, with a focus on API design, database modeling (PostgreSQL, MongoDB), and secure authentication protocols (e.g., JWT, OAuth2). Hands-on experience with prompt engineering and deploying large language models (LLMs) such as GPT-4, LLaMA, or open-source equivalents. Familiarity with ML model serving frameworks (e.g., TorchServe, BentoML) and container orchestration tools (Docker, Kubernetes). In-depth knowledge of the AI/ML development lifecycle, from data preprocessing to model monitoring and retraining. Strong understanding of Agile/Scrum, including writing user stories, defining acceptance criteria, and conducting code reviews. Proven ability to translate financial domain requirements (accounting, bookkeeping, reporting) into scalable product features. Experience integrating with financial services APIs (e.g., accounting platforms, payment gateways, banking feeds) is a plus. Excellent written and verbal communication skills in English, with the ability to explain complex ideas to non-technical audiences.
Skills: Next.js, API design, container orchestration tools, financial services API integration, Agile, Scrum, communication skills, PostgreSQL, prompt engineering, Terraform, MongoDB, GPT-4, AI/ML, React, AWS, ML, ML model serving frameworks, database modeling, TypeScript, Python, FastAPI, secure authentication techniques, LangChain
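For the FastAPI/Pydantic backend stack listed above, here is a minimal hedged sketch of a typed endpoint; the route, model fields, and bookkeeping scenario are invented for illustration and are not this employer's actual API.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class InvoiceSummaryRequest(BaseModel):
    """Hypothetical request body for a bookkeeping feature."""
    account_id: str
    period: str          # e.g. "2024-Q1"

class InvoiceSummaryResponse(BaseModel):
    account_id: str
    total_invoices: int
    flagged_for_review: int

@app.post("/summaries", response_model=InvoiceSummaryResponse)
def summarize(req: InvoiceSummaryRequest) -> InvoiceSummaryResponse:
    # Placeholder logic; a real service would query PostgreSQL/MongoDB and
    # possibly call an LLM chain to generate narrative explanations.
    return InvoiceSummaryResponse(
        account_id=req.account_id, total_invoices=0, flagged_for_review=0
    )

# Run with: uvicorn app:app --reload
```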
Posted 4 hours ago
3.0 years
0 Lacs
Andhra Pradesh, India
On-site
Bachelor's degree in Computer Science, Engineering, or related field. 10+ years of professional experience in Java development. 3+ years of experience designing and developing solutions in AWS cloud environments. Strong expertise in Java 8+, Spring Boot, RESTful API design, and microservices architecture. Hands-on experience with key AWS services: Lambda, API Gateway, S3, RDS, DynamoDB, ECS, SNS/SQS, CloudWatch. Solid understanding of infrastructure-as-code (IaC) tools like Terraform, AWS CloudFormation, or CDK. Experience with Agile/Scrum, version control (Git), and CI/CD pipelines. Strong communication and leadership skills, including leading distributed development teams. Lead end-to-end technical delivery of cloud-native applications built on Java and AWS. Design and architect secure, scalable, and resilient systems using microservices and serverless patterns. Guide the team in implementing solutions using Java (Spring Boot, REST APIs) and AWS services (e.g., Lambda, API Gateway, DynamoDB, S3, ECS, RDS, SNS/SQS). Participate in code reviews, ensure high code quality, and enforce clean architecture and design principles. Collaborate with DevOps engineers to define CI/CD pipelines using tools such as Jenkins, GitLab, or AWS CodePipeline. Mentor and coach developers on both technical skills and Agile best practices. Translate business and technical requirements into system designs and implementation plans. Ensure performance tuning, scalability, monitoring, and observability of deployed services. Stay current with new AWS offerings and Java development trends to drive innovation.
Posted 5 hours ago
8.0 years
0 Lacs
Andhra Pradesh, India
On-site
AWS DevOps
Mandatory skills: VMware, AWS Infra, EC2, containerization, DevOps, Jenkins, Kubernetes, Terraform. Secondary skills: Python, Lambda, Step Functions.
Design and implement cloud infrastructure solutions for cloud environments. Evaluate and recommend cloud infrastructure tools and services. Manage infrastructure performance, monitoring, reliability, and scalability.
Technical Skills: Overall experience of 8+ years, with 5+ years of infrastructure architecture experience. Cloud Platforms: Proficient in AWS along with other CSPs; good understanding of cloud networking services (VPC, load balancing, DNS, etc.). Infrastructure as Code (IaC): Proficient, with hands-on experience in Terraform or AWS CloudFormation for provisioning. Security: Strong knowledge of cloud security fundamentals (IAM, security groups, firewall rules). Automation: Proficient, with hands-on experience in CI/CD pipelines, containerization (Kubernetes, Docker), and configuration management tools (e.g., Chef, Puppet). Monitoring & Performance: Experience with cloud monitoring and logging tools (CloudWatch, Azure Monitor, Stackdriver). Disaster Recovery: Knowledge of backup, replication, and recovery strategies in cloud environments. Support cloud migration efforts and recommend strategies for optimization. Collaborate with DevOps and security teams to integrate best practices. Evaluate, implement, and streamline DevOps practices. Supervise, examine, and handle technical operations.
Posted 5 hours ago
0 years
0 Lacs
Andhra Pradesh, India
On-site
Azure DevOps
Mandatory skills: Azure Infra, Azure DevOps, Kubernetes, IaC (Terraform). Secondary skills: Python, Azure PaaS Services, Azure Functions, Azure App Services, Azure Active Directory.
Proficiency in Azure cloud services, including virtual machines, containers, networking, and databases. Experience in designing, implementing, and managing Continuous Integration/Continuous Deployment (CI/CD) pipelines using Azure DevOps, Jenkins, or similar tools. Knowledge of Infrastructure as Code tools like Terraform, ARM templates, or Azure Bicep for automating infrastructure deployment. Expertise in version control systems, particularly Git, for managing and tracking code changes. Strong PowerShell, Bash, or Python scripting skills for automating tasks and processes. Experience with monitoring and logging tools like Azure Monitor, Log Analytics, and Application Insights for performance and reliability management. Understanding of security best practices, including role-based access control (RBAC), Azure Policy, and managing secrets with tools like Azure Key Vault. Ability to collaborate effectively with development, operations, and security teams, with strong communication skills to drive DevOps culture. Knowledge of containerization technologies like Docker and orchestration platforms like Kubernetes on Azure Kubernetes Service (AKS). Strong problem-solving abilities to troubleshoot and resolve complex technical issues related to DevOps processes.
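The secrets-management requirement above (Azure Key Vault) typically looks like the following hedged Python sketch using azure-identity and azure-keyvault-secrets; the vault URL and secret name are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

VAULT_URL = "https://my-vault.vault.azure.net"   # placeholder vault

# DefaultAzureCredential resolves across local dev (az login), managed identity,
# and service principals, which suits CI/CD pipelines without code changes.
client = SecretClient(vault_url=VAULT_URL, credential=DefaultAzureCredential())

secret = client.get_secret("db-connection-string")   # hypothetical secret name
print(f"fetched secret '{secret.name}' (value deliberately not printed)")
```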
Posted 5 hours ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are seeking an experienced DevOps Engineer to join our team. In this role, you will be responsible for designing, implementing, and maintaining secure cloud infrastructure using cloud-based technologies, including Oracle and Microsoft platforms. You will build and support scalable and reliable application systems and automate deployments. Additionally, you will integrate various systems and technologies using REST APIs and automate the software development and deployment lifecycle. Leveraging automation and monitoring tools, along with AI-powered solutions, you will ensure the smooth operation of our cloud-based systems.
Key Areas of Responsibility: Implement automation to control and orchestrate cloud workloads, managing the build and deployment cycles for each deployed solution via CI/CD. Utilize a wide variety of cloud-based services, including containers, App Services, APIs, and SaaS-oriented integration. GitHub and CI/CD tools (e.g., Jenkins, GitHub Actions, Maven/ANT). Create and maintain build and deployment configurations using Helm and YAML. Manage the software change control process, including Quality Control and SCM audits, enforcing adherence to all change control and code management processes. Continuously manage and maintain releases, with a clear understanding of the release management process. Collaborate with cross-functional teams to ensure seamless integration and deployment of cloud-based solutions. Problem-solving, teamwork, and communication, emphasizing the collaborative nature of the role. Perform builds and environment configurations.
Required Skills and Experience: 5+ years of overall experience. Expertise in automating the software development and deployment lifecycle using Jenkins, GitHub Actions, SAST, DAST, compliance checks, and Oracle ERP DevOps tools. Proficient with Unix Shell Scripting, SQL*Plus, PL/SQL, and Oracle database objects. Understanding of branching models is important. Experience in creating cloud resources using automation tools. Strong hands-on experience with Terraform and Azure Infrastructure as Code (IaC). Hands-on experience in GitOps, Flux CD/Argo CD, Jenkins, Groovy. Building and deploying Java and .NET applications, Liquibase database deployments. Proficient with Azure cloud concepts, creating Azure Container Apps, Kubernetes, load balancers, Az CLI, kubectl, observability, APM, and app performance reviews. Azure AZ-104 or AZ-400 Certification is a plus.
Posted 5 hours ago
1.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title : Site Reliability Engineering (SRE) Intern – Azure Focus Location : Viman Nagar, Pune Duration : 1 Year (Internship with potential for extension or full-time opportunity) About the Role : We are looking for enthusiastic and motivated SRE Interns/Freshers with a keen interest in Cloud Computing, DevOps practices, and Azure platform. This internship offers hands-on experience in cloud infrastructure, automation, and system reliability engineering, giving you exposure to real-world environments and tools used in production systems. Key Responsibilities : • Support setup and management of Azure cloud resources (VMs, storage, networking) • Assist in monitoring and troubleshooting infrastructure and application issues • Work with version control systems like Git and participate in CI/CD processes • Contribute to automation scripts and infrastructure as code using tools like Terraform or Bicep • Collaborate with mentors and team members to understand system reliability and operational practices • Document procedures, issues, and solutions clearly Ideal Candidate Profile : ✅ Technical Skills : • Understanding of operating systems (Linux or Windows) • Basic networking knowledge (DNS, HTTP, TCP/IP) • Familiarity with cloud computing concepts (IaaS, PaaS, SaaS) • Exposure to Azure services like Virtual Machines, Resource Groups, Azure Storage, etc. • Basic command-line and scripting experience (Bash or PowerShell) • Familiarity with Git and basic CI/CD concepts ✅ Preferred (Nice to Have) : • Exposure to Azure CLI, Azure DevOps pipelines, or ARM templates • Basic knowledge of monitoring tools like Azure Monitor or Log Analytics • Hands-on experience with infrastructure-as-code tools (Terraform, Bicep) ✅ Soft Skills : • Strong problem-solving and analytical skills • Effective written and verbal communication • Willingness to learn, take initiative, and adapt in a fast-paced environment • Team-oriented with a collaborative mindset Educational Qualification : • Recently completed a degree in Computer Science, Information Technology, or a related field • Certifications like Microsoft Azure Fundamentals (AZ-900) are a plus What You’ll Gain : • Real-world experience working with cloud infrastructure and DevOps tools • Exposure to Site Reliability Engineering practices • Mentorship from experienced professionals • Opportunity to work on a capstone project • A potential pathway to full-time employment based on performance Show more Show less
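As a taste of the monitoring tasks this internship describes, here is a minimal hedged sketch of a health probe with exponential backoff; the URL, timeout, and retry counts are placeholders rather than a prescribed setup.

```python
import time
import urllib.request

URL = "https://myapp.example.com/health"   # placeholder endpoint

def check_with_backoff(url: str, attempts: int = 4) -> bool:
    """Retry a health probe with exponential backoff (1s, 2s, 4s, ...)."""
    for attempt in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except OSError:   # network errors and non-2xx responses
            pass
        time.sleep(2 ** attempt)
    return False

if __name__ == "__main__":
    print("UP" if check_with_backoff(URL) else "DOWN after retries")
```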
Posted 5 hours ago
89.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Full-time Company Description GFK - Growth from Knowledge. For over 89 years, we have earned the trust of our clients around the world by solving critical questions in their decision-making process. We fuel their growth by providing a complete understanding of their consumers’ buying behavior, and the dynamics impacting their markets, brands and media trends. In 2023, GfK combined with NIQ, bringing together two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights - delivered with advanced analytics through state-of-the-art platforms - GfK drives “Growth from Knowledge”. Job Description It's an exciting time to be a builder. Constant technological advances are creating an exciting new world for those who understand the value of data. The mission of NIQ’s Media Division is to turn NIQ into the global leader that transforms how consumer brands plan, activate and measure their media activities. Recombine is the delivery area focused on maximising the value of data assets in our NIQ Media Division. We apply advanced statistical and machine learning techniques to unlock deeper insights, whilst integrating data from multiple internal and external sources. Our teams develop data integration products across various markets and product areas, delivering enriched datasets that power client decision-making. Role Overview We are looking for a Principal Software Engineer for our Recombine delivery area to provide technical leadership within our development teams, ensuring best practices, architectural coherence, and effective collaboration across projects. This role is ideal for a highly experienced engineer who can bridge the gap between data engineering, data science, and software engineering, helping teams build scalable, maintainable, and well-structured data solutions. As a Principal Software Engineer, you will play a hands-on role in designing and implementing solutions while mentoring developers, influencing technical direction, and driving best practices in software and data engineering. This role includes line management responsibilities, ensuring the growth and development of team members. The role will be working within an AWS environment, leveraging the power of cloud-native technologies and modern data platforms Key Responsibilities Technical Leadership & Architecture Act as a technical architect, ensuring alignment between the work of multiple development teams in data engineering and data science. Design scalable, high-performance data processing solutions within AWS, considering factors such as governance, security, and maintainability. Drive the adoption of best practices in software development, including CI/CD, testing strategies, and cloud-native architecture. Work closely with Product Owners to translate business needs into technical solutions. Hands-on Development & Technical Excellence Lead by example through high-quality coding, code reviews, and proof-of-concept development. Solve complex engineering problems and contribute to critical design decisions. Ensure effective use of AWS services, including AWS Glue, AWS Lambda, Amazon S3, Redshift, and EMR. Develop and optimise data pipelines, data transformations, and ML workflows in a cloud environment. Line Management & Team Development Provide line management to engineers, ensuring their professional growth and development. Conduct performance reviews, set development goals, and mentor team members to enhance their skills. 
Foster a collaborative and high-performing engineering culture, promoting knowledge sharing and continuous improvement beyond team boundaries. Support hiring, onboarding, and career development initiatives within the engineering team. Collaboration & Cross-Team Coordination Act as the technical glue between data engineers, data scientists, and software developers, ensuring smooth integration of different components. Provide mentorship and guidance to developers, helping them level up their skills and technical understanding. Work with DevOps teams to improve deployment pipelines, observability, and infrastructure as code. Engage with stakeholders across the business, translating technical concepts into business-relevant insights. Governance, Security & Data Best Practices Champion data governance, lineage, and security across the platform. Advocate for and implement scalable data architecture patterns, such as Data Mesh, Lakehouse, or event-driven pipelines. Ensure compliance with industry standards, internal policies, and regulatory requirements. Qualifications Requirements & Experience Strong software engineering background with experience in designing and building production-grade applications in Python, Scala, Java, or similar languages. Proven experience with AWS-based data platforms, specifically AWS Glue, Redshift, Athena, S3, Lambda, and EMR. Expertise in Apache Spark and AWS Lake Formation, with experience building large-scale distributed data pipelines. Experience with workflow orchestration tools like Apache Airflow or AWS Step Functions. Cloud experience in AWS, including containerisation (Docker, Kubernetes, ECS, EKS) and infrastructure as code (Terraform, CloudFormation). Strong knowledge of modern software architecture, including microservices, event-driven systems, and distributed computing. Experience leading teams in an agile environment, with a strong understanding of CI/CD pipelines, automated testing, and DevOps practices. Excellent problem-solving and communication skills, with the ability to engage with both technical and non-technical stakeholders. Proven line management experience, including mentoring, career development, and performance management of engineering teams. Additional Information Our Benefits Flexible working environment Volunteer time off LinkedIn Learning Employee-Assistance-Program (EAP) About NIQ NIQ is the world’s leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world’s population. For more information, visit NIQ.com Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook Our commitment to Diversity, Equity, and Inclusion NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. 
We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
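The Airflow sketch referenced above: a minimal daily extract-transform-load DAG of the kind a data-integration pipeline might use, assuming Airflow 2.4+ (for the `schedule` argument). The DAG id, task names, and task bodies are hypothetical placeholders, not NIQ's actual pipelines.

```python
# Minimal daily ETL DAG (assumes Airflow 2.4+ for the `schedule` argument).
# DAG id, task names, and task bodies are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Pull raw data from upstream sources (placeholder).
    print("extracting raw data")


def transform(**context):
    # Apply cleansing and enrichment rules (placeholder).
    print("transforming data")


def load(**context):
    # Publish the enriched dataset for downstream use (placeholder).
    print("loading enriched dataset")


with DAG(
    dag_id="recombine_enrichment",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task
```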
Posted 5 hours ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
About AiSensy

AiSensy is a WhatsApp-based Marketing & Engagement platform helping businesses like Skullcandy, Vivo, Rentomojo, Physicswallah, and Cosco grow their revenues via WhatsApp.

Enabling 100,000+ businesses with WhatsApp engagement & marketing
400+ crore WhatsApp messages exchanged between businesses and users via AiSensy per year
Working with top brands like Delhi Transport Corporation, Vivo, Physicswallah & more
High impact, as businesses drive 25-80% of their revenues using the AiSensy platform
Mission-driven, growth-stage startup backed by Marsshot.vc, Bluelotus.vc & 50+ angel investors

Now, we’re looking for a DevOps Engineer to help scale our infrastructure and optimize performance for millions of users. 🚀

What You’ll Do (Key Responsibilities)

🔹 CI/CD & Automation: Implement, manage, and optimize CI/CD pipelines using AWS CodePipeline, GitHub Actions, or Jenkins. Automate deployment processes to improve efficiency and reduce downtime.
🔹 Infrastructure Management: Use Terraform, Ansible, Chef, Puppet, or Pulumi to manage infrastructure as code. Deploy and maintain Dockerized applications on Kubernetes clusters for scalability.
🔹 Cloud & Security: Work extensively with AWS (preferred) or other cloud platforms to build and maintain cloud infrastructure. Optimize cloud costs and ensure security best practices are in place.
🔹 Monitoring & Troubleshooting: Set up and manage monitoring tools like CloudWatch, Prometheus, Datadog, New Relic, or Grafana to track system performance and uptime. Proactively identify and resolve infrastructure-related issues.
🔹 Scripting & Automation: Use Python or Bash scripting to automate repetitive DevOps tasks. Build internal tools for system health monitoring, logging, and debugging (a small health-check sketch follows this posting).

What We’re Looking For (Must-Have Skills)

✅ Version Control: Proficiency in Git (GitLab / GitHub / Bitbucket)
✅ CI/CD Tools: Hands-on experience with AWS CodePipeline, GitHub Actions, or Jenkins
✅ Infrastructure as Code: Strong knowledge of Terraform, Ansible, Chef, or Pulumi
✅ Containerization & Orchestration: Experience with Docker & Kubernetes
✅ Cloud Expertise: Hands-on experience with AWS (preferred) or other cloud providers
✅ Monitoring & Alerting: Familiarity with CloudWatch, Prometheus, Datadog, or Grafana
✅ Scripting Knowledge: Python or Bash for automation

Bonus Skills (Good to Have, Not Mandatory)

➕ AWS certifications: Solutions Architect, DevOps Engineer, Security, Networking
➕ Experience with Microsoft/Linux/F5 technologies
➕ Hands-on knowledge of database servers
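The health-check sketch referenced above: a minimal Python example of the kind of internal monitoring tool the responsibilities describe, listing CloudWatch alarms currently firing. It assumes boto3 and AWS credentials are configured; the default region and the function name are hypothetical placeholders.

```python
# List CloudWatch alarms currently in the ALARM state (assumes boto3 and
# AWS credentials are configured; region and function name are placeholders).
import boto3


def alarms_currently_firing(region: str = "ap-south-1") -> list:
    """Return the names of CloudWatch metric alarms in the ALARM state."""
    cloudwatch = boto3.client("cloudwatch", region_name=region)
    names = []
    paginator = cloudwatch.get_paginator("describe_alarms")
    for page in paginator.paginate(StateValue="ALARM"):
        names.extend(alarm["AlarmName"] for alarm in page["MetricAlarms"])
    return names


if __name__ == "__main__":
    firing = alarms_currently_firing()
    print(f"{len(firing)} alarm(s) firing: {firing}")
```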
Posted 5 hours ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
A Snapshot of Your Day

As a Senior DevOps Engineer in our Industrial IoT team, you'll be at the intersection of cloud innovation and operational technology. Your day involves architecting and maintaining the AWS infrastructure that powers our IoT edge devices deployed across manufacturing facilities worldwide. You'll develop and optimize complex CI/CD pipelines that enable seamless deployment from cloud to edge, troubleshoot challenging infrastructure issues, and collaborate with cross-functional teams to enhance our Industrial IoT capabilities. Your expertise will ensure our Linux-based edge devices receive secure updates, maintain reliable connections to the AWS cloud, and operate efficiently in industrial environments.

We offer flexible work hours and the choice of working from the office or from home. Join us on this exciting journey and play an important role in defining the future of Industrial IoT at Siemens Energy.

What You Bring

Engineering graduate with 5+ years of DevOps experience
Python skills with object-oriented programming concepts
Experience with AWS core services (IAM, Lambda, S3, API, Systems Manager, etc.)
Strong infrastructure-as-code experience (CloudFormation, CDK, or Terraform required)
Experience building complex CI/CD pipelines and Git version control proficiency
Linux skills including system administration, hardening, and shell scripting
Experience in technical support, debugging, and release management
DevOps culture and agile/Scrum methodology knowledge
Must provide own code samples via GitLab / GitHub
Industrial IoT and manufacturing exposure a plus
Fluency in English

How You’ll Make An Impact

Develop and optimize complex GitLab CI/CD pipelines
Design and maintain AWS infrastructure using CloudFormation/CDK (a short CDK sketch in Python follows this posting)
Collaborate with internal and external development teams
Lead troubleshooting for complex infrastructure issues
Ensure compliance with security standards and best practices
Support internal developers and manage production releases
Monitor and improve security and performance
Organize production releases

About The Team

You will be part of a dedicated team focused on Industrial IoT for Siemens Energy internal operations. This team builds and manages the AWS cloud infrastructure powering our global OT services and edge devices. We operate in an agile environment, balancing technical innovation with the practical demands of industrial systems. Our work directly impacts manufacturing efficiency and operational reliability across Siemens Energy facilities worldwide.

Who is Siemens Energy?

At Siemens Energy, we are more than just an energy technology company. We meet the growing energy demand across 90+ countries while ensuring our climate is protected. With ~100,000 dedicated employees, we not only generate electricity for over 16% of the global community, but we’re also using our technology to help protect people and the environment.

Our global team is committed to making sustainable, reliable, and affordable energy a reality by pushing the boundaries of what is possible. We uphold a 150-year legacy of innovation that encourages our search for people who will support our focus on decarbonization, new technologies, and energy transformation. Find out how you can make a difference at Siemens Energy: https://www.siemens-energy.com/employeevideo

Our Commitment to Diversity

Lucky for us, we are not all the same. Through diversity, we generate power. We run on inclusion, and our combined creative energy is fueled by over 130 nationalities.
Siemens Energy celebrates character – no matter what ethnic background, gender, age, religion, identity, or disability. We energize society, all of society, and we do not discriminate based on our differences.

Rewards/Benefits

All employees are automatically covered under the Medical Insurance: a considerable company-paid family-floater cover for the employee, spouse, and two dependent children up to 25 years of age.
Siemens Energy offers all employees the option of a Meal Card, per the terms and conditions prescribed in company policy, as a tax-saving component of CTC.
Flexi Pay lets employees customize the amount of certain salary components within a defined range, optimizing tax benefits; each employee can thereby decide on the best possible monthly net income from the same fixed base pay.

https://jobs.siemens-energy.com/jobs
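The CDK sketch referenced above: a minimal AWS CDK v2 stack in Python defining an S3 bucket and a Lambda function with read access to it. The stack and resource names and the local "lambda/" asset directory are hypothetical; this illustrates the pattern, not Siemens Energy's actual infrastructure.

```python
# Minimal AWS CDK v2 stack in Python: an S3 bucket plus a Lambda function
# with read access to it. All names are hypothetical; the Lambda source is
# assumed to live in a local "lambda/" directory.
import aws_cdk as cdk
from aws_cdk import aws_lambda as _lambda
from aws_cdk import aws_s3 as s3
from constructs import Construct


class EdgeSupportStack(cdk.Stack):  # hypothetical stack name
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Versioned, encrypted bucket for build/update artifacts (placeholder).
        artifacts = s3.Bucket(
            self,
            "ArtifactsBucket",
            versioned=True,
            encryption=s3.BucketEncryption.S3_MANAGED,
        )

        # Lambda that processes new artifacts (handler code not shown).
        processor = _lambda.Function(
            self,
            "ProcessArtifact",
            runtime=_lambda.Runtime.PYTHON_3_11,
            handler="index.handler",
            code=_lambda.Code.from_asset("lambda"),
        )

        # Grant least-privilege read access to the bucket.
        artifacts.grant_read(processor)


app = cdk.App()
EdgeSupportStack(app, "EdgeSupportStack")
app.synth()
```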
Posted 5 hours ago
6.0 - 12.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
TCS Hiring for Azure Cloud Engineer_PAN India

Experience: 6 to 12 Years Only
Job Location: PAN India

Required Technical Skill Set: Azure Cloud Engineer with Azure VMs, Blob Storage, Azure SQL, Azure Functions, AKS, etc.

Desired Competencies (Technical/Behavioral Competency)

Must-Have
• Design and deploy scalable, highly available, and fault-tolerant systems on Azure. Proven experience with Microsoft Azure services (Compute, Storage, Networking, Security).
• Strong understanding of networking concepts (DNS, VPN, VNet, NSG, Load Balancers).
• Manage and monitor cloud infrastructure using Azure Monitor, Log Analytics, and other tools.
• Implement and manage virtual networks, storage accounts, and Azure Active Directory.
• Hands-on experience with Infrastructure as Code (IaC) tools like ARM and Terraform. Experience with scripting languages (PowerShell, Bash, or Python); a short Azure SDK for Python sketch follows this posting.
• Ensure security best practices and compliance standards are followed.
• Troubleshoot and resolve issues related to cloud infrastructure and services.
• Experience in DevOps to support CI/CD pipelines and containerized applications (AKS, Docker).
• Optimize cloud costs and performance.
• Familiarity with Azure DevOps, GitHub Actions, or other CI/CD tools.
• Experience in identity and access management (IAM), RBAC, and Azure AD.

Good-to-Have
• Basic knowledge of Red Hat Linux and Windows operating systems.
• Proficiency with the Azure Portal, the Azure CLI, and Azure APIs.
• Experience in migration using Azure migration tools.
• Hands-on experience with DevOps tools like Jenkins and Git will be an added advantage.

Role descriptions / Expectations from the Role
1. Ability to understand and articulate the different functions within Azure and implement appropriate solutions, with HLD and LLD around them.
2. Ability to identify and gather requirements to define a solution to be built and operated on Azure, and to perform high-level and low-level design for the Azure platform.
3. Capability to provide Azure operations and deployment guidance and best practices throughout the lifecycle of a project.
4. Understanding of the significance of the different monitoring metrics and their threshold values, and the ability to take necessary corrective measures based on those thresholds.
5. Knowledge of automation to reduce the number of incidents or repetitive incidents is preferred.
6. The Azure Engineer will be responsible for provisioning the services as per the design.

Kind Regards,
Priyankha M
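The Azure SDK sketch referenced above: a minimal Python example enumerating all VMs in a subscription, of the kind an Azure Cloud Engineer might script for inventory or monitoring. The subscription ID is a placeholder; DefaultAzureCredential assumes an `az login` session or a managed identity.

```python
# Enumerate all VMs in a subscription with the Azure SDK for Python
# (azure-identity + azure-mgmt-compute). The subscription ID is a
# placeholder; DefaultAzureCredential picks up `az login` or a managed identity.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder

credential = DefaultAzureCredential()
compute = ComputeManagementClient(credential, SUBSCRIPTION_ID)

# Print name, location, and full resource ID for every VM in the subscription.
for vm in compute.virtual_machines.list_all():
    print(vm.name, vm.location, vm.id)
```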
Posted 5 hours ago
8.0 years
0 Lacs
Vadodara, Gujarat, India
On-site
We’re reinventing the market research industry. Let’s reinvent it together.

At Numerator, we believe tomorrow’s success starts with today’s market intelligence. We empower the world’s leading brands and retailers with unmatched insights into consumer behavior and the influencers that drive it.

Numerator provides unparalleled consumer insights at a massive scale. Our technology harnesses data through the application of gamified mobile apps and sophisticated web crawling technology to deliver an unmatched view of the consumer shopping and purchase experience.

Numerator is looking for a passionate Senior Engineer to join our Receipt Processing team. As part of our Receipt Processing tools team, you will be responsible for our receipt transcription system, which has processed over a billion receipts and adds millions every week. This is a high-growth, impactful role that will give you plenty of opportunity to drive decisions for projects from inception through production. If you are seeking an environment where you get to do meaningful work with other great engineers, then we want to hear from you!

What You’ll Get to Do

Help create the design, architecture, and execution of everything from backend APIs to data processing and databases.
Make decisions about code design, architecture, and refactoring to balance technical debt against delivering functionality.
Work with stakeholders to identify project risks and recommend mitigating solutions.
Collaborate with our cross-functional team to build powerful and easy-to-use products.
Make architectural designs and decisions to improve the availability of the system.
Maintain the system in general, including on-call bug-fixing for mission-critical issues.

Example Projects

Build out and expand the rules-engine framework for transcribing our receipt data, leveraging the inherent structure and spacing of the tabular data in a receipt.
Build out a data QA process to approve the output of both our machine learning algorithms and our hundreds of data associates attributing products.
Refactor our backend to optimize for scale as the number of receipts we need to process continues to grow.

Our Tech Stack

Web: HTML, JavaScript, CSS, Angular
Backend: Python, Django, Aurora MySQL, Redis
Distributed Computing: Celery, Airflow, Azkaban, RabbitMQ (a brief Celery sketch follows this posting)
Data Warehouse: Snowflake
Infrastructure: AWS EC2, Kubernetes, Docker, Helm, Terraform

Requirements

Have 8+ years of experience in a backend role.
Programming experience in Python or another object-oriented language.
An eagerness to learn new things and improve upon existing skills, abilities, and practices.
Familiarity with web technology, such as HTTP, JSON, HTML, and JavaScript UIs.
Experience with databases, SQL or NoSQL.
Experience in an Agile software development environment.
Experience with version control systems (Git, Subversion, etc.).
A real passion for clean code and finding elegant solutions to problems.
Knowledge and abilities in Python and cloud-based technologies.
Motivation to participate in ongoing learning and growth through pair programming, test-driven development, code reviews, and application of new technologies and best practices.
You look ahead to identify opportunities and foster a culture of innovation.
B.E/B.Tech in Computer Science or a related field, or equivalent work experience.
Knowledge of Kubernetes and Docker development.

Nice to Haves

Previous experience leading an engineering team.
Experience in UI frameworks such as React or Angular.
Experience with REST services and API design.
Knowledge of TCP/IP sockets.
Programming experience on Unix-based infrastructure.
Knowledge of cloud-based systems (EC2, Rackspace, etc.).
Expertise with big data, analytics, and personalization.
Start-up or CPG industry experience.
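The Celery sketch referenced in the tech stack above: a minimal asynchronous receipt-transcription task with retries, assuming a local RabbitMQ broker. The app name, broker URL, and task logic are hypothetical placeholders, not Numerator's actual code.

```python
# Asynchronous receipt-transcription task with retries (Celery + RabbitMQ,
# matching the stack above). Broker URL, app name, and task logic are
# hypothetical placeholders.
from celery import Celery

app = Celery(
    "receipts",  # hypothetical app name
    broker="amqp://guest:guest@localhost:5672//",  # local RabbitMQ default
)


@app.task(bind=True, max_retries=3)
def transcribe_receipt(self, receipt_id: int) -> dict:
    """Parse one receipt into structured line items (placeholder logic)."""
    try:
        # Real code would fetch the receipt image, run OCR / rules-engine
        # transcription, and persist the structured output.
        return {"receipt_id": receipt_id, "status": "transcribed"}
    except Exception as exc:
        # Retry transient failures with exponential backoff.
        raise self.retry(exc=exc, countdown=2 ** self.request.retries)


# Usage from application code (e.g. a Django view):
#   transcribe_receipt.delay(12345)
```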
Posted 5 hours ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Company Description

CS Optima offers cloud-native solutions and partners with customer teams to build and operate applications efficiently in the cloud. We focus on areas like new cloud-based solution development, application modernization, and building scalable platforms with AWS serverless, AI/ML, Gen AI, etc.

Role Description

This is a full-time on-site role located in Chennai for a Python Sr Programmer with AWS cloud experience. The Sr Programmer will be responsible for back-end web development, software development, programming, and object-oriented programming (OOP).

Qualifications

Python-based API development experience (a short Lambda/DynamoDB sketch follows this posting)
4-8 years of total experience
Programming and Object-Oriented Programming (OOP) skills
1-2 years of experience with Terraform and AWS services like Lambda, AppSync, and DynamoDB
Excellent problem-solving abilities
Strong communication and collaboration skills
Experience working on enterprise-level application development
Bachelor's degree in Computer Science or related field
Agile development experience
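A minimal sketch of the Lambda/AppSync/DynamoDB combination the qualifications mention: a Python Lambda handler resolving a get-item request against DynamoDB. The table name, key schema, environment variable, and the AppSync event shape shown are assumptions for illustration, not CS Optima's actual code.

```python
# Python Lambda handler resolving a get-item request against DynamoDB,
# shaped like an AppSync direct-Lambda resolver payload. Table name, key
# schema, and environment variable are assumptions for illustration.
import os

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(os.environ.get("TABLE_NAME", "Items"))  # placeholder


def handler(event, context):
    """Return one item by id; AppSync passes field arguments in the event."""
    item_id = event["arguments"]["id"]  # assumed AppSync resolver payload
    response = table.get_item(Key={"id": item_id})
    return response.get("Item")  # None if the item does not exist
```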
Posted 5 hours ago