16.0 years
0 Lacs
India
On-site
About TechBlocks: TechBlocks is a global digital product engineering company with 16+ years of experience helping Fortune 500 enterprises and high-growth brands accelerate innovation, modernize technology, and drive digital transformation. From cloud solutions and data engineering to experience design and platform modernization, we help businesses solve complex challenges and unlock new growth opportunities.
Job Overview: We are looking for dynamic, motivated individuals to deliver exceptional solutions for the production resiliency of our systems. The role combines software engineering, operations, and DevOps skills to find efficient ways of managing and operating applications, and it carries a high level of responsibility and accountability for delivering technical solutions.
Experience Required: 10+ years in DevOps engineering roles with proven expertise in CI/CD, infrastructure automation, AWS, and Ansible.
Responsibilities: Design, develop, and maintain Ansible playbooks, roles, and inventory structures to automate complex infrastructure and application deployment workflows. Leverage Ansible Tower / AWX for centralized automation, RBAC (Role-Based Access Control), and job scheduling. Develop and maintain infrastructure automation using tools like Terraform and CloudFormation (for AWS environments). Manage and harden Linux-based systems (RHEL, CentOS, Ubuntu), ensuring high availability and security. Write efficient and reusable scripts in Python and Bash for system automation, monitoring, and reporting. Architect, deploy, and manage solutions across cloud platforms such as AWS (EC2, VPC, IAM, S3, Lambda, CloudWatch, etc.). Integrate infrastructure automation (Ansible/Terraform) into pipeline workflows for continuous delivery of infrastructure changes. Orchestrate and deploy containers at scale using Kubernetes, including Helm chart management and K8s resource optimization. Set up and manage monitoring tools such as Prometheus, Grafana, the ELK Stack, or other observability platforms.
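For illustration only (not part of the job description above): a minimal Python/boto3 sketch of the kind of automation and reporting script this role describes. It assumes boto3 is installed and AWS credentials and a region are already configured; the "Owner" tag convention is an assumption, not a requirement from the posting.

```python
# Report running EC2 instances that are missing an "Owner" tag (illustrative).
import boto3


def untagged_instances(region: str = "ap-south-1") -> list[str]:
    """Return IDs of running EC2 instances without an 'Owner' tag."""
    ec2 = boto3.client("ec2", region_name=region)
    paginator = ec2.get_paginator("describe_instances")
    missing = []
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {tag["Key"] for tag in instance.get("Tags", [])}
                if "Owner" not in tags:
                    missing.append(instance["InstanceId"])
    return missing


if __name__ == "__main__":
    for instance_id in untagged_instances():
        print(instance_id)
```

A script like this could be scheduled or wrapped in an Ansible task; the point is only the pattern of paginated API calls plus light filtering and reporting logic.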
Posted 1 week ago
5.0 - 8.0 years
0 Lacs
India
On-site
We’re not just building better tech. We’re rewriting how data moves and what the world can do with it. With Confluent, data doesn’t sit still. Our platform puts information in motion, streaming in near real-time so companies can react faster, build smarter, and deliver experiences as dynamic as the world around them. It takes a certain kind of person to join this team. Those who ask hard questions, give honest feedback, and show up for each other. No egos, no solo acts. Just smart, curious humans pushing toward something bigger, together. One Confluent. One Team. One Data Streaming Platform. About The Role At Confluent, we live by a core value: Earn Our Customers' Love. As a Customer Solutions Consultant (CSC), you'll be a vital partner to our customers, empowering them to unlock the full value of their investment in our cutting-edge streaming data platform. In this dynamic role, you'll manage a diverse portfolio of customers, from Mid-Market to Enterprise and Strategic accounts. You'll work hand-in-hand with teams across Customer Solutions, Sales, Product, and Engineering, driving activities that ensure technical health, product adoption, and demonstrable value realization. Your efforts will be critical in fostering customer growth and retention. This position is perfect for individuals who are passionate about technology, eager to solve complex business problems, and possess a strong customer-centric mindset. You'll deepen your technical expertise in Kafka, Flink, and Confluent IP, while leveraging your customer-facing skills in a high-growth environment. You'll collaborate with some of the world's most renowned companies, helping them achieve mission-critical outcomes. We're looking for curious, motivated professionals ready to accelerate their development and make a significant impact from day one. What You Will Do Serve as a Trusted Technical Advisor: Build strong relationships with customers, becoming their go-to technical expert across your diverse portfolio. Proactively support customers through the technical lifecycle, including architecture planning, cluster and security design, monitoring, and automation. Lead Post-Sale Engagements: Act as a primary technical contact post-sale, coordinating with internal teams to ensure successful outcomes. Guide customers in maturing their data streaming utilization and optimizing their usage through regular technical health reviews. Introduce new product capabilities via roadmap review sessions and plan for their adoption. Drive Customer Success: Collaborate with support engineers to troubleshoot issues, identify root causes, and provide actionable insights for customers to take corrective action. Pinpoint technical objections and strategize to overcome adoption blockers. Identify potential risks and execute mitigation plans with internal stakeholders to prevent customer churn. Develop Deep Technical Expertise: Cultivate an in-depth understanding of Confluent's technologies and the intricacies of building streaming applications to resolve complex customer challenges. What You Will Bring Experience & Technical Aptitude: 5-8 years of experience in Solutions Engineering, Software Development, Data Engineering, Data Architecture, Cloud Architecture, or similar roles. You'll have a passion for solving complex technical problems with a strong understanding of modern infrastructure and streaming technologies, thriving as a self-starter in a fast-paced environment. 
Exceptional Communication: Excellent interpersonal and communication skills, with the ability to concisely explain complex issues and solutions to a variety of technical and non-technical personas. Customer Portfolio Management: Demonstrated ability to manage a large customer portfolio, paying strict attention to detail and delivering results across multiple initiatives like driving expansion, customer satisfaction, feature adoption, and retention. Cloud & Networking Expertise: Experience with cloud and on-premises architectures, along with a solid understanding of cloud networking and security technologies (e.g., VPC, Private Link, Private Service Connect, TLS/SSL, SASL, Kerberos). Distributed Systems Knowledge: Experience with distributed systems and infrastructure software such as databases, message queues, Kubernetes, serverless technologies, and/or Big Data products, including developing ETL applications. Development & Automation: Experience with software development tools, configuration management, infrastructure automation, and CI/CD tooling (e.g., Terraform, Ansible) is a plus. Programming Flexibility: Development language agnostic, with proficiency in Java, Python, or SQL. Customer-Centric Mindset: A strong customer-centric approach, understanding the customer journey framework and the ability to prescribe ideal outcomes, guiding customers along their path to success. Project Management Skills: Proven project management experience for effective internal stakeholder management, coordinating with various teams to ensure overall account success. Ready to build what's next? Let’s get in motion. Come As You Are Belonging isn’t a perk here. It’s the baseline. We work across time zones and backgrounds, knowing the best ideas come from different perspectives. And we make space for everyone to lead, grow, and challenge what’s possible. We’re proud to be an equal opportunity workplace. Employment decisions are based on job-related criteria, without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, veteran status, or any other classification protected by law.
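For illustration only (not part of the role description): a minimal Python producer using the confluent-kafka client, the kind of building block behind the streaming applications this role supports. It assumes the confluent-kafka package and a reachable broker; the broker address and topic name are placeholders.

```python
# Minimal Confluent Kafka producer sketch (illustrative; names are placeholders).
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})


def delivery_report(err, msg):
    # Called once per message to confirm delivery or surface an error.
    if err is not None:
        print(f"Delivery failed: {err}")
    else:
        print(f"Delivered to {msg.topic()} [{msg.partition()}] at offset {msg.offset()}")


producer.produce(
    "orders",                       # assumed, pre-created topic
    key="order-123",
    value='{"amount": 42}',
    callback=delivery_report,
)
producer.flush()  # block until all queued messages are delivered
```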
Posted 1 week ago
5.0 years
0 Lacs
India
On-site
Job Summary: Senior Engineer - Sports Platform Engineering Services
Contract Terms: Permanent
THE TEAM
You will be working as part of a cross-functional platform team to deliver infrastructure, platform and database services for the Sports ticketing product(s) in our international markets, partnering with product and software engineering teams to ensure alignment on achieving business goals for the international Sports product suite.
THE JOB
Ticketmaster Sport is part of Live Nation Entertainment, the world’s leading live entertainment company, comprised of global market leaders: Ticketmaster, Live Nation Concerts, LN Media and Artist Nation Management. You will consult on and help to implement solutions as part of a Product Delivery team, enabling teams to deliver software faster through the creation of tooling and automation and providing operational support for a range of products, including the delivery of ticketing and associated solutions for major tournaments and high-profile sports clubs and events. Because our business is online 24/7, you will be required to work out of hours and provide on-call duty on a rota basis.
WHAT YOU WILL BE DOING
Supporting and maintaining a hybrid Windows and Linux infrastructure, ensuring stability, performance, and operational efficiency. Partnering with product engineering teams to bring new features and platform components into production, contributing to a seamless deployment pipeline. Automating recurring tasks, deployments, and testing workflows using infrastructure-as-code and scripting tools to improve consistency and speed. Designing and implementing highly available, fault-tolerant systems that meet performance and scalability demands. Planning, organizing, and clearly communicating project progress and outcomes to stakeholders and team members. Driving continuous improvements in system architecture, security, and operational processes to meet PCI compliance and internal standards. Diagnosing and resolving complex issues across the full technology stack, from infrastructure to application level. Conducting regular infrastructure audits, identifying gaps, and maintaining a well-prioritized backlog of improvements and enhancements.
WHAT YOU NEED TO KNOW (TECHNICAL SKILLS)
Solid 5+ years of hands-on experience working with public cloud platforms, particularly AWS, including infrastructure provisioning and cloud-native services. Proficient in scripting languages such as PowerShell, Bash, and Python, with a strong focus on automation and operational efficiency. Skilled in using configuration management tools like Ansible, Chef, or Octopus Deploy to streamline and standardize infrastructure deployments. Strong knowledge of Windows and Linux server configuration and administration, with the ability to support hybrid environments. Experience managing and maintaining Active Directory, including integration with cloud services and identity platforms. Familiar with network storage technologies, such as NetApp and Amazon S3, including setup, management, and optimization. Proven experience with virtualization platforms, including VMware, Hyper-V, and Xen, supporting scalable, resilient systems. Practical experience provisioning infrastructure using Terraform or CloudFormation, following infrastructure-as-code principles. Working knowledge of secrets and service discovery tools, such as Vault and Consul, ensuring secure and reliable platform operations.
Comfortable working with and integrating RESTful APIs, enabling automation, observability, and platform extensibility.
YOU (BEHAVIOURAL SKILLS)
Applies advanced troubleshooting skills to proactively resolve issues and minimize operational disruption. Demonstrates strong analytical thinking and a solution-oriented mindset, regularly identifying opportunities for improvement and innovation. Uses sound judgment to select appropriate methods, tools, and approaches for solving complex technical challenges. Actively contributes to the design and architecture of systems, ensuring alignment with business goals and technical strategy. Regularly reviews performance, security, and quality metrics, identifying trends and taking action to maintain operational excellence. Embraces new ideas with an open and adaptive mindset, actively seeking opportunities to experiment, learn, and grow. Shares and applies proven solutions and best practices from across teams to drive consistency and efficiency across the organization. Consistently delivers work to a high standard, demonstrating ownership, precision, and a commitment to continuous improvement.
LIFE AT TICKETMASTER
We are proud to be a part of Live Nation Entertainment, the world’s largest live entertainment company. Our vision at Ticketmaster is to connect people around the world to the live events they love. As the world’s largest ticket marketplace and the leading global provider of enterprise tools and services for the live entertainment business, we are uniquely positioned to successfully deliver on that vision. We do it all with an intense passion for Live and an inspiring and diverse culture driven by accessible leaders, attentive managers, and enthusiastic teams. If you’re passionate about live entertainment like we are, and you want to work at a company dedicated to helping millions of fans experience it, we want to hear from you. Our work is guided by our values: Reliability - We understand that fans and clients rely on us to power their live event experiences, and we rely on each other to make it happen. Teamwork - We believe individual achievement pales in comparison to the level of success that can be achieved by a team. Integrity - We are committed to the highest moral and ethical standards on behalf of the countless partners and stakeholders we represent. Belonging - We are committed to building a culture in which all people can be their authentic selves, have an equal voice and opportunities to thrive.
EQUAL OPPORTUNITIES
We are passionate and committed to our people and go beyond the rhetoric of diversity and inclusion. You will be working in an inclusive environment and be encouraged to bring your whole self to work. We will do all that we can to help you successfully balance your work and home life. As a growing business we will encourage you to develop your professional and personal aspirations, enjoy new experiences, and learn from the talented people you will be working with. It's talent that matters to us and we encourage applications from people irrespective of their gender, race, sexual orientation, religion, age, disability status or caring responsibilities. #LI-AK
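As a small illustration of the RESTful API integration mentioned in the technical skills above (not part of the posting): a Python health check that could feed automation or monitoring. It assumes the requests package is installed; the endpoint URL is hypothetical.

```python
# Poll a hypothetical REST health endpoint and exit non-zero on failure.
import sys

import requests

HEALTH_URL = "https://ops.example.internal/api/v1/health"  # placeholder URL


def check_health(timeout: float = 5.0) -> bool:
    try:
        response = requests.get(HEALTH_URL, timeout=timeout)
        response.raise_for_status()
        return response.json().get("status") == "ok"
    except requests.RequestException as exc:
        print(f"Health check failed: {exc}", file=sys.stderr)
        return False


if __name__ == "__main__":
    sys.exit(0 if check_health() else 1)
```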
Posted 1 week ago
5.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Job Description
Job Location: Chennai or Mumbai or Gurgaon
High-Value Professional Experience And Skills
Cloud Migrations & Architecture: Proven leadership in designing and executing application/infrastructure migration projects (on-prem to cloud). Expert in cloud architecture design and implementation to solve complex business problems and achieve team goals. Strong familiarity with Microsoft Azure; AWS and GCP experience also valued.
DevOps & Automation: Experience building and managing automated CI/CD pipelines (Preferred: GitHub Actions; Also considered: Jenkins, Argo CD). Proficient in Infrastructure-as-Code (IaC) (Preferred: Terraform; Also considered: Ansible, Puppet, ARM templates). Expertise in managing containerized workloads (Preferred: AKS & Helm; Also considered: EKS, other Kubernetes distributions, Docker, JFrog). Skilled in serverless computing (e.g., Logic Apps, Azure/AWS Functions, WebJobs, Lambda).
Monitoring, Security & Analytics: Proficient in logging and monitoring tools (e.g., Fluentd, Prometheus, Grafana, Azure Monitor, Log Analytics). Strong understanding of cloud-native network security (e.g., Azure Policy, AD/RBAC, ACLs, NSG rules, private endpoints). Exposure to big data analytics platforms (e.g., Databricks, Synapse).
Other Desirable Professional Experience And Skills: Strong technologist with the ability to advise on cloud best practices. Skilled in multi-component system integration and troubleshooting. Budgeting and cost optimization experience in cloud environments. Experience in performance analysis and application tuning. Expertise in secrets management (Preferred: HashiCorp Vault; Also: Azure Key Vault, AWS Secrets Manager). Familiarity with Kubernetes service meshes (Preferred: Linkerd; Also: Istio, Traefik Mesh). Scripting and coding proficiency in various environments (e.g., Bash/Sh, PowerShell, Python, Java). Familiar with tools like Jira, Confluence, Azure Storage Explorer, MySQL Workbench, Maven.
Basic Professional Experience And Skills: Solid understanding of SDLC, change control processes, and related procedures. Hands-on experience with source control and code repository tools (e.g., Git/GitHub/GitLab, VS Code, SVN). Ability to articulate and present technical solutions to both technical and non-technical audiences.
Required Education And Professional Experience: 5+ years of overall professional IT experience. 2+ years of hands-on experience in DevOps/Site Reliability Engineering (SRE) roles on major cloud platforms. Bachelor’s degree in Engineering, Computer Science, or IT; advanced degrees preferred. Industry certifications in Cloud Architecture or Development (Preferred: Azure; Also: AWS, GCP).
Skills: Any cloud experience (Azure/AWS); Terraform, Kubernetes, any CI/CD tool; security and code quality tools such as Wiz, Snyk, Qualys, Mend, Checkmarx, Dependabot, etc. (experience with any of these tools); secrets management with HashiCorp Vault / Akeyless.
Ashwini P, Recruitment Manager TEKSALT | A Pinch of Us Makes All the Difference [An E-Verified & WOSB Certified Company] Healthcare | Pharma | Manufacturing | Insurance | Financial | Retail Mobile: +91- 9945165022 | email: ashwini.p@teksalt.com www.teksalt.com
Posted 1 week ago
2.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Department: Information Technology Location: Indore Job Type: Full-Time Job Summary: We are seeking a skilled IT Automation Specialist to streamline and optimize our IT operations through effective automation solutions. The ideal candidate will be responsible for designing, implementing, and maintaining automation scripts, tools, and frameworks that enhance productivity, reduce manual intervention, and support scalability across IT systems and services. Key Responsibilities: Analyze current IT workflows, processes, and systems to identify opportunities for automation. Develop, test, and deploy automation scripts and tools using technologies like PowerShell, Python, Ansible, Terraform, or Bash. Implement and manage CI/CD pipelines to support DevOps practices. Automate infrastructure provisioning, software deployment, configuration management, and monitoring processes. Collaborate with cross-functional teams (DevOps, Network, Security, and Development) to understand automation needs. Maintain and update documentation related to automated processes and workflows. Monitor system performance and provide proactive solutions for system improvement. Ensure all automation complies with security standards, change management, and best practices. Troubleshoot issues related to automated systems and resolve in a timely manner. Stay current with industry trends and emerging technologies in IT automation and DevOps. Qualifications and Skills required: Bachelor's degree in Computer Science, Information Technology, or a related field. 2+ years of experience in IT operations or system administration with a focus on automation. Proficiency in scripting languages: PowerShell, Python, or Bash. Experience with DevOps tools: Jenkins, Git, Docker, Kubernetes, Ansible, Terraform, etc. Familiarity with cloud platforms like AWS, Azure, or Google Cloud. Strong understanding of networking, server administration, and system security practices. Excellent problem-solving skills and attention to detail. Preferred: Certifications such as AWS Certified DevOps Engineer, Microsoft Certified: Azure Administrator, or Red Hat Certified Engineer (RHCE). Experience with ServiceNow or other ITSM tools for workflow automation. Knowledge of Agile/Scrum methodology. Soft Skills: Strong analytical and communication skills. Ability to work independently and as part of a team. Time management and prioritization capabilities. Willingness to learn and adapt in a fast-paced environment.
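For illustration only (not from the posting): a small Python monitoring-and-reporting script of the kind this role would automate. The paths and threshold are assumptions, not requirements.

```python
# Report filesystems that are nearly full (illustrative monitoring/reporting).
import shutil
from pathlib import Path

THRESHOLD_PERCENT = 85  # assumed alerting threshold


def check_disks(paths=("/", "/var", "/home")) -> list:
    """Return warnings for mount points above the usage threshold."""
    warnings = []
    for path in paths:
        if not Path(path).exists():
            continue
        usage = shutil.disk_usage(path)
        percent = usage.used / usage.total * 100
        if percent > THRESHOLD_PERCENT:
            warnings.append(
                f"{path} is {percent:.1f}% full ({usage.free // 2**30} GiB free)"
            )
    return warnings


if __name__ == "__main__":
    for warning in check_disks():
        print(warning)
```

In practice a script like this would be wired into a scheduler or monitoring agent rather than run by hand.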
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Dear Candidate,
Greetings from TATA Consultancy Services. Job Openings at TCS.
Skill: Network Architect
Exp range: 11 yrs to 16 yrs
Role: Permanent Role
Job location: PAN INDIA
Current location: Anywhere In India
Interview mode: MS Teams
Please find the Job Description below.
Skills: Orchestration, Network Automation, uCPE (hyper-converged appliance), Linux bridge (Open vSwitch), virtual network (Neutron), SD-WAN, Firewall, Router, Juniper NFX devices. Juniper JNCIA/JNCIS/JNCIP certified. Candidate should have sound knowledge and L3-level working experience with SD-WAN deployments, Cisco vRouters, Juniper SRX, Fortinet, and Palo Alto firewalls. Expert in TCP/IP, L2 protocols, L3 protocols, and OSPF, BGP, MP-BGP configuration and troubleshooting. Candidate should have a good understanding of and experience with various hypervisors, vSwitch, and uCPE appliances. Candidate should have familiarity with handling various orchestrators (to manage network appliances). Candidate should have a sound understanding of network automation possibilities and be able to design a process flow. Familiarity with any of the following skill sets will be beneficial: Linux shell script, PowerShell, Python, Ansible, Go & Terraform.
If you are interested in the above opportunity, kindly share your updated resume to r.shruthi13@tcs.com immediately with the details below (Mandatory): Name: Contact No.: Email id: Total exp: Relevant Exp: Current organization: Current CTC: Expected CTC: Notice period:
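Purely as an illustration of the Python-based network automation the description mentions (not part of the TCS requirements): a minimal Netmiko sketch that pulls interface state from a device. It assumes the netmiko package is installed; the device type, address, and credentials are placeholders.

```python
# Fetch interface status from a network device over SSH (illustrative only).
from netmiko import ConnectHandler

device = {
    "device_type": "juniper_junos",  # assumed device type, e.g. an SRX
    "host": "192.0.2.10",            # documentation-range placeholder address
    "username": "netops",
    "password": "changeme",
}

with ConnectHandler(**device) as connection:
    output = connection.send_command("show interfaces terse")
    print(output)
```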
Posted 1 week ago
12.0 - 15.0 years
0 Lacs
Delhi, India
On-site
Greetings from TATA Consultancy Services. Job Openings at TCS.
Skill: NETWORK ARCHITECT
Exp range: 12-15 years
Role: Permanent Role
Job location: DELHI
Current location: Anywhere In India
Please find the Job Description below.
Orchestration, Network Automation, uCPE (hyper-converged appliance), Linux bridge (Open vSwitch), virtual network (Neutron), SD-WAN, Firewall, Router, Juniper NFX devices. Juniper JNCIA/JNCIS/JNCIP certified. Candidate should have sound knowledge and L3-level working experience with SD-WAN deployments, Cisco vRouters, Juniper SRX, Fortinet, and Palo Alto firewalls. Expert in TCP/IP, L2 protocols, L3 protocols, and OSPF, BGP, MP-BGP configuration and troubleshooting. Candidate should have a good understanding of and experience with various hypervisors, vSwitch, and uCPE appliances. Candidate should have familiarity with handling various orchestrators (to manage network appliances). Candidate should have a sound understanding of network automation possibilities and be able to design a process flow. Familiarity with any of the following skill sets will be beneficial: Linux shell script, PowerShell, Python, Ansible, Go & Terraform.
Thanks & Regards,
Priyanka
Talent Acquisition Group, Tata Consultancy Services
Posted 1 week ago
7.0 years
0 Lacs
Thane, Maharashtra, India
On-site
About Atos
Atos is a global leader in digital transformation with c. 78,000 employees and annual revenue of c. € 10 billion. European number one in cybersecurity, cloud and high-performance computing, the Group provides tailored end-to-end solutions for all industries in 68 countries. A pioneer in decarbonization services and products, Atos is committed to a secure and decarbonized digital for its clients. Atos is an SE (Societas Europaea) and listed on Euronext Paris. The purpose of Atos is to help design the future of the information space. Its expertise and services support the development of knowledge, education and research in a multicultural approach and contribute to the development of scientific and technological excellence. Across the world, the Group enables its customers and employees, and members of societies at large, to live, work and develop sustainably, in a safe and secure information space.
We are looking for a Principal Architect; please find the JD below.
Principal Architect - Automation
Location: Bangalore (Whitefield) or Chennai (Siruseri) or Pune (Talawade) or Mumbai (Airoli West)
Type of Hire: Full-Time
Job Description
The Architect is responsible for serving as a technical architect on large-scale enterprise architecture (EA) projects focused on tooling and automation solutions within the Hybrid Cloud and Infrastructure area. This position will work with the Atos Global Tooling & Automation team and Pre-sales Architects to provide consultation and strategic guidance to clients and other senior staff in addressing complex enterprise-level systems engineering and integration challenges. The role of the Architect is to bridge the communication gap between the technical engineering team and the non-technical specialists to drive successful implementation of the tooling scope. Through collaboration with teams involved from bid to end delivery, the Architect will cover the risk of tooling misalignment with stakeholder requirements and ensure that the solution fits the defined purpose.
Key Duties And Responsibilities
Architectural Designs: Producing High Level Designs describing the proposed end-to-end architecture and Low Level Designs describing the proposed implementation in detail, ensuring that designs meet both the business requirements and the platform strategy.
Project Management: Working with the Project Manager to identify the scope, deliverables, key tasks, and dependencies between tasks.
Workshop Management: Leading workshops with stakeholders, including technical and non-technical participants, to understand problem statements and requirements, produce designs or review technical solutions.
Technical Leadership: Working with the Head of Architecture to manage demand and architecture reviews. Evaluating, challenging, and recommending improvements to proposed technical solutions submitted by technical teams. Ensuring that designs and implementations proposed align with the platform engineering strategy. Contributing to the overall solution strategies and roadmaps.
Requirement Analysis: Working with stakeholders to gather, analyse, and document business requirements, translating them into functional specifications. Understanding the current mode of operation (CMO), including process, data, people, and technology.
Governance and Compliance: Ensuring that the designs proposed comply with organizational policies, standards, and regulatory requirements, including security.
Stakeholder Communication: Communicate technical concepts and project status to non-technical stakeholders, providing clear and concise updates. Act as a facilitator to bring together different stakeholders and experts to produce an end-to-end solution.
Training & Mentorship: Provide training and mentorship to development teams and junior architects, fostering a culture of continuous learning and development.
Technical Understanding And Experience
Good technical understanding and experience of DevOps technologies, including version control (e.g. Git), build automation tools (e.g. Argo CD, GitHub Actions), and development automation (Kubernetes, Docker, etc.), including Crossplane.
Good technical understanding and experience of cloud technologies (e.g. AWS, Azure, Google, etc.).
Good technical understanding and experience of virtualization (e.g. VMware).
Good technical understanding and experience of network technologies.
Good technical understanding and experience of security products and the principles of good security practice.
Good technical understanding and experience of automation tooling solutions (e.g. Ansible, TSSA, etc.).
Good technical understanding and experience of AI/ML and GenAI and their applications to IT operations.
Good technical understanding and experience of monitoring tools and AIOps.
Good technical understanding and experience of IT service management systems (e.g. ServiceNow).
Good technical understanding and experience of infrastructure technologies.
Soft Skills
Strong verbal and written communication skills in English. Excellent communication and interpersonal skills. Strong analytical and problem-solving abilities. Effective project management and organizational skills. Ability to work collaboratively in a team environment. Ability to work with a high degree of autonomy. Strong customer and stakeholder skills.
Qualifications
At least 7 years’ experience working as an architect is expected. A bachelor’s degree in computer science, information technology or a related field. Any relevant certificates to prove competency in the areas specified.
Here at Atos, diversity and inclusion are embedded in our DNA. Read more about our commitment to a fair work environment for all. Atos is a recognized leader in its industry across Environment, Social and Governance (ESG) criteria. Find out more on our CSR commitment. Choose your future. Choose Atos.
Posted 1 week ago
0 years
0 Lacs
Mumbai Metropolitan Region
On-site
As a leading financial services and healthcare technology company based on revenue, SS&C is headquartered in Windsor, Connecticut, and has 27,000+ employees in 35 countries. Some 20,000 financial services and healthcare organizations, from the world's largest companies to small and mid-market firms, rely on SS&C for expertise, scale, and technology.
Job Description
The Senior Python Software Engineer is responsible for designing, implementing and testing software solutions, automating delivery processes and continuously improving the system.
Key Responsibilities
Design and develop Python applications to meet project requirements, adhering to high quality and performance standards. Ensure the code is written according to best practices. Write and maintain clear and concise documentation for the codebase. Automate repetitive tasks to streamline the software delivery process. Facilitate communication and collaboration with other stakeholders. Create and manage CI/CD pipelines to ensure frequent and comprehensive code testing and timely, reliable releases. Integrate security practices throughout the software development lifecycle, including responding to security incidents and vulnerabilities. Assist with DevOps activities to accelerate the software delivery process. Provide Level 2 support for cloud solutions to investigate issues and find timely solutions. Stay up to date on industry best practices and new technologies.
Technical Skills
Advanced knowledge of at least Python and Bash, and familiarity with other programming languages like Java and Lua. Knowledge of web frameworks such as FastAPI, Flask or others. Knowledge of design patterns, object-oriented and functional programming. Understanding of relational and non-relational databases. Expertise with tools like Terraform, Helm, Ansible or Puppet. Knowledge of CI/CD platforms such as Jenkins and GitHub Actions. Understanding of Docker and K8s. Knowledge of and previous experience with cloud platforms like AWS, Azure, and Google Cloud Platform (GCP).
Soft Skills
Problem-Solving: The ability to think critically, analyse problems, and devise effective solutions. Communication: Strong communication skills to explain technical concepts and build consensus within the team. Adaptability: The willingness to learn, experiment, and embrace change as part of the continuous improvement process.
Unless explicitly requested or approached by SS&C Technologies, Inc. or any of its affiliated companies, the company will not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. SS&C Technologies is an Equal Employment Opportunity employer and does not discriminate against any applicant for employment or employee on the basis of race, color, religious creed, gender, age, marital status, sexual orientation, national origin, disability, veteran status or any other classification protected by applicable discrimination laws.
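For illustration only (not part of the SS&C description): a minimal FastAPI service sketch of the kind of Python web-framework work the technical skills list mentions. Route and model names are invented for the example.

```python
# Minimal FastAPI service with a health endpoint and one POST route (illustrative).
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="example-service")


class Item(BaseModel):
    name: str
    quantity: int = 1


@app.get("/health")
def health() -> dict:
    # A trivial liveness endpoint that monitoring or a load balancer could poll.
    return {"status": "ok"}


@app.post("/items")
def create_item(item: Item) -> dict:
    # A real service would persist the item; here it is simply echoed back.
    return {"name": item.name, "quantity": item.quantity}
```

Run locally with, for example, "uvicorn app:app --reload" (assuming the file is saved as app.py and uvicorn is installed).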
Posted 1 week ago
2.0 - 3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Category: Devops Job Type: Full Time Job Location: Bengaluru Years of Experience: 2 - 3 years Job Title: AWS DevOps Engineer Location: Bangalore Experience: 2 to 3 years Company: WizzyBox Private Ltd. Employment Type: Full-Time About WizzyBox Private Ltd. WizzyBox is a fast-growing technology solutions provider specializing in QA, Development, and DevOps Automation. We partner with enterprises and startups to drive digital transformation through cutting-edge tools and scalable cloud-native solutions. Our culture thrives on innovation, learning, and collaboration. Job Description We are seeking a passionate and skilled AWS DevOps Engineer with 2–3 years of hands-on experience to join our dynamic engineering team. The ideal candidate will be responsible for building and managing scalable infrastructure, automating deployment processes, and maintaining robust CI/CD pipelines. Key Responsibilities Design, deploy, and manage AWS cloud infrastructure. Automate provisioning and configuration using Infrastructure-as-Code tools. Set up, monitor, and manage CI/CD pipelines. Collaborate with development teams to ensure smooth code deployments. Administer and troubleshoot Kubernetes clusters and Docker containers. Manage Git repositories, branching strategies, and code versioning. Maintain system availability, performance, and security. Configure and manage RedHat-based systems and services. Required Skills Cloud Platforms: AWS (EC2, S3, IAM, RDS, CloudWatch, etc.) DevOps Tools: Jenkins, Git, GitHub/GitLab, Maven, Ansible/Terraform Containers & Orchestration: Docker, Kubernetes Linux Administration: RedHat / CentOS CI/CD: End-to-end pipeline creation and maintenance Version Control: Git Strong troubleshooting and problem-solving skills Good communication and documentation abilities Nice To Have AWS Certification (Associate or Professional level) Experience with monitoring tools like Prometheus, Grafana, ELK Knowledge of scripting languages (Shell, Python) Why Join WizzyBox? Dynamic startup environment with a focus on innovation Opportunities to work on diverse cloud-native projects Continuous learning and career growth Collaborative and inclusive team culture
Posted 1 week ago
0 years
0 Lacs
Bengaluru East, Karnataka, India
Remote
When you join Verizon
You want more out of a career. A place to share your ideas freely — even if they’re daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife.
What You’ll Be Doing...
You will contribute to creating, operating and supporting Cloud and DevOps Platform based Products/Services for mission- and business-critical applications within Verizon. Under the mentorship of senior applications staff, you will work as a junior team member on various products. Utilizing knowledge to provision, deploy and configure GCP services as per the solution architecture using CI/CD or GitOps automation. Managing, troubleshooting and automating fixes for GCP apps and working on the activities to support them. Good understanding of GenAI use cases and prompt engineering. Expertise in agentic workflows and LangChain implementation. Defining and building CI/CD pipelines for applications. Expertise in Python and its usage in cloud automation efforts.
What We’re Looking For...
You are curious about new technologies and the possibilities they create. You are driven and motivated to thrive in a dynamic work environment with different stakeholders/architects/business clients across large enterprise systems. You are good at analyzing business requirements and translating them into system requirements using the lens of customer experience. You are great at working in teams and can use your interpersonal skills to get your point across.
You'll Need To Have
Bachelor’s degree or one or more years of relevant experience required, demonstrated through work experience and/or military experience. One or more years of relevant work experience on the Linux platform. Experience in Python, Ansible & AWS Cloud. Knowledge of Agile & Scrum. Experience managing web servers and app servers running on Linux, with administration experience. Experience developing code in at least one high-level programming language. Experience working on IT DevOps tooling and CI/CD practices.
Even better if you have one or more of the following:
Experience with development in a high-performance, high-availability environment. Experience managing DevOps products like GitLab, Jira, Jenkins, Artifactory, etc. Excellent communication and presentation skills. Technology & cloud certification. AWS Certified SysOps Administrator Associate certification. Knowledge of AWS EKS, ECS, Fargate and Lambda services. One or more years of experience as a systems administrator in a systems operations role. Understanding of provisioning, operating, and managing AWS environments. Experience with the AWS CLI and SDKs/API tools. Understanding of network technologies and concepts as they relate to AWS. Understanding of security concepts with hands-on experience in implementing security controls and compliance requirements. If Verizon and this role sound like a fit for you, we encourage you to apply even if you don’t meet every “even better” qualification listed above.
Where you’ll be working
In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.
Scheduled Weekly Hours 40 Equal Employment Opportunity Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.
Posted 1 week ago
5.0 years
0 Lacs
Thane, Maharashtra, India
On-site
Key Responsibilities:
You will provide leadership in designing and implementing groundbreaking GPU computers that run demanding deep learning, high-performance computing, and computationally intensive workloads. We seek an expert to identify architectural changes and/or completely new approaches for accelerating our deep learning models. As an expert, you will help us with the strategic challenges we encounter, including compute, networking, and storage design for large-scale, high-performance workloads; effective resource utilization in a heterogeneous computing environment; evolving our private/public cloud strategy; capacity modelling; and growth planning across our products and services. As an architect you are responsible for converting business needs associated with AI/ML algorithms into a set of product goals covering workload scenarios, end-user expectations, compute infrastructure and time of execution; this should lead to a plan for making the algorithms production-ready. Benchmark and optimize the computer vision algorithms and the hardware accelerators for performance and quality KPIs. Optimize algorithms for optimal performance on GPU tensor cores. Collaborate with various teams to drive an end-to-end workflow from data curation and training to performance optimization and deployment. Provide technical leadership and expertise for project deliverables. Lead, mentor and manage the technical team.
Key Qualifications:
MS or PhD in Computer Science, Electrical Engineering, or a related field. A strong background in deployment of complex deep learning architectures. 5+ years of relevant experience in at least a few of the following areas is required in your work history: machine learning (with a focus on deep neural networks), including an understanding of DL fundamentals; experience adapting and training DNNs for various tasks; experience developing code for one or more of the DNN training frameworks (such as Caffe, TensorFlow or Torch); numerical analysis; performance analysis; model compression and optimization; and computer architecture. Strong data structures and algorithms know-how with excellent C/C++ programming skills. Hands-on expertise with PyTorch, TensorRT, cuDNN. Hands-on expertise with GPU computing (CUDA, OpenCL, OpenACC) and HPC (MPI, OpenMP). In-depth understanding of container technologies like Docker, Singularity, Shifter, Charliecloud. Proficient in Python programming and Bash scripting. Proficient in Windows, Ubuntu and CentOS operating systems. Excellent communication and collaboration skills. Self-motivated and able to find creative practical solutions to problems.
Good to have: Hands-on experience with HPC cluster job schedulers such as Kubernetes, SLURM, LSF. Familiarity with cloud computing architectures. Hands-on experience with software-defined networking and HPC cluster networking. Working knowledge of cluster configuration management tools such as Ansible, Puppet, Salt. Understanding of fast, distributed storage systems and Linux file systems for HPC workloads.
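For illustration only (not from the posting): a minimal PyTorch sketch of mixed-precision training, one common way to get matrix math onto GPU tensor cores as the responsibilities above describe. The model, data, and hyperparameters are dummies; it assumes PyTorch and, ideally, a CUDA-capable GPU.

```python
# Mixed-precision training steps so matmuls can use tensor cores (illustrative).
import torch
from torch import nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler(enabled=(device.type == "cuda"))

inputs = torch.randn(64, 512, device=device)          # dummy batch
targets = torch.randint(0, 10, (64,), device=device)  # dummy labels

for _ in range(3):  # a few dummy training steps
    optimizer.zero_grad()
    with torch.cuda.amp.autocast(enabled=(device.type == "cuda")):
        loss = loss_fn(model(inputs), targets)
    scaler.scale(loss).backward()  # scale the loss to avoid fp16 underflow
    scaler.step(optimizer)
    scaler.update()
```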
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Job Description: We are seeking a highly skilled and versatile IT Engineer to manage and maintain our infrastructure systems and enterprise applications. The ideal candidate will have hands-on experience with cloud platforms (AWS), microservices architecture, system management tools, and both Linux and Windows environments. This role requires deep technical knowledge, excellent problem-solving skills, and the ability to handle system-level responsibilities, backups, and security policies effectively.
Key Responsibilities:
Cloud & DevOps:
- Manage and maintain AWS infrastructure including EC2, S3, IAM, RDS, and networking.
- Deploy and manage microservices using Docker and Kubernetes.
- Handle container orchestration and automation scripts (CI/CD pipelines).
- Set up and manage Nginx and Apache2 servers.
System & Server Management:
- Oversee installation, configuration, and support of Windows and Ubuntu systems.
- Maintain and monitor local servers and ensure high availability.
- Implement security updates, patches, and hardening procedures.
- Handle system inventory and hardware/software lifecycle management.
Backup & Recovery:
- Manage automated data and database backup solutions.
- Perform regular backup tests and ensure disaster recovery plans are in place.
Employee Support & Policy Management:
- Define and implement system usage policies for employees.
- Provide support for user access, system issues, and software installations.
- Manage user onboarding/offboarding in IT systems.
Monitoring & Optimization:
- Monitor server performance and troubleshoot issues proactively.
- Implement tools for system and network health checks.
- Ensure optimal performance of infrastructure and services.
Required Skills & Qualifications:
- Proven experience with AWS and cloud infrastructure management.
- Strong knowledge of Docker, Kubernetes, and microservices architecture.
- Expertise in server configuration: Apache2, Nginx.
- Hands-on experience with both Windows and Ubuntu OS environments.
- Solid understanding of networking, firewalls, VPNs, DNS, and system security.
- Familiarity with database backup and restoration (MySQL/PostgreSQL/MongoDB).
- Experience with system inventory and asset tracking tools.
- Good documentation and policy drafting abilities.
- Excellent problem-solving and time management skills.
Preferred Qualifications:
- AWS Certified Solutions Architect or equivalent certification.
- Experience with tools like Ansible, Terraform, Jenkins, Git.
- Prior experience in managing internal employee systems or ERP.
Experience: 3-5 Years
Location: Indore
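For illustration only (not part of the posting): a small Python sketch of the automated database backup responsibility listed above: dump a PostgreSQL database and upload it to S3. The database name, bucket, and paths are placeholders; it assumes pg_dump is installed and boto3 has valid AWS credentials.

```python
# Dump a PostgreSQL database and upload the dump to S3 (illustrative sketch).
import subprocess
from datetime import datetime, timezone

import boto3

DB_NAME = "appdb"                 # placeholder database name
BUCKET = "example-backup-bucket"  # placeholder S3 bucket


def backup_database() -> str:
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    dump_path = f"/tmp/{DB_NAME}-{stamp}.sql"
    # check=True raises CalledProcessError if pg_dump fails.
    subprocess.run(["pg_dump", "--file", dump_path, DB_NAME], check=True)
    key = f"postgres/{DB_NAME}-{stamp}.sql"
    boto3.client("s3").upload_file(dump_path, BUCKET, key)
    return f"s3://{BUCKET}/{key}"


if __name__ == "__main__":
    print("Backup stored at", backup_database())
```

A production version would also compress the dump, rotate old backups, and verify restores, as the regular backup testing responsibility implies.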
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: DevOps Engineer Experience: 5 to 9 Years Company Name : Incedo Technology Role Overview: We are looking for a skilled DevOps Engineer with 5–9 years of hands-on experience in cloud infrastructure, automation, and deployment pipelines. The ideal candidate should have strong expertise in Terraform , AWS , EKS , Liquibase , and Kafka , with solid scripting and automation skills. Key Responsibilities: As a Technical Lead - DevOps Process at Incedo, you will be responsible for streamlining and optimizing the software development and deployment processes. You will work with development and operations teams to identify bottlenecks and inefficiencies in the development process and implement solutions to improve efficiency and reduce time-to-market. You will be skilled in tools such as Jenkins, Ansible, or Docker and have experience with continuous integration and continuous deployment (CI/CD) methodologies. Roles & Responsibilities: • Developing and implementing DevOps processes and methodologies • Collaborating with other teams to integrate DevOps into business strategies and applications • Providing guidance and mentorship to junior DevOps process specialists • Troubleshooting and resolving DevOps issues • Staying up-to-date with industry trends and best practices in DevOps Skills Requirements: • Experience with DevOps tools such as Jenkins, Git, or Docker. • Understanding of agile software development methodologies and continuous integration/continuous deployment (CI/CD) pipelines. • Familiarity with cloud infrastructure technologies such as load balancers, auto-scaling, or containers. • Knowledge of automation scripting and configuration management tools such as Ansible, Chef, or Puppet. • Must have excellent communication skills and be able to communicate complex technical information to non-technical stakeholders in a clear and concise manner. • Must understand the company's long-term vision and align with it. • Should be open to new ideas and be willing to learn and develop new skills. • Should also be able to work well under pressure and manage multiple tasks and priorities. Qualifications • 5-9 years of work experience in relevant field • B.Tech/B.E/M.Tech or MCA degree from a reputed university. Computer science background is preferred
Posted 1 week ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: DevOps Lead
YOE: 8+ Years
Location: Chennai
Notice Period: Immediate to 30 days
Job Description: What does a successful Senior DevOps Engineer do at Fiserv? This role’s focus will be on contributing to and enhancing our DevOps environment within the Issuer Solution group, where our cross-functional Scrum teams are delivering solutions built on cutting-edge mobile technology and products. You will be expected to support across the wider business unit, leading DevOps practices and initiatives.
What will you do: · Build, manage, and deploy CI/CD pipelines. · Work as a DevOps Engineer with Helm charts, Rundeck, and OpenShift. · Strive for continuous improvement and build continuous integration, continuous delivery, and continuous deployment pipelines. · Implement various development, testing, and automation tools, and IT infrastructure. · Optimize and automate release/development cycles and processes. · Be part of and help promote our DevOps culture. · Identify and implement continuous improvements to the development practice.
What you must have: · 3+ years of experience in DevOps with hands-on experience in the following: - Writing automation scripts for deployments and housekeeping using shell scripts (Bash) and Ansible playbooks - Building Docker images and running/managing Docker instances - Building Jenkins pipelines using Groovy scripts - Working knowledge of Kubernetes, including application deployments, managing application configurations and persistent volumes · A good understanding of infrastructure as code · Ability to write and update documentation · A logical, process-oriented approach to problems and troubleshooting · Ability to collaborate with multiple development teams
Thanks & Regards,
Yuvaraj U
Client Engagement Executive | Vy Systems
Mobile: 9150019640 | Email: yuvaraj@vysystems.com www.vysystems.com
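For illustration only (not from the posting): building and running a Docker image from Python with the Docker SDK, one of the hands-on tasks listed above. The image tag and build path are placeholders; it assumes the docker Python package and a local Docker daemon.

```python
# Build an image from the local Dockerfile, run it, and print its logs (illustrative).
import docker

client = docker.from_env()

# Build an image from the Dockerfile in the current directory.
image, build_logs = client.images.build(path=".", tag="myapp:latest")
for chunk in build_logs:
    if "stream" in chunk:
        print(chunk["stream"], end="")

# Run the image detached, wait for it to exit, then print its logs.
container = client.containers.run("myapp:latest", detach=True)
container.wait()
print(container.logs().decode())
container.remove()
```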
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Haryana, India
On-site
Job Title: DevOps Engineer
Experience: 5 to 9 Years
Company Name: Incedo Technology
Role Overview: We are looking for a skilled DevOps Engineer with 5-9 years of hands-on experience in cloud infrastructure, automation, and deployment pipelines. The ideal candidate should have strong expertise in Terraform, AWS, EKS, Liquibase, and Kafka, with solid scripting and automation skills.
Key Responsibilities: As a Technical Lead - DevOps Process at Incedo, you will be responsible for streamlining and optimizing the software development and deployment processes. You will work with development and operations teams to identify bottlenecks and inefficiencies in the development process and implement solutions to improve efficiency and reduce time-to-market. You will be skilled in tools such as Jenkins, Ansible, or Docker and have experience with continuous integration and continuous deployment (CI/CD) methodologies.
Roles & Responsibilities: Developing and implementing DevOps processes and methodologies. Collaborating with other teams to integrate DevOps into business strategies and applications. Providing guidance and mentorship to junior DevOps process specialists. Troubleshooting and resolving DevOps issues. Staying up-to-date with industry trends and best practices in DevOps.
Skills Requirements: Experience with DevOps tools such as Jenkins, Git, or Docker. Understanding of agile software development methodologies and continuous integration/continuous deployment (CI/CD) pipelines. Familiarity with cloud infrastructure technologies such as load balancers, auto-scaling, or containers. Knowledge of automation scripting and configuration management tools such as Ansible, Chef, or Puppet. Must have excellent communication skills and be able to communicate complex technical information to non-technical stakeholders in a clear and concise manner. Must understand the company's long-term vision and align with it. Should be open to new ideas and be willing to learn and develop new skills. Should also be able to work well under pressure and manage multiple tasks and priorities.
Qualifications: 5-9 years of work experience in a relevant field. B.Tech/B.E/M.Tech or MCA degree from a reputed university. A computer science background is preferred.
Posted 1 week ago
6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
This is an incredible opportunity to be part of a company that has been at the forefront of AI and high-performance data storage innovation for over two decades. DataDirect Networks (DDN) is a global market leader renowned for powering many of the world's most demanding AI data centers, in industries ranging from life sciences and healthcare to financial services, autonomous cars, government, academia, research and manufacturing. "DDN's A3I solutions are transforming the landscape of AI infrastructure." – IDC. “The real differentiator is DDN. I never hesitate to recommend DDN. DDN is the de facto name for AI Storage in high performance environments” - Marc Hamilton, VP, Solutions Architecture & Engineering | NVIDIA. DDN is the global leader in AI and multi-cloud data management at scale. Our cutting-edge data intelligence platform is designed to accelerate AI workloads, enabling organizations to extract maximum value from their data. With a proven track record of performance, reliability, and scalability, DDN empowers businesses to tackle the most challenging AI and data-intensive workloads with confidence. Our success is driven by our unwavering commitment to innovation, customer-centricity, and a team of passionate professionals who bring their expertise and dedication to every project. This is a chance to make a significant impact at a company that is shaping the future of AI and data management. Our commitment to innovation, customer success, and market leadership makes this an exciting and rewarding role for a driven professional looking to make a lasting impact in the world of AI and data storage.
Job Description
We are looking for a Senior Software Engineer in Test for our Infinia Storage engineering team who will design, build and deliver strategic and modern components of the Quality Engineering process and infrastructure for the Infinia engineering organization. This crucial member of QE will rely on their software engineering skills to develop software automation tools and infrastructure that enable repeatable and extensible tests and customer-focused simulated scenarios for all phases of testing data storage/intelligence subsystems. They will evaluate, recommend, and implement methods to expedite testing cycles and improve overall test and product quality. Drive design discussions, build prototypes and contribute to delivering high-quality Test and Engineering Velocity solutions. Conduct code reviews and improve the scalability, stability and reliability of tests. Collaborate with teammates as a unified group of passionate engineers in an outcome-oriented environment. Design test plans and test cases, leveraging test automation. Review acceptance criteria, test cases and test automation code; set up environments; and communicate regularly to the project team on defects, issues, and QE status. Coordinate testing efforts with other project teams across the globe. Work closely with the Test Architect, Test Leads, Test Engineers and Developers/Designers to understand and provide solutions for any challenges that could impact the delivery of test automation. Participate in a team on-call rotation providing seven-day-a-week out-of-hours coverage, including the provision of after-hours and weekend support work when required.
Qualifications: BS/MS/PhD in Computer Science, Computer Engineering, Statistics, Mathematics or equivalent degree/experience.
6+ years of Software Development or Software Development for Test experience, preferably in technology domains involving distributed concurrent systems, data or storage systems. Experience with QE methodology and functional and structural testing techniques (Agile a plus). Must have experience in Python or related modern high-level languages, pytest and Bash. Experience in Ansible is a plus. Good problem-solving, organizational, interpersonal and team skills. Ability to work seamlessly as part of a multi-site, multicultural engineering team. Self-motivated, passionate, and driven to achieve committed milestones. Strong team player with excellent written and verbal skills. Ability to work in a fast-paced development environment with a broad scope of evolving responsibilities.
Preferred Experience
Ability to read and understand coding languages and logic, including C++ and Go (Golang). Knowledge of parallel file system solutions (Lustre, GPFS), NVM storage technology or distributed key-value storage systems. Knowledge of object storage and its usage. High-performance computing system installation and management experience is helpful for performing the day-to-day activities of this role. This position requires participation in an on-call rotation to provide after-hours support as needed.
DDN
Our team is highly motivated and focused on engineering excellence. We look for individuals who appreciate challenging themselves and thrive on curiosity. Engineers are encouraged to work across multiple areas of the company. We operate with a flat organizational structure. All employees are expected to be hands-on and to contribute directly to the company’s mission. Leadership is given to those who show initiative and consistently deliver excellence. Work ethic and strong prioritization skills are important. All engineers and researchers are expected to have strong communication skills. They should be able to concisely and accurately share knowledge with their teammates.
Interview Process
After submitting your application, the team reviews your CV and statement of exceptional work. If your application passes this stage, you will be invited to a 30-minute interview (“phone interview”) during which a member of our team will ask some basic questions. If you clear the initial phone interview, you will enter the main process, which consists of four technical interviews: a coding assessment in a language of your choice; systems design: translate high-level requirements into a scalable, fault-tolerant service; systems hands-on: demonstrate practical skills in a live problem-solving session; and a project deep-dive: present your past exceptional work to a small audience. This is followed by a meet and greet with the wider team. Our goal is to finish the main process within one week. We don’t rely on recruiters for assessments. Every application is reviewed by a member of our technical team.
DataDirect Networks, Inc. is an Equal Opportunity/Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity, gender expression, transgender, sex stereotyping, sexual orientation, national origin, disability, protected Veteran Status, or any other characteristic protected by applicable federal, state, or local law.
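For illustration only (not part of the job description): a small parametrized pytest sketch in the Python/pytest style the qualifications call for. The function under test is invented for the example.

```python
# A parametrized pytest check for a hypothetical helper (illustrative only).
import pytest


def chunk(data: list, size: int) -> list:
    """Split a list into fixed-size chunks (hypothetical code under test)."""
    if size <= 0:
        raise ValueError("size must be positive")
    return [data[i:i + size] for i in range(0, len(data), size)]


@pytest.mark.parametrize(
    "data,size,expected",
    [
        ([1, 2, 3, 4], 2, [[1, 2], [3, 4]]),
        ([1, 2, 3], 2, [[1, 2], [3]]),
        ([], 3, []),
    ],
)
def test_chunk(data, size, expected):
    assert chunk(data, size) == expected


def test_chunk_rejects_non_positive_size():
    with pytest.raises(ValueError):
        chunk([1, 2, 3], 0)
```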
Posted 1 week ago
8.0 years
0 Lacs
Delhi, India
On-site
About NomiSo India: Nomiso is a product and services engineering company. We are a team of Software Engineers, Architects, Managers, and Cloud Experts with expertise in Technology and Delivery Management.
Our mission is to empower and enhance the lives of our customers through simple solutions for their complex business problems.
At NomiSo, we encourage an entrepreneurial spirit - to learn, grow, and improve. A great workplace thrives on ideas and opportunities. That is a part of our DNA. We're in pursuit of colleagues who share similar passions, are nimble, and thrive when challenged. We offer a positive, stimulating, and fun environment - with opportunities to grow, a fast-paced approach to innovation, and a place where your views are valued and encouraged. We invite you to push your boundaries and join us in fulfilling your career aspirations!
What You Can Expect from Us: We work hard to provide our team with the best opportunities to grow their careers. You can expect to be a pioneer of ideas, a student of innovation, and a leader of thought. Innovation and thought leadership are at the centre of everything we do, at all levels of the company. Let's make your career great!
Position Overview: We are looking for hands-on L3 Support for the challenging and fun-filled work of building a workflow automation system to simplify current manual work.
Roles and Responsibilities:
Install, configure, and maintain OpenShift environments
Ensure high availability and reliability of the OpenShift platform
Monitor and optimize cluster performance
Hands-on experience with ODF (Ceph storage)
Implement security best practices within the cluster
Troubleshoot and resolve issues within the OpenShift environment
Collaborate with development teams for seamless application deployment
Document procedures and provide training to team members
Conduct regular backups and disaster recovery operations
Must Have Skills:
8-12+ years of experience administering Kubernetes or OpenShift environments
Strong understanding of containerization technologies
Experience with CI/CD tools and practices
Knowledge of networking and security within containerized environments
Excellent troubleshooting and problem-solving skills
Strong written and verbal communication skills
Core Tools & Technology Stack: Red Hat OpenShift, ODF (Ceph storage), Loki stack, containers, Docker, CI/CD pipelines, Ansible, Linux administration, Git, Prometheus/Grafana, SDN, OVN-Kubernetes, networking, shell scripting
Qualification: BE/B.Tech or equivalent degree in Computer Science or a related field
Location: Delhi-NCR
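Purely as an illustration of a routine L3 task like an OpenShift cluster health check, the sketch below wraps a few standard oc CLI queries in Python. It assumes the oc binary is installed and already logged in to a cluster; in practice this kind of check would more likely live in an Ansible playbook or a scheduled job.

# Illustrative sketch: wrap standard OpenShift CLI health checks in Python.
# Assumes `oc` is installed and logged in; `oc get nodes/pods -o json` are
# standard commands, but adapt to your cluster and automation tooling.
import json
import subprocess


def oc_json(*args: str) -> dict:
    """Run an `oc` command with JSON output and return the parsed result."""
    out = subprocess.run(
        ["oc", *args, "-o", "json"], check=True, capture_output=True, text=True
    )
    return json.loads(out.stdout)


def unready_nodes() -> list[str]:
    nodes = oc_json("get", "nodes")
    bad = []
    for node in nodes["items"]:
        ready = any(
            c["type"] == "Ready" and c["status"] == "True"
            for c in node["status"]["conditions"]
        )
        if not ready:
            bad.append(node["metadata"]["name"])
    return bad


def failing_pods() -> list[str]:
    pods = oc_json(
        "get", "pods", "--all-namespaces",
        "--field-selector=status.phase!=Running,status.phase!=Succeeded",
    )
    return [f'{p["metadata"]["namespace"]}/{p["metadata"]["name"]}' for p in pods["items"]]


if __name__ == "__main__":
    print("Unready nodes:", unready_nodes() or "none")
    print("Problem pods:", failing_pods() or "none")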
Posted 1 week ago
0.0 - 2.0 years
0 Lacs
Udaipur, Rajasthan
On-site
Location: Udaipur (Full-Time | In-Office)
Experience: 2–4 Years (Preferred)
Type: Full-Time, Permanent
Role Overview: We are looking for a skilled and motivated DevOps Developer/Engineer to join our team in Udaipur. The ideal candidate will be responsible for automating infrastructure, deploying applications, monitoring systems, and improving development and operational processes across the organization.
Key Responsibilities:
Design, implement, and manage CI/CD pipelines using tools such as Jenkins, GitHub Actions, or GitLab CI
Automate infrastructure provisioning and management using tools like Terraform, Ansible, or similar
Deploy and monitor cloud infrastructure (AWS, Azure, or GCP)
Work with containerization tools like Docker and Kubernetes
Collaborate with cross-functional teams to ensure smooth code releases and deployments
Monitor application performance, troubleshoot issues, and improve deployment processes
Implement security and backup best practices across the infrastructure
Stay up to date with the latest DevOps trends, tools, and best practices
Required Skills & Qualifications:
Bachelor's degree in Computer Science, Engineering, or a related field
2+ years of experience in DevOps
Proficiency in cloud platforms like AWS, Azure, or GCP
Strong scripting skills (Bash, Python, or similar)
Good understanding of system/network administration and monitoring tools
Experience working in Agile/Scrum environments
Familiarity with microservices architecture
Knowledge of security and compliance standards in cloud environments
If you're ready to take the next step in your DevOps career, apply now.
Job Types: Full-time, Permanent
Benefits: Flexible schedule, Paid time off
Ability to commute/relocate: Udaipur City, Rajasthan: Reliably commute or plan to relocate before starting work (Preferred)
Experience: DevOps: 2 years (Preferred)
Location: Udaipur City, Rajasthan (Preferred)
Work Location: In person
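As one illustration of the scripting and cloud-monitoring work this role describes, here is a minimal boto3 sketch that reports CloudWatch alarms currently in ALARM state and EC2 instances failing status checks. The region is a placeholder, and credentials are assumed to come from the local AWS configuration.

# Minimal monitoring sketch using boto3 (assumes AWS credentials and region are
# configured locally, e.g. via environment variables or ~/.aws/config).
import boto3


def alarms_in_alarm(region: str = "ap-south-1") -> list[str]:
    cw = boto3.client("cloudwatch", region_name=region)
    resp = cw.describe_alarms(StateValue="ALARM")
    return [a["AlarmName"] for a in resp["MetricAlarms"]]


def impaired_instances(region: str = "ap-south-1") -> list[str]:
    ec2 = boto3.client("ec2", region_name=region)
    resp = ec2.describe_instance_status()
    bad = []
    for st in resp["InstanceStatuses"]:
        if (st["InstanceStatus"]["Status"] != "ok"
                or st["SystemStatus"]["Status"] != "ok"):
            bad.append(st["InstanceId"])
    return bad


if __name__ == "__main__":
    print("Alarms firing:", alarms_in_alarm() or "none")
    print("Impaired instances:", impaired_instances() or "none")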
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Job Title: L3 Engineer – Data Center (Application Centric Infrastructure) OFFROLL
🧠 Experience Level: 6 to 10 years
🖥️ Domain: Data Center / Cisco ACI / Network Infrastructure
💼 Employment Type: Full-Time | Permanent
🧩 About the Role: We're looking for a highly skilled L3 Engineer with deep expertise in Cisco Application Centric Infrastructure (ACI) to join our data center operations team. You'll play a crucial role in managing and optimizing large-scale ACI deployments, supporting mission-critical infrastructure, and acting as the escalation point for complex issues.
🔧 Key Responsibilities:
Manage and troubleshoot complex Cisco ACI environments (Tenants, EPGs, Contracts, L4-L7 services)
Operate and maintain Cisco Nexus switches (5K/7K/9K series) in a data center fabric
Design, implement, and support VXLAN-based overlays and EVPN in the data center
Manage and optimize SD-WAN environments, ensuring secure and resilient branch-to-core connectivity
Configure and troubleshoot BGP and OSPF routing protocols in spine-leaf and enterprise environments
Perform root cause analysis for high-severity incidents and provide long-term fixes
Collaborate with architecture teams to improve the ACI policy model, segmentation, and fabric upgrades
Support integration of virtualization (e.g., VMware) and security solutions with ACI
Contribute to network automation using tools like Python, Ansible, and REST APIs (optional but preferred)
Maintain documentation including network diagrams, SOPs, change records, and incident reports
Participate in on-call rotation for P1/P2 issues and change implementations
✅ Required Skills & Qualifications:
Bachelor's degree in Computer Science, Engineering, or a related field
6–10 years of experience in data center networking with hands-on ACI experience
📩 Apply Now: If you're ready to take your ACI skills to the next level and be part of a forward-thinking infrastructure team, apply today or send your CV to [hr.telecom1@oacplgroup.com]
#Hiring #CiscoACI #DataCenterJobs #NetworkingJobs #L3Engineer #InfrastructureEngineer #NetworkArchitect #ACI #CiscoJobs #TechJobs #CCNP #CCIE #DataCenterEngineering
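For the network-automation item above, here is a small hedged sketch that authenticates to a Cisco APIC controller over its REST API and lists tenants. The APIC hostname and credentials are placeholders; the endpoints shown (aaaLogin.json and the fvTenant class query) are the standard documented ones, but verify them against your APIC version before use.

# Sketch: query the Cisco APIC REST API for the list of tenants.
# URL and credentials are placeholders; verify endpoints on your APIC version.
import requests

APIC = "https://apic.example.local"      # placeholder controller
USER, PASSWORD = "admin", "changeme"     # placeholder credentials


def apic_login(session: requests.Session) -> None:
    payload = {"aaaUser": {"attributes": {"name": USER, "pwd": PASSWORD}}}
    # Lab sketch only: TLS verification disabled for a self-signed APIC cert.
    resp = session.post(f"{APIC}/api/aaaLogin.json", json=payload, verify=False)
    resp.raise_for_status()  # the APIC-cookie token is stored on the session


def list_tenants(session: requests.Session) -> list[str]:
    resp = session.get(f"{APIC}/api/node/class/fvTenant.json", verify=False)
    resp.raise_for_status()
    data = resp.json()["imdata"]
    return [item["fvTenant"]["attributes"]["name"] for item in data]


if __name__ == "__main__":
    with requests.Session() as s:
        apic_login(s)
        print("Tenants:", list_tenants(s))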
Posted 1 week ago
6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Our team members are at the heart of everything we do. At Cencora, we are united in our responsibility to create healthier futures, and every person here is essential to us being able to deliver on that purpose. If you want to make a difference at the center of health, come join our innovative company and help us improve the lives of people and animals everywhere. Apply today!
Job Details
Cencora is looking for a Mid-Level SQL Server Database Administrator to join our Data Warehouse Team in our 3rd Party Logistics Division. This role is 40% SQL Server / Windows administration, 40% supporting day-to-day operational change requests, and 20% new development work. You will work closely with the Data Warehouse, Business Intelligence, EDI, and Account Management teams; lessons learned from this activity will provide a foundation for your career goals and exciting opportunities to take our operations farther with new technologies and methods. If you love a fast-paced and challenging work environment with many future world-wide support opportunities, you may be our ideal candidate.
Shift Time: 2:00 PM to 11:00 PM IST
Primary Duties And Responsibilities
Hands-on experience in Microsoft SQL Server installation, configuration, performance tuning, maintenance, and database administration on production servers
Build out new code management, release, and control procedures
Design and implement high availability and disaster recovery solutions
Maintain backup & recovery plans (a brief monitoring sketch follows this posting)
Develop centralized performance and security monitoring methods
Troubleshoot SSIS package failures
Set up new inbound and outbound file processing requests
Participate in the on-call rotation schedule
Perform multiple Windows Server and SQL upgrades/migrations
Work with supporting vendors, application owners, and infrastructure teams
Be well organized and focused, with good communication skills
Work within the Windows environment to keep SQL Server environments compliant
Contribute to new cloud platform choices
Requirements
6+ years - SQL Server – T-SQL
4+ years - SSIS development and support
4+ years - SQL Server administration
4+ years - Windows Server administration
4+ years - Data warehouse environment
One of the following: PowerShell (3+ years) or C# (3+ years)
Nice To Have
3rd Party Logistics experience is a major plus
PowerShell
AS400, RPG knowledge
Windows Server administration
Azure
Educational Qualifications
Bachelor's Degree in Computer Science, Information Technology, or any other related discipline, or equivalent related experience.
Preferred Certifications
Salesforce Certified Administrator
Microsoft Certified Systems Administrator (MCSA)
Microsoft Certified IT Professional
ITIL, ITSM Certifications
Work Experience
2+ years of directly related or relevant experience, preferably in application support or system/application/database administration.
Skills & Knowledge
Behavioral Skills: Critical Thinking, Detail Oriented, Interpersonal Communication, Learning Agility, Problem Solving, Time Management
Technical Skills: Identity & Access Management; Database Administration; IT Support such as Software & Hardware Installation and Troubleshooting; Software Validation; Systems Integration; IT Regulatory Compliance such as SOX Compliance
Tools Knowledge:
Software Configuration Management Tools like Ansible, Puppet
Citrix technologies like XenDesktop, XenApp, XenServer
Operating Systems & Servers like Windows, Linux, Citrix, IBM, Oracle, SQL
Enterprise Resource Planning (ERP) Systems like Sage, ASW, SAP
Software like Case Management Systems, HR Information Systems, Kronos (Timekeeping Software), PHS Health and Safety Management System
Java Frameworks like JDBC, Spring, ORM Solutions, JPA, JEE, JMS, Gradle, Object Oriented Design
Microsoft Office Suite
Relational Database Management System (RDBMS) Software
Customer Relationship Management (CRM) Systems like Salesforce Marketing Cloud, Sales Cloud
Internet Protocols like DNS, HTTP, LDAP, SMTP, Easy DNS, No IP
What Cencora offers
Benefit offerings outside the US may vary by country and will be aligned to local market practice. The eligibility and effective date may differ for some benefits and for team members covered under collective bargaining agreements.
Full time
Affiliated Companies: Integrated Commercialization, LLC
Equal Employment Opportunity
Cencora is committed to providing equal employment opportunity without regard to race, color, religion, sex, sexual orientation, gender identity, genetic information, national origin, age, disability, veteran status or membership in any other class protected by federal, state or local law. The company's continued success depends on the full and effective utilization of qualified individuals. Therefore, harassment is prohibited and all matters related to recruiting, training, compensation, benefits, promotions and transfers comply with equal opportunity principles and are non-discriminatory.
Cencora is committed to providing reasonable accommodations to individuals with disabilities during the employment process which are consistent with legal requirements. If you wish to request an accommodation while seeking employment, please call 888.692.2272 or email hrsc@cencora.com. We will make accommodation determinations on a request-by-request basis. Messages and emails regarding anything other than accommodation requests will not be returned.
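To give a concrete, hedged flavor of the routine backup monitoring mentioned in this posting, the sketch below queries msdb.dbo.backupset through pyodbc to report the most recent full backup per database. The server name and connection string are placeholders; adjust the driver and authentication to your environment.

# Hedged sketch: report the most recent full backup per database via pyodbc.
# Server name and driver are placeholders; adapt to your environment.
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sqlprod01.example.local;"   # placeholder server
    "DATABASE=msdb;"
    "Trusted_Connection=yes;"
)

QUERY = """
SELECT d.name, MAX(b.backup_finish_date) AS last_full_backup
FROM sys.databases d
LEFT JOIN msdb.dbo.backupset b
       ON b.database_name = d.name AND b.type = 'D'   -- 'D' = full backup
WHERE d.name <> 'tempdb'
GROUP BY d.name
ORDER BY last_full_backup;
"""


def main() -> None:
    with pyodbc.connect(CONN_STR) as conn:
        for name, last_full in conn.cursor().execute(QUERY):
            print(f"{name:30s} last full backup: {last_full or 'NEVER'}")


if __name__ == "__main__":
    main()

In practice the same query is often scheduled through SQL Agent or PowerShell; the Python wrapper here is only one way to surface the result in a monitoring job.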
Posted 1 week ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role: DevOps Lead
Skills: DevOps Engineer with OpenShift
Experience: 8-12 years
Location: Chennai
Notice Period: Immediate joiners preferred
What will you do:
Build, manage, and deploy CI/CD pipelines
Work as a DevOps Engineer with Helm charts, Rundeck, and OpenShift
Strive for continuous improvement and build continuous integration, continuous delivery, and continuous deployment pipelines
Implement various development, testing, and automation tools and IT infrastructure
Optimize and automate release/development cycles and processes
Be part of and help promote our DevOps culture
Identify and implement continuous improvements to the development practice
What you must have:
3+ years of experience in DevOps with hands-on experience in the following: writing automation scripts for deployments and housekeeping using shell scripts (Bash) and Ansible playbooks; building Docker images and running/managing Docker instances; building Jenkins pipelines using Groovy scripts; working knowledge of Kubernetes, including application deployments, managing application configurations, and persistent volumes
Good understanding of infrastructure as code
Ability to write and update documentation
A logical, process-oriented approach to problems and troubleshooting
Ability to collaborate with multiple development teams
What you are preferred to have:
8+ years of development experience
Jenkins administration experience
Hands-on experience in building and deploying Helm charts
Process Skills: Should have worked in Agile projects
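As a rough illustration of the Helm work mentioned above, here is a minimal Python wrapper around two standard helm CLI checks (lint and a dry template render). The chart path and values file are placeholders, and this is only one possible way to gate a chart before deployment.

# Hedged sketch: pre-deployment checks for a Helm chart, wrapped in Python.
# Assumes the `helm` CLI is installed; chart path and values file are placeholders.
import subprocess
import sys

CHART_PATH = "./charts/my-service"                      # placeholder chart
VALUES_FILE = "./charts/my-service/values-prod.yaml"    # placeholder values


def run(cmd: list[str]) -> int:
    print("+", " ".join(cmd))
    return subprocess.run(cmd).returncode


def main() -> None:
    # Static checks: chart structure, then a dry render against prod values.
    rc = run(["helm", "lint", CHART_PATH])
    rc |= run(["helm", "template", "my-service", CHART_PATH, "--values", VALUES_FILE])
    sys.exit(rc)


if __name__ == "__main__":
    main()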
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
HCLTech is hiring an MLOps Engineer for the Chennai location.
Job Overview: We are looking for an experienced MLOps Engineer to help deploy, scale, and manage machine learning models in production environments. You will work closely with data scientists and engineering teams to automate the machine learning lifecycle, optimize model performance, and ensure smooth integration with data pipelines.
Experience Required: 6 to 10 years
Location: Chennai
Notice Period: Immediate/30 days
Key Responsibilities:
Transform prototypes into production-grade models
Assist in building and maintaining machine learning pipelines and infrastructure across cloud platforms such as AWS, Azure, and GCP
Develop REST APIs or FastAPI services for model serving, enabling real-time predictions and integration with other applications
Collaborate with data scientists to design and develop drift detection and accuracy measurements for live deployed models
Collaborate with data governance and technical teams to ensure compliance with engineering standards
Maintain models in production: collaborate with data scientists and engineers to deploy, monitor, update, and manage models in production
Manage the full CI/CD cycle for live models, including testing and deployment
Develop logging, alerting, and mitigation strategies for handling model errors and optimizing performance
Troubleshoot and resolve issues related to ML model deployment and performance
Support both batch and real-time integrations for model inference, ensuring models are accessible through APIs or scheduled batch jobs, depending on the use case
Contribute to the AI platform and engineering practices: help develop and maintain the AI infrastructure, ensuring models are scalable, secure, and optimized for performance
Collaborate with the team to establish best practices for model deployment, version control, monitoring, and continuous integration/continuous deployment (CI/CD)
Drive the adoption of modern AI/ML engineering practices and help enhance the team's MLOps capabilities
Develop and maintain Flask or FastAPI-based microservices for serving models and managing model APIs
Minimum Required Skills:
Bachelor's degree in Computer Science, Analytics, Mathematics, or Statistics
Strong experience in Python, SQL, and PySpark
Solid understanding of containerization technologies (Docker, Podman, Kubernetes)
Proficiency in CI/CD pipelines, model monitoring, and MLOps platforms (e.g., AWS SageMaker, Azure ML, MLflow)
Proficiency in cloud platforms, specifically AWS, Azure, and GCP
Familiarity with ML frameworks such as TensorFlow, PyTorch, and scikit-learn
Familiarity with batch processing integration for large-scale data pipelines
Experience with serving models using FastAPI, Flask, or similar frameworks for real-time inference
Certifications in AWS, Azure, or ML technologies are a plus
Experience with Databricks is highly valued
Strong problem-solving and analytical skills
Ability to work in a team-oriented, collaborative environment
Tools and Technologies:
Model Development & Tracking: TensorFlow, PyTorch, scikit-learn, MLflow, Weights & Biases
Model Packaging & Serving: Docker, Kubernetes, FastAPI, Flask, ONNX, TorchScript
CI/CD & Pipelines: GitHub Actions, GitLab CI, Jenkins, ZenML, Kubeflow Pipelines, Metaflow
Infrastructure & Orchestration: Terraform, Ansible, Apache Airflow, Prefect
Cloud & Deployment: AWS, GCP, Azure, Serverless (Lambda, Cloud Functions)
Monitoring & Logging: Prometheus, Grafana, ELK Stack, WhyLabs, Evidently AI, Arize
Testing & Validation: Pytest, unittest, Pydantic, Great Expectations
Feature Store & Data Handling: Feast, Tecton, Hopsworks, Pandas, Spark, Dask
Message Brokers & Data Streams: Kafka, Redis Streams
Vector DB & LLM Integrations (optional): Pinecone, FAISS, Weaviate, LangChain, LlamaIndex, PromptLayer
Interested candidates are requested to share their resumes at paridhnya_dhawankar@hcltech.com with the details below:
Overall Experience:
Current and Expected CTC:
Current and Preferred Location:
Notice Period:
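Since FastAPI-based model serving comes up repeatedly in this role, here is a minimal hedged sketch of such a service. The model file path, feature layout, and the /predict contract are illustrative placeholders rather than a prescribed design.

# Hedged sketch: a minimal FastAPI service serving a pickled scikit-learn model.
# Model path, feature names, and the /predict contract are placeholders.
from fastapi import FastAPI
from pydantic import BaseModel
import joblib

app = FastAPI(title="model-serving-sketch")
model = joblib.load("model.joblib")  # placeholder: a trained sklearn estimator


class PredictRequest(BaseModel):
    features: list[float]  # flat feature vector for a single example


class PredictResponse(BaseModel):
    prediction: float


@app.post("/predict", response_model=PredictResponse)
def predict(req: PredictRequest) -> PredictResponse:
    # scikit-learn expects a 2D array: one row per example
    y = model.predict([req.features])
    return PredictResponse(prediction=float(y[0]))

# Run locally (assuming uvicorn is installed and this file is serve.py):
#   uvicorn serve:app --host 0.0.0.0 --port 8000

In a production setting the same endpoint would typically be containerized and fronted by the CI/CD and monitoring stack listed above.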
Posted 1 week ago
16.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Cloud4C seeks a Presales professional with proven high-level expertise in driving Cloud Managed Services business and revenue growth. Qualified candidates need to have a successful track record of creating solution architectures for managed services and cloud offerings for enterprise accounts and mid-size companies. Candidates must be able to build relationships with IT decision makers and the CxO levels of organizations.
Job Responsibilities
Gather and capture business requirements from the customer by coordinating with the customer and the sales team
Understand the customer's AS-IS and TO-BE infrastructure/application/security requirements on cloud
Understand RFPs/RFIs to capture business requirements and prepare the technical response
Create optimized solutions (sizing, architecture options, BoM, etc.) on public cloud platforms (Azure, AWS, GCP) based on customer requirements
Deliver customer presentations/whiteboarding of Cloud4C capabilities and the prepared solution, with the ability to explain the benefits of the TO-BE model
Create Proofs of Concept (PoC)
Create cloud migration plans
Participate in public events and write technical whitepapers/articles/blogs
Deliver internal trainings
Help delivery teams in knowledge transition
Job Prerequisites
Prior experience on Azure, AWS, or GCP public cloud is mandatory (at least one of them). We are looking for candidates with 16+ years of IT/datacenter experience, including a minimum of 6+ years on cloud infrastructure/applications as an SME/Architect on one of the public clouds
Preferably certified on one of the cloud technologies (Expert/Associate level)
Experience/understanding of architecting complex enterprise-grade solutions in on-premise and cloud environments
Experience creating solutions either on cloud IaaS (VM, Storage, Networking, SAP on Cloud, etc.) or cloud apps (PaaS, serverless, containers, CI/CD)
Firm grasp of cloud security, leveraging Windows/Linux operating systems, Active Directory, and AD integration
Well versed in designing and building solutions that include high availability and disaster recovery architectures
Familiarity with cloud automation tools (Terraform, Ansible, ARM Templates)
Client-focused with the ability to influence others to achieve results
Design state-of-the-art technical solutions on cloud that address the customer's requirements for scalability, reliability, security, and performance, and leverage existing investments
Collaborate with other project teams on technical solutions and help improve service
Ability to learn and work with new and emerging technologies (AI/ML/IoT, etc.)
Open to local/global travel (30-40%) to client/partner/event offices/locations as per business requirements
Posted 1 week ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Our Mission
At Palo Alto Networks® everything starts and ends with our mission: being the cybersecurity partner of choice, protecting our digital way of life. Our vision is a world where each day is safer and more secure than the one before. We are a company built on the foundation of challenging and disrupting the way things are done, and we're looking for innovators who are as committed to shaping the future of cybersecurity as we are.
Who We Are
We take our mission of protecting the digital way of life seriously. We are relentless in protecting our customers, and we believe that the unique ideas of every member of our team contribute to our collective success. Our values were crowdsourced by employees and are brought to life through each of us every day - from disruptive innovation and collaboration, to execution. From showing up for each other with integrity to creating an environment where we all feel included.
As a member of our team, you will be shaping the future of cybersecurity. We work fast, value ongoing learning, and we respect each employee as a unique individual. Knowing we all have different needs, our development and personal wellbeing programs are designed to give you choice in how you are supported. This includes our FLEXBenefits wellbeing spending account with over 1,000 eligible items selected by employees, our mental and financial health resources, and our personalized learning opportunities - just to name a few!
At Palo Alto Networks, we believe in the power of collaboration and value in-person interactions. This is why our employees generally work full time from our office, with flexibility offered where needed. This setup fosters casual conversations, problem-solving, and trusted relationships. Our goal is to create an environment where we all win with precision.
Job Description
Your Career
Our Data & Analytics group is responsible for working with various business owners/stakeholders from Sales, Marketing, People, GCS, Infosec, Operations, and Finance to solve complex business problems which have a direct impact on the metrics defined to showcase the progress of Palo Alto Networks. We leverage the latest technologies from the Cloud & Big Data ecosystem to improve business outcomes and create value through prototyping, Proof-of-Concept projects, and application development.
We are looking for a Data Platform Engineer with extensive experience in data engineering, cloud infrastructure, and a strong background in DevOps, SRE, or system engineering. The ideal candidate will be responsible for designing, implementing, and maintaining the scalable, reliable platforms and data transformations that support our business objectives. This role requires a deep understanding of both data engineering principles and platform automation, as well as the ability to collaborate with cross-functional teams to deliver high-quality data solutions.
Your Impact
Design, develop, and maintain data pipelines to extract, transform, and load (ETL) data from various sources into our data warehouse or data lake environment
Automate, manage, and scale the underlying infrastructure for our data platforms (e.g., Airflow, Spark clusters), applying SRE and DevOps best practices for performance, reliability, and observability
Collaborate with stakeholders to gather requirements and translate business needs into robust technical and platform solutions
Optimize and tune existing data pipelines and infrastructure for performance, cost, and scalability
Implement and enforce data quality and governance processes to ensure data accuracy, consistency, and compliance with regulatory standards
Work closely with the BI team to design and develop dashboards, reports, and analytical tools that provide actionable insights to stakeholders
Mentor junior members of the team and provide guidance on best practices for data engineering, platform development, and DevOps
(Nice-to-have) Aptitude for proactively identifying and implementing GenAI-driven solutions to achieve measurable improvements in the reliability and performance of data pipelines, or to optimize key processes like data quality validation and root cause analysis for data issues
Qualifications
Your Experience
Bachelor's degree in Computer Science, Engineering, or a related field
3+ years of experience in data engineering, platform engineering, or a similar role, with a strong focus on building and maintaining data pipelines and the underlying infrastructure
Must have proven experience in a DevOps, SRE, or System Engineering role, with hands-on expertise in infrastructure as code (e.g., Terraform, Ansible), CI/CD pipelines, and monitoring/observability tools
Expertise in SQL programming and database management systems (e.g., BigQuery)
Hands-on experience with ETL tools and technologies (e.g., Apache Spark, Apache Airflow; a minimal illustrative sketch follows this posting)
Experience with cloud platforms such as Google Cloud Platform (GCP) and relevant services (e.g., GCP Dataflow, GCP Dataproc, BigQuery, Cloud Composer, GKE)
Experience with Big Data tools like Spark, Kafka, etc.
Experience with object-oriented/object function scripting languages: Python, Scala, etc.
(Nice-to-have) Demonstrated readiness to leverage GenAI tools to enhance efficiency within the typical stages of the data engineering lifecycle, for example by generating complex SQL queries, creating initial Python/Spark script structures, or auto-generating pipeline documentation
(Plus) Experience with BI tools and visualization platforms (e.g., Tableau)
(Plus) Experience with SAP HANA, SAP BW, SAP ECC, or other SAP modules
Strong analytical and problem-solving skills, with the ability to analyze complex data sets and derive actionable insights
Excellent communication and interpersonal skills, with the ability to collaborate effectively with cross-functional teams
Additional Information
The Team
Working at a high-tech cybersecurity company within Information Technology is a once-in-a-lifetime opportunity. You'll join the brightest minds in technology, creating, building, and supporting tools and enabling our global teams on the front line of defense against cyberattacks. We're connected by one mission but driven by the impact of that mission and what it means to protect our way of life in the digital age. Join a dynamic and fast-paced team of people who feel excited by the prospect of a challenge and feel a thrill at resolving technical gaps that inhibit productivity.
Our Commitment
We're problem solvers that take risks and challenge cybersecurity's status quo. It's simple: we can't accomplish our mission without diverse teams innovating, together. We are committed to providing reasonable accommodations for all qualified individuals with a disability. If you require assistance or accommodation due to a disability or special need, please contact us at accommodations@paloaltonetworks.com. Palo Alto Networks is an equal opportunity employer.
We celebrate diversity in our workplace, and all qualified applicants will receive consideration for employment without regard to age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or other legally protected characteristics. All your information will be kept confidential according to EEO guidelines.
Is this role eligible for Immigration Sponsorship? No. Please note that we will not sponsor applicants for work visas for this position.
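As a rough illustration of the Airflow pipeline work described in this posting, here is a minimal Apache Airflow DAG sketch with a single Python task. The DAG id, schedule, and the extract/load logic are placeholder assumptions, not Palo Alto Networks' actual pipelines; the schedule argument shown is the Airflow 2.4+ spelling (earlier versions use schedule_interval).

# Hedged sketch: a minimal Apache Airflow DAG with a single Python task.
# DAG id, schedule, and the extract/load logic are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_and_load(**context):
    # Placeholder: a real pipeline would pull from a source system and load
    # into a warehouse table (e.g., BigQuery) instead of printing.
    rows = [{"id": 1, "value": "example"}]
    print(f"Loaded {len(rows)} rows for run {context['ds']}")


with DAG(
    dag_id="example_daily_load",       # placeholder DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    tags=["sketch"],
) as dag:
    PythonOperator(
        task_id="extract_and_load",
        python_callable=extract_and_load,
    )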
Posted 1 week ago