
2294 VPC Jobs - Page 9

Set up a job alert
JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

About Us

Zycus is a pioneer in Cognitive Procurement software and has been a trusted partner of choice for large global enterprises for two decades. Zycus has been consistently recognized by Gartner, Forrester, and other analysts for its Source-to-Pay integrated suite. Zycus powers its S2P software with the revolutionary Merlin AI Suite. Merlin AI takes over tactical tasks and empowers procurement and AP officers to focus on strategic projects; it offers data-driven, actionable insights for quicker and smarter decisions, and its conversational AI offers a B2C-type user experience to end users. Zycus helps enterprises drive real savings, reduce risks, and boost compliance, and its seamless, intuitive, and easy-to-use interface ensures high adoption and value across the organization. Start your #CognitiveProcurement journey with us, as you are #MeantforMore.

We Are An Equal Opportunity Employer: Zycus is committed to providing equal opportunities in employment and creating an inclusive work environment. We do not discriminate against applicants on the basis of race, color, religion, gender, sexual orientation, national origin, age, disability, or any other legally protected characteristic. All hiring decisions are based solely on qualifications, skills, and experience relevant to the job requirements.

Job Description

Zycus is seeking a DevOps Manager who combines strong technical expertise with leadership abilities to scale our DevOps practices and infrastructure. You will lead a team of engineers focused on automation, system scalability, security, and CI/CD delivery, while actively exploring AI-based innovations (AIOps, LLMs) to drive predictive monitoring, auto-remediation, and intelligent alerting.

Key Responsibilities:

🔧 DevOps & Cloud Infrastructure: Design, implement, and manage secure, scalable infrastructure across AWS/Azure/GCP. Drive cost optimization, performance tuning, and disaster recovery strategies. Lead adoption of best practices across high-availability and fault-tolerant systems.

⚙️ Containerization & Orchestration: Manage containerized environments using Docker, Helm, and Kubernetes (EKS/Rancher/OCP). Ensure secure, reliable orchestration and performance monitoring at scale.

📦 Infrastructure as Code (IaC): Oversee the implementation and maintenance of IaC using Terraform, Ansible, or equivalent tools. Ensure all configurations are version-controlled and environment-consistent.

🚀 CI/CD Automation: Architect and continuously improve CI/CD pipelines using Jenkins, ArgoCD, Tekton, etc. Enable fast, secure, and high-quality code delivery in coordination with development and QA.

🛠️ Scripting & Automation: Guide scripting efforts in Python, Shell, Ansible, or similar to automate deployment, scaling, and incident response. Identify opportunities to eliminate manual interventions.

📊 Observability & Monitoring: Define and implement robust logging, monitoring, and alerting systems using Prometheus, Grafana, ELK, CloudWatch, etc. Drive an AI-driven approach to predictive analytics and anomaly detection.

🤖 AI-Driven DevOps (AIOps): Explore and integrate AI/ML solutions such as LLMs and GPTs for intelligent insights, ChatOps, and self-healing infrastructure. Drive POCs and deployment of tools like Moogsoft, Datadog AI, Dynatrace, etc.

🧠 Team Leadership & Collaboration: Lead a team of DevOps engineers; set goals, mentor, review performance, and drive continuous improvement. Collaborate cross-functionally with product, engineering, QA, and security teams to align on DevOps objectives.

Job Requirements

6-10 years of hands-on DevOps/SRE experience, with at least 2 years in a leadership or managerial role. Strong cloud experience with AWS and/or Azure, including services such as EC2, S3, RDS, VPC, IAM, and Lambda. Expertise in Docker, Kubernetes, Helm, and related tools in production environments. Proficiency with Terraform, Ansible, and CI/CD tools (Jenkins, ArgoCD, Tekton). Excellent scripting ability in Python, Shell, or similar. Strong troubleshooting, analytical thinking, and incident management. Experience managing monitoring/logging stacks (Prometheus, Grafana, ELK, CloudWatch). Exposure to LLMs, AIOps, or AI-based DevOps practices is highly desirable. Proven experience in leading projects, mentoring engineers, and stakeholder communication.

Preferred Qualifications

Relevant certifications: AWS Certified Solutions Architect, Certified Kubernetes Administrator (CKA), Terraform Associate. Knowledge of HAProxy, Nginx, and CDNs. Understanding of DevSecOps and infrastructure security. Familiarity with integrating LLMs, GPTs, and AI for infrastructure or developer-tooling enhancements.

Five Reasons Why You Should Join Zycus

Cloud Product Company: We are a Cloud SaaS company, and our products are built using the latest technologies, such as ML and AI. Our UI is in AngularJS, and we are developing our mobile apps using React. A Market Leader: Zycus is recognized by Gartner (the world's leading market research analyst) as a Leader in Procurement Software Suites. Move Between Roles: We believe that change leads to growth, so we allow our employees to shift careers and move to different roles and functions within the organization. Gain Global Exposure: You get to work with and serve our global customers. Create an Impact: Zycus gives you the environment to make an impact on the product and turn your ideas into reality. Even our junior engineers get the opportunity to work on different product features.
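The scripting-and-automation responsibility above (automating scaling and incident response in Python or Shell) can be illustrated with a minimal sketch. Everything here is hypothetical: the `Sample` record, the thresholds, and the "scale-up" action string stand in for whatever a real monitoring agent and orchestrator would provide.

```python
from dataclasses import dataclass


@dataclass
class Sample:
    """One CPU-utilization reading (percent) reported by a monitoring agent."""
    host: str
    cpu_pct: float


def remediation_actions(samples, threshold=85.0, consecutive=3):
    """Emit one scale-up action per host whose CPU stays above `threshold`
    for `consecutive` readings in a row; a single reading below the
    threshold resets that host's streak."""
    streaks, actions = {}, []
    for s in samples:
        if s.cpu_pct > threshold:
            streaks[s.host] = streaks.get(s.host, 0) + 1
            if streaks[s.host] == consecutive:
                actions.append(f"scale-up {s.host}")
        else:
            streaks[s.host] = 0
    return actions
```

Requiring several consecutive breaches before acting is a simple debounce: it keeps a one-off spike from triggering remediation, which is the same idea behind "N consecutive datapoints" alarm settings in most monitoring tools.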

Posted 1 week ago

Apply

2.0 - 5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

About Us

Zycus is a pioneer in Cognitive Procurement software and has been a trusted partner of choice for large global enterprises for two decades. Zycus has been consistently recognized by Gartner, Forrester, and other analysts for its Source-to-Pay integrated suite. Zycus powers its S2P software with the revolutionary Merlin AI Suite. Merlin AI takes over tactical tasks and empowers procurement and AP officers to focus on strategic projects; it offers data-driven, actionable insights for quicker and smarter decisions, and its conversational AI offers a B2C-type user experience to end users. Zycus helps enterprises drive real savings, reduce risks, and boost compliance, and its seamless, intuitive, and easy-to-use interface ensures high adoption and value across the organization. Start your #CognitiveProcurement journey with us, as you are #MeantforMore.

We Are An Equal Opportunity Employer: Zycus is committed to providing equal opportunities in employment and creating an inclusive work environment. We do not discriminate against applicants on the basis of race, color, religion, gender, sexual orientation, national origin, age, disability, or any other legally protected characteristic. All hiring decisions are based solely on qualifications, skills, and experience relevant to the job requirements.

Job Description

Zycus is seeking a highly skilled and experienced Senior DevOps Engineer to join our dynamic team. The ideal candidate will have a deep understanding of DevOps principles, extensive hands-on experience with AWS, Kubernetes, and CI/CD pipelines, and a strong background in cloud infrastructure management. As a Senior DevOps Engineer, you will play a pivotal role in optimizing our development and deployment processes, ensuring seamless integration and delivery of our software products.

Key Responsibilities:

Cloud Services: Design, deploy, and manage scalable, reliable, and secure cloud infrastructure using AWS/Azure/GCP services. Optimize and automate cloud environments to support development, testing, and production workloads.

Container Technologies: Implement, manage, and scale containerized applications using Docker. Ensure best practices in container security, orchestration, and management.

Kubernetes: Deploy, manage, and troubleshoot Kubernetes clusters using EKS/Rancher/OCP. Optimize Kubernetes environments for performance, scalability, and reliability. Implement CI/CD pipelines using Kubernetes-native tools.

Infrastructure as Code (IaC): Develop, manage, and maintain IaC using tools such as Terraform and Ansible. Ensure infrastructure configurations are version-controlled, repeatable, and auditable.

Scripting and Automation: Write and maintain scripts (Python, Shell) to automate infrastructure management, deployment processes, and monitoring. Automate repetitive tasks to improve efficiency and reduce manual intervention.

Problem Solving and Analytical Skills: Diagnose, troubleshoot, and resolve complex issues related to infrastructure, applications, and deployments. Provide technical expertise and leadership in resolving critical production issues.

Logging and Monitoring: Implement, manage, and optimize logging and monitoring solutions using tools like the ELK stack, Prometheus, Grafana, and CloudWatch. Ensure high availability, performance, and security of monitoring systems.

Web Layer Technologies: Configure and manage web layer components such as HAProxy, Nginx, and CDN solutions. Optimize the web layer for performance, security, and scalability.

Collaboration and Communication: Work closely with development, QA, and operations teams to ensure seamless integration and delivery of applications. Communicate effectively with stakeholders to understand requirements and provide progress updates.

Job Requirements

Education: Bachelor's degree in Computer Science, Information Technology, or a related field.

Experience: 2-5 years of experience in a DevOps role.

AWS Proficiency: Strong hands-on experience with a wide range of AWS services, including EC2, S3, RDS, VPC, IAM, CloudFormation, and Lambda.

Container Expertise: Proven experience with Docker and Kubernetes on production systems. Knowledge of Helm, Istio, or other Kubernetes-related tools is a plus.

IaC Tools: Proficiency with Terraform and Ansible.

Scripting Skills: Strong scripting skills in Python and Shell. Experience with other programming languages is a plus.

Monitoring and Logging: Experience with the ELK stack, Prometheus, Grafana, CloudWatch, Datadog, or similar tools.

Web Layer Knowledge: Experience with web layer technologies such as HAProxy, Nginx, and CDN solutions.

Leadership Skills: Demonstrated ability to lead projects independently from inception to delivery. Experience mentoring junior team members and providing technical leadership.

Certifications: Relevant certifications (e.g., AWS Certified Solutions Architect, Certified Kubernetes Administrator, Terraform Associate) are a plus and will be considered an additional bonus.

Preferred Skills:

CI/CD Pipelines: Experience setting up and managing CI/CD pipelines using Jenkins, ArgoCD, Tekton, or similar tools.

Security Best Practices: Understanding of security best practices for cloud infrastructure, containers, and IaC.

Soft Skills: Strong communication skills, the ability to work in a team environment, and a proactive attitude toward problem-solving and process improvement.

Knowledge of LLMs and GPTs: Familiarity with large language models (LLMs) and Generative Pre-trained Transformers (GPTs), including their applications, limitations, and integration with existing infrastructure for automated solutions and AI-driven insights.

Five Reasons Why You Should Join Zycus

Cloud Product Company: We are a Cloud SaaS company, and our products are built using the latest technologies, such as ML and AI. Our UI is in AngularJS, and we are developing our mobile apps using React. A Market Leader: Zycus is recognized by Gartner (the world's leading market research analyst) as a Leader in Procurement Software Suites. Move Between Roles: We believe that change leads to growth, so we allow our employees to shift careers and move to different roles and functions within the organization. Gain Global Exposure: You get to work with and serve our global customers. Create an Impact: Zycus gives you the environment to make an impact on the product and turn your ideas into reality. Even our junior engineers get the opportunity to work on different product features.
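As a small illustration of the logging-and-monitoring work described above, the sketch below computes an error rate from JSON log lines of the kind an ELK pipeline ships. The `level` field name and `ERROR` value are assumptions about the log format, not a fixed ELK schema.

```python
import json


def error_rate(log_lines, level_field="level"):
    """Return the fraction of parseable JSON log records whose level
    is "ERROR". Malformed lines are skipped, much as a log shipper
    would route them to a dead-letter queue rather than crash."""
    total = errors = 0
    for line in log_lines:
        try:
            record = json.loads(line)
        except json.JSONDecodeError:
            continue  # not valid JSON; skip rather than fail the whole batch
        total += 1
        if record.get(level_field) == "ERROR":
            errors += 1
    return errors / total if total else 0.0
```

In a real stack this ratio would be computed by the monitoring backend (e.g., an Elasticsearch aggregation or a Prometheus recording rule) and fed into an alert threshold; the point here is only the shape of the computation.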

Posted 1 week ago

Apply

13.0 - 16.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Amgen

Amgen harnesses the best of biology and technology to fight the world's toughest diseases and make people's lives easier, fuller, and longer. We discover, develop, manufacture, and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what's known today.

About The Role

Role Description: The Principal Platform Architect - MuleSoft is responsible for leading the strategic design, implementation, and governance of scalable integration solutions using the MuleSoft Anypoint Platform. This role involves defining the integration architecture vision, creating implementation roadmaps, and ensuring alignment with enterprise integration and API standards. You will provide technical leadership and mentorship to the MuleSoft Platform team, ensuring high-quality design and delivery of integration capabilities. The Architect will collaborate with senior technology leadership and cross-functional stakeholders to build a robust, reusable, and innovative API ecosystem. The role also includes oversight of platform governance and driving continuous improvement across integration practices and team performance.

Roles & Responsibilities:

Lead a team of MuleSoft Platform Architects, providing technical leadership, performance management, and career development. Develop and maintain the enterprise MuleSoft architecture vision and strategy, ensuring alignment with business objectives. Create and maintain architectural roadmaps that guide the evolution of applications and capabilities. Oversee delivery of reusable APIs, templates, connectors, and accelerators to speed project delivery and reduce technical debt. Establish and enforce architectural standards, policies, and governance frameworks. Evaluate emerging technologies and assess their potential impact on the solution architecture. Identify and mitigate architectural risks, ensuring that the MuleSoft ecosystem is scalable, secure, and resilient. Define and enforce best practices around DevOps, CI/CD, automated testing, API versioning, security, and governance. Drive platform adoption and maturity through onboarding, documentation, training, and evangelism. Maintain comprehensive documentation of the architecture, including principles, standards, and models. Drive continuous improvement in the architecture by identifying opportunities for innovation and efficiency. Work with stakeholders to gather and analyze requirements, ensuring that solutions meet both business and technical needs. Evaluate and recommend technologies and tools that best fit the solution requirements. Ensure seamless integration between systems and platforms, both within the organization and with external partners. Design systems that can scale to meet growing business needs and performance demands.

Basic Qualifications and Experience:

Master's or Bachelor's degree with 13-16 years of experience in Computer Science, IT, or a related field.

Functional Skills:

Must-Have Skills: Strong architectural design and modeling skills. Extensive knowledge of enterprise architecture frameworks and methodologies. Experience with system integration and IT infrastructure. Experience directing solution design and business process redesign, and aligning business requirements to technical solutions in a regulated environment. Experience working in agile methodology, including Product Teams and Product Development models. Strong solution design and problem-solving skills.

Integration Architecture: API Design: Ability to design and develop well-structured and documented APIs. Integration Patterns: Understanding of common integration patterns and their application. Integration Technologies: Familiarity with integration platforms, protocols, and standards (e.g., ESB, REST, SOAP), specifically MuleSoft. Data Mapping: Experience mapping data between different systems and applications. Security: Understanding of API security best practices and standards.

Good-to-Have Skills: Hands-on Linux administration and scripting skills (Bash, Shell, or Python). Familiarity with Amazon Web Services (AWS) networking and infrastructure, especially VPC design and security. Experience in the pharmaceutical or life sciences industry with regulated data environments. Agile delivery experience and proficiency with tools like JIRA, Confluence, and ServiceNow.

Professional Certifications: MuleSoft Integration Architect I or MuleSoft Platform Architect I certification (required). Cloud certifications (AWS Certified Solutions Architect, DevOps Engineer, etc.) (required). Linux administration or DevOps-related certifications (preferred).

Soft Skills: Excellent critical-thinking and problem-solving skills. Strong communication and collaboration skills. Demonstrated ability to function in a team setting. Demonstrated presentation skills.

EQUAL OPPORTUNITY STATEMENT

Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
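The data-mapping skill listed above can be sketched in plain Python. In practice, MuleSoft transformations are written in DataWeave, so this is only an illustration of the idea, and all field names are hypothetical.

```python
# Hypothetical source-to-target field mapping; a real integration would
# define this in DataWeave (or load it from configuration).
FIELD_MAP = {"cust_id": "customerId", "fname": "firstName", "lname": "lastName"}


def map_record(source, field_map=FIELD_MAP):
    """Translate one source-system record into the target API's field
    names, silently dropping fields the target schema does not define."""
    return {target: source[src]
            for src, target in field_map.items()
            if src in source}
```

Keeping the mapping in a data structure rather than hard-coded assignments is the design point: the same transformation code can then serve many system pairs, which is what makes mappings reusable across an API ecosystem.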

Posted 1 week ago

Apply

1.0 - 3.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

Remote

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Career Family: TechOps - CloudOps
Role Type: Cloud Operations Engineer - AWS and Azure

The Opportunity

We are looking for a Staff CloudOps Engineer with 1-3 years of hands-on experience in AWS and Azure environments. The primary focus of this role is supporting DevOps practices, including CI/CD pipelines, automation scripting, and container orchestration. The role also involves contributing to basic cloud infrastructure management and support. You will assist in troubleshooting, support deployment pipelines, and participate in operations across cloud-native environments.

Your Key Responsibilities

Assist in resolving infrastructure and DevOps-related incidents and service requests. Support CI/CD pipeline operations and automation workflows. Implement infrastructure as code using Terraform. Monitor platform health using native tools like AWS CloudWatch and Azure Monitor. Collaborate with CloudOps and DevOps teams to address deployment or configuration issues. Maintain and update runbooks, SOPs, and automation scripts as needed.

Skills And Attributes For Success

Working knowledge of AWS and Azure core services. Experience with Terraform; exposure to CloudFormation or ARM templates is a plus. Familiarity with Docker, Kubernetes (EKS/AKS), and Helm. Basic scripting in Bash; knowledge of Python is a plus. Understanding of ITSM tools such as ServiceNow. Knowledge of IAM, security groups, VPC/VNet, and basic networking. Strong troubleshooting and documentation skills.

To Qualify For The Role, You Must Have

1-3 years of experience in CloudOps, DevOps, or cloud infrastructure support. Hands-on experience supporting cloud platforms like AWS and/or Azure. Familiarity with infrastructure automation, CI/CD pipelines, and container platforms. Relevant cloud certification (AWS/Azure) preferred. Willingness to work in a 24x7 rotational shift-based support environment. No location constraints.

Technologies and Tools

Must haves: Cloud Platforms: AWS, Azure. Infrastructure as Code: Terraform (hands-on). CI/CD: Basic experience with GitHub Actions, Azure DevOps, or AWS CodePipeline. Containerization: Exposure to Kubernetes (EKS/AKS), Docker. Monitoring: AWS CloudWatch, Azure Monitor. Scripting: Bash. Incident Management: Familiarity with ServiceNow or a similar ITSM tool.

Good to have: Templates: CloudFormation, ARM templates. Scripting: Python. Security: IAM policies, RBAC. Observability: Datadog, Splunk, OpenTelemetry. Networking: VPC/VNet basics, load balancers. Certification: AWS/Azure (Associate level preferred).

What We Look For

Enthusiastic learners with a passion for cloud technologies and DevOps practices. Problem solvers with a proactive approach to troubleshooting and optimization. Team players who can collaborate effectively in a remote or hybrid work environment. Detail-oriented professionals with strong documentation skills.

What We Offer

EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.

Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next. Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way. Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs. Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs.

EY | Building a better working world

EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
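As an illustration of the platform-health monitoring mentioned above (AWS CloudWatch, Azure Monitor), here is a simplified sketch of the "M out of N datapoints" evaluation that CloudWatch-style alarms perform. It ignores missing-data handling and treats breaches as strictly greater-than, so it is a teaching aid rather than CloudWatch's exact algorithm.

```python
def alarm_state(datapoints, threshold, datapoints_to_alarm, evaluation_periods):
    """Simplified 'M out of N' alarm evaluation: look at the most recent
    `evaluation_periods` values and report ALARM when at least
    `datapoints_to_alarm` of them exceed `threshold`."""
    window = datapoints[-evaluation_periods:]  # only the newest N readings count
    breaching = sum(1 for value in window if value > threshold)
    return "ALARM" if breaching >= datapoints_to_alarm else "OK"
```

Setting M < N (e.g., 2 of 3) makes the alarm tolerant of a single noisy reading, which is the usual way on-call teams reduce false pages without slowing detection much.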

Posted 1 week ago

Apply

1.0 - 3.0 years

0 Lacs

Kanayannur, Kerala, India

Remote

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Career Family: TechOps - CloudOps
Role Type: Cloud Operations Engineer - AWS and Azure

The Opportunity

We are looking for a Staff CloudOps Engineer with 1-3 years of hands-on experience in AWS and Azure environments. The primary focus of this role is supporting DevOps practices, including CI/CD pipelines, automation scripting, and container orchestration. The role also involves contributing to basic cloud infrastructure management and support. You will assist in troubleshooting, support deployment pipelines, and participate in operations across cloud-native environments.

Your Key Responsibilities

Assist in resolving infrastructure and DevOps-related incidents and service requests. Support CI/CD pipeline operations and automation workflows. Implement infrastructure as code using Terraform. Monitor platform health using native tools like AWS CloudWatch and Azure Monitor. Collaborate with CloudOps and DevOps teams to address deployment or configuration issues. Maintain and update runbooks, SOPs, and automation scripts as needed.

Skills And Attributes For Success

Working knowledge of AWS and Azure core services. Experience with Terraform; exposure to CloudFormation or ARM templates is a plus. Familiarity with Docker, Kubernetes (EKS/AKS), and Helm. Basic scripting in Bash; knowledge of Python is a plus. Understanding of ITSM tools such as ServiceNow. Knowledge of IAM, security groups, VPC/VNet, and basic networking. Strong troubleshooting and documentation skills.

To Qualify For The Role, You Must Have

1-3 years of experience in CloudOps, DevOps, or cloud infrastructure support. Hands-on experience supporting cloud platforms like AWS and/or Azure. Familiarity with infrastructure automation, CI/CD pipelines, and container platforms. Relevant cloud certification (AWS/Azure) preferred. Willingness to work in a 24x7 rotational shift-based support environment. No location constraints.

Technologies and Tools

Must haves: Cloud Platforms: AWS, Azure. Infrastructure as Code: Terraform (hands-on). CI/CD: Basic experience with GitHub Actions, Azure DevOps, or AWS CodePipeline. Containerization: Exposure to Kubernetes (EKS/AKS), Docker. Monitoring: AWS CloudWatch, Azure Monitor. Scripting: Bash. Incident Management: Familiarity with ServiceNow or a similar ITSM tool.

Good to have: Templates: CloudFormation, ARM templates. Scripting: Python. Security: IAM policies, RBAC. Observability: Datadog, Splunk, OpenTelemetry. Networking: VPC/VNet basics, load balancers. Certification: AWS/Azure (Associate level preferred).

What We Look For

Enthusiastic learners with a passion for cloud technologies and DevOps practices. Problem solvers with a proactive approach to troubleshooting and optimization. Team players who can collaborate effectively in a remote or hybrid work environment. Detail-oriented professionals with strong documentation skills.

What We Offer

EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.

Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next. Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way. Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs. Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs.

EY | Building a better working world

EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
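The incident-management work above (ServiceNow or a similar ITSM tool) often starts with turning a monitoring alert into a ticket payload. The sketch below uses invented field names and a made-up severity-to-priority table; it is not the real ServiceNow schema.

```python
# Illustrative mapping only; real ITSM priorities are usually derived
# from an impact-and-urgency matrix configured in the tool.
SEVERITY_TO_PRIORITY = {"critical": 1, "high": 2, "moderate": 3, "low": 4}


def build_incident(alert):
    """Build a ServiceNow-style incident payload from a monitoring alert
    dict with (hypothetical) 'resource', 'summary', and 'severity' keys.
    Unknown or missing severities fall back to the lowest priority."""
    return {
        "short_description": f"[{alert['resource']}] {alert['summary']}",
        "priority": SEVERITY_TO_PRIORITY.get(alert.get("severity", "low"), 4),
        "category": "cloud-operations",
    }
```

Centralizing this translation in one function keeps the alert-to-ticket rules auditable and easy to update when the on-call process changes, instead of scattering priority logic across individual alert rules.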

Posted 1 week ago

Apply

1.0 - 3.0 years

0 Lacs

Trivandrum, Kerala, India

Remote

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Career Family - TechOps -CloudOps Role Type - Cloud Operation Engineer - AWS and Azure The opportunity We are looking for a Staff CloudOps Engineer with 1-3 years of hands-on experience in AWS and Azure environments. The primary focus of this role is supporting DevOps practices, including CI/CD pipelines, automation scripting, and container orchestration. The role also involves contributing to basic cloud infrastructure management and support. You will assist in troubleshooting, support deployment pipelines, and participate in operations across cloud-native environments. Your Key Responsibilities Assist in resolving infrastructure and DevOps-related incidents and service requests. Support CI/CD pipeline operations and automation workflows. Implement infrastructure as code using Terraform. Monitor platform health using native tools like AWS CloudWatch and Azure Monitor. Collaborate with CloudOps and DevOps teams to address deployment or configuration issues. Maintain and update runbooks, SOPs, and automation scripts as needed. Skills And Attributes For Success Working knowledge of AWS and Azure core services. Experience with Terraform; exposure to CloudFormation or ARM templates is a plus. Familiarity with Docker, Kubernetes (EKS/AKS), and Helm. Basic scripting in Bash; knowledge of Python is a plus. Understanding of ITSM tools such as ServiceNow. Knowledge of IAM, security groups, VPC/VNet, and basic networking. Strong troubleshooting and documentation skills. To qualify for the role, you must have 1-3 years of experience in CloudOps, DevOps, or cloud infrastructure support. 
Hands-on experience in supporting cloud platforms like AWS and/or Azure. Familiarity with infrastructure automation, CI/CD pipelines, and container platforms. Relevant cloud certification (AWS/Azure) preferred. Willingness to work in a 24x7 rotational shift-based support environment. No location constraints Technologies and Tools Must haves Cloud Platforms: AWS, Azure Infrastructure as Code: Terraform (hands-on) CI/CD: Basic experience with GitHub Actions, Azure DevOps, or AWS CodePipeline Containerization: Exposure to Kubernetes (EKS/AKS), Docker Monitoring: AWS CloudWatch, Azure Monitor Scripting: Bash Incident Management: Familiarity with ServiceNow or similar ITSM tool Good to have Templates: CloudFormation, ARM templates Scripting: Python Security: IAM Policies, RBAC Observability: Datadog, Splunk, OpenTelemetry Networking: VPC/VNet basics, load balancers Certification: AWS/Azure (Associate-level preferred) What We Look For Enthusiastic learners with a passion for cloud technologies and DevOps practices. Problem solvers with a proactive approach to troubleshooting and optimization. Team players who can collaborate effectively in a remote or hybrid work environment. Detail-oriented professionals with strong documentation skills. What We Offer EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career. 
- Continuous learning: You'll develop the mindset and skills to navigate whatever comes next.
- Success as defined by you: We'll provide the tools and flexibility, so you can make a meaningful impact, your way.
- Transformative leadership: We'll give you the insights, coaching and confidence to be the leader the world needs.
- Diverse and inclusive culture: You'll be embraced for who you are and empowered to use your voice to help others find theirs.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
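The VPC/VNet and subnetting basics this role asks for can be made concrete with a short sketch. The snippet below uses Python's standard-library `ipaddress` module to carve a VPC CIDR block into equally sized subnets; `plan_subnets` is a hypothetical helper for illustration, not part of any cloud SDK.

```python
import ipaddress

def plan_subnets(vpc_cidr: str, new_prefix: int, count: int) -> list:
    """Split a VPC/VNet CIDR block into the first `count` subnets of
    the given prefix length, e.g. /24 address ranges out of a /16."""
    vpc = ipaddress.ip_network(vpc_cidr)
    subnets = list(vpc.subnets(new_prefix=new_prefix))
    if count > len(subnets):
        raise ValueError(f"{vpc_cidr} only yields {len(subnets)} /{new_prefix} subnets")
    return [str(s) for s in subnets[:count]]

# Three /24 subnets out of a 10.0.0.0/16 address space
print(plan_subnets("10.0.0.0/16", 24, 3))
```

The same arithmetic underlies subnet sizing whether the ranges end up in Terraform variables or an Azure VNet definition.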

Posted 1 week ago

Apply

7.0 - 10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Title: Python AWS Engineer
GCL: D1

Introduction to role
This is an outstanding opportunity for a senior engineer to advance modern software development practices within our team (DevOps, CI/CD, automated testing), building a bespoke integrated software framework (on-premise/cloud/COTS) that will accelerate the ability of AZ scientists to develop new drug candidates for unmet patient needs. To achieve this goal, we need a strong senior individual to work with teams of engineers, as well as engage and influence other global teams within Solution Delivery to ensure that our priorities are aligned with the needs of our science. The successful candidate will be a hands-on coder, passionate about software development, and willing to coach and enable wider teams to grow and expand their software delivery capabilities and skills.

Accountabilities
The role will encompass a variety of approaches with the aim of simplifying and streamlining scientific workflows, data, and applications, while advancing the use of AI and automation by scientists. Working alongside the platform lead, architect, BA, and informaticians, you will understand requirements, devise technical solutions, and estimate, deliver, and run operationally sustainable platform software. You will need to use your technical acumen to determine an optimal balance between COTS and home-grown solutions and own their lifecycles and roadmap. Our delivery teams are distributed across multiple locations, and as Senior Engineer you will need to coordinate the activities of internal and contract technical employees. You must be capable of working with others, driving ownership of solutions, and showing humility while striving to enable the development of platform technical team members on our journey. You will raise expectations within the whole team, solve complex technical problems, and work alongside complementary delivery platforms while aligning solutions with scientific and data strategies and the target architecture.
Essential Skills/Experience
- 7-10 years of experience working with Python.
- Proven experience with Python for data manipulation and analysis.
- Strong proficiency in SQL and experience with relational databases.
- In-depth knowledge and hands-on experience with various AWS services (S3, Glue, VPC, Lambda Functions, Batch, Step Functions, ECS).
- Familiarity with Infrastructure as Code (IaC) tools such as AWS CDK and CloudFormation.
- Experience with Snowflake or other data warehousing solutions.
- Knowledge of CI/CD processes and tools, specifically Jenkins and Docker.
- Experience with big data technologies such as Apache Spark or Hadoop is a plus.
- Strong analytical and problem-solving skills, with the ability to work independently and as part of a team.
- Excellent communication skills and ability to collaborate with cross-functional teams.
- Familiarity with data governance and compliance standards.
- Experience with process tools like JIRA and Confluence.
- Experience building unit tests, integration tests, system tests and acceptance tests.
- Good team player, with the attitude to work with the highest integrity.

When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible. We balance the expectation of being in the office while respecting individual flexibility. Join us in our unique and ambitious world. At AstraZeneca, our work has a direct impact on patients by transforming our ability to develop life-changing medicines. We empower the business to perform at its peak by combining cutting-edge science with leading digital technology platforms and data. Join us at a crucial stage of our journey in becoming a digital and data-led enterprise.
Here you can innovate, take ownership, explore new solutions, experiment with leading-edge technology, and tackle challenges in a modern technology environment. Ready to make an impact? Apply now!

Date Posted: 14-Jul-2025
Closing Date:

AstraZeneca embraces diversity and equality of opportunity. We are committed to building an inclusive and diverse team representing all backgrounds, with as wide a range of perspectives as possible, and harnessing industry-leading skills. We believe that the more inclusive we are, the better our work will be. We welcome and consider applications to join our team from all qualified candidates, regardless of their characteristics. We comply with all applicable laws and regulations on non-discrimination in employment (and recruitment), as well as work authorization and employment eligibility verification requirements.
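Since the role lists AWS services that throttle under load alongside a requirement to build unit tests, here is a minimal sketch of a client-side retry-with-backoff pattern in plain Python. It is illustrative only: `with_retries` and `flaky_put_object` are invented names, and the simulated "SlowDown" error stands in for a real AWS throttling response rather than using boto3.

```python
import time
from functools import wraps

def with_retries(max_attempts=3, base_delay=0.01):
    """Retry a flaky call with exponential backoff, a common
    client-side pattern around cloud APIs that can throttle."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(1, max_attempts + 1):
                try:
                    return fn(*args, **kwargs)
                except RuntimeError:
                    if attempt == max_attempts:
                        raise
                    # back off 1x, 2x, 4x... the base delay between attempts
                    time.sleep(base_delay * 2 ** (attempt - 1))
        return wrapper
    return decorator

calls = []

@with_retries(max_attempts=3)
def flaky_put_object():
    calls.append(1)
    if len(calls) < 3:
        raise RuntimeError("SlowDown")  # simulated throttling error
    return "ok"

result = flaky_put_object()
print(result, "after", len(calls), "attempts")
```

A function shaped like this is also straightforward to cover with the unit and integration tests the listing calls for, since the failure mode can be injected deterministically.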

Posted 1 week ago

Apply

2.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Description

Key Responsibilities:
- Investigates product problems, understands causal mechanisms, recommends appropriate action, owns problem resolution and documents results with guidance from more experienced team members.
- Main focus will include working in business processes of Product Preceding Technology (PPT), Value Package Introduction (VPI) or Current Product Support (CPS) and executing technical processes such as Engineering Standard Work (ESW), iDFMEA, Failure Incident Review Group (FIRG) while using tools such as 7-step problem solving, design review checklists and other specialized tools required to support the processes and enable high-quality decision making.
- Obtains input from stakeholders such as technical managers, project leaders, other product and manufacturing engineers and supplier partners to deliver information and recommendations that lead to quality product decisions.
- Applies academic knowledge and existing experience to take action and make decisions that progress projects forward without sacrificing project quality expectations. Examples of these decisions include day-to-day project details, analysis or test work instruction details, and coordination across discipline areas necessary to make quality progress.
- Owns problem resolution for moderately complex components, products, systems, subsystems or services, with technical complexity and ambiguity increasing as experience is gained in the role.
- Provides independent execution of established work processes and systems, while still developing technology or product knowledge; engages with the improvement of systems and processes.
- Involves minimal direct management of people, but could involve the coordination and direction of work amongst technicians and/or temporary student employees.
- Contributes effectively toward team goals, exhibits influence within a work group and continues to develop proficiency in the competency areas critical to success in the role.
Responsibilities

Competencies:
- Applies Principles of Statistical Methods - Analyzes technical data using descriptive statistics, probability distributions, graphical analysis, and statistical inference (population and sample, confidence intervals, and hypothesis testing); models relationships between response and independent variables using analysis of variance, regression, and design of experiments to make rigorous, data-based decisions.
- Cross-Functional Design Integration - Translates the value package requirements that include the voices of many stakeholders into virtual designs, and communicates the capability of the design through an approved cross-functional design review.
- Design and Application of Open/Closed Loop Controls - Specifies software features that interact with mechanical, hydraulic, chemical and electronic systems to deliver desired system states; specifies control system architectures which include appropriate measurements, correct actuation, and algorithms for Cummins' products; configures and/or understands open/closed loop feedback controls features and the system interactions between hardware and software in Cummins' products.
- Mechanical Design of Mechanical Systems - Acquires and applies an in-depth understanding of mechanical systems through working knowledge that guides a designer's ability to create innovative and sound design concepts to meet Cummins and customer expectations; designs for requirements of all lifecycle stages by considering the customer requirements in different operating environments to ensure a robust system.
- Mechanical Design Specification - Creates complete specifications in the form of solid models, configured engineering bill of materials and detailed drawings that cross-functionally communicate the information required to manufacture and inspect a product per its design intent; considers national, international, industry, and Cummins' standards that accurately and concisely define the part specification.
- Product Configuration and Change Management - Establishes a baseline of identified product artifacts to be placed under configuration management; releases, tracks, controls and communicates changes from concept to obsolescence, often through work requests; establishes and maintains the integrity of the product artifact baselines.
- Product Development Execution, Monitoring and Control - Plans, schedules, coordinates and executes the activities involved in developing a product to a respectively aligned hierarchy of requirements and technical profiles; monitors and communicates across functional boundaries to meet project resource and quality expectations; ensures product capability meets or exceeds expectations and takes mitigating actions when project risks are higher than expected; understands the full product life cycle process and stakeholders.
- Product Failure Mode Avoidance - Mitigates potential product failure modes by identifying interfaces, functions, functional requirements, interactions, control factors, noise factors, and prioritized potential failure modes and potential failure causes for the system of interest to effectively and efficiently improve the reliability of Cummins' products.
- Product Function Modeling, Simulation and Analysis - Impacts product design decisions through the utilization and/or interpretation of computational tools and methods that predict the capability of a product's function relative to its system, sub-system and/or component level requirements.
- Product Interface Management and Integration - Identifies and analyzes the interfaces and interactions across system boundaries by specifying the requirements and limits to ensure that the product meets requirements; controls the interactions across the system element boundaries by making sure that they remain within specified limits; integrates system elements by creating an integration plan, including identification of method and timing for each activity, to make it easier to find, isolate, diagnose, and correct issues.
- Product Problem Solving - Solves product problems using a process that protects the customer; determines the assignable cause; implements robust, data-based solutions; and identifies the systemic root causes and recommended actions to prevent problem reoccurrence.
- Product Verification and Validation Management - Develops product systems validation plans from a variety of inputs to identify failure modes, while managing product risk and relative priority; negotiates product requirements against capability to guide project scope; evaluates analytical, simulation and physical test results to verify product capability and validate requirements; assesses legacy versus proposed system solution capabilities and produces recommendations with technical documentation to support product decisions.
- System Requirements Engineering - Uses appropriate methods and tools to translate stakeholder needs into verifiable requirements to which designs are developed; establishes acceptance criteria for the system of interest through analysis, allocation and negotiation; tracks the status of requirements throughout the system lifecycle; assesses the impact of changes to system requirements on project scope, schedule, and resources; creates and maintains information linkages to related artifacts.
- Systems Thinking - Defines the system of interest by drawing the boundaries, identifying its context within its environment and its interfaces, and recognizing that it has a lifecycle, to aid in planning the problem statement, scope and deliverables; analyzes linkages and interactions between elements that comprise the system of interest by using appropriate methods, models and integration of outcomes to understand the system, predict its behavior and devise modifications to it in order to produce the desired effects.
- Technical Documentation - Documents information based on knowledge gained as part of technical function activities; communicates to stakeholders with the goal of enabling improved technical productivity and effective knowledge transfer to others who were not originally part of the initial learning.
- Collaborates - Building partnerships and working collaboratively with others to meet shared objectives.
- Communicates effectively - Developing and delivering multi-mode communications that convey a clear understanding of the unique needs of different audiences.
- Decision quality - Making good and timely decisions that keep the organization moving forward.
- Drives results - Consistently achieving results, even under tough circumstances.
- Self-development - Actively seeking new ways to grow and be challenged using both formal and informal development channels.
- Values differences - Recognizing the value that different perspectives and cultures bring to an organization.

Education, Licenses, Certifications
College, university, or equivalent bachelor's degree in Engineering or an appropriate STEM field is required. A post-graduate (master's) degree relevant to this discipline area may be required for select roles. This position may require licensing for compliance with export controls or sanctions regulations.

Experience
Entry level/Early career professional.
Preferred candidates would have relevant experience working in either a temporary student employment environment (intern, co-op, or other extracurricular team activities) or as an early career professional in a relevant technical discipline area. Knowledge of MS Office tools is also preferred.

Qualifications
- Diploma or bachelor's degree in Electrical or Electronics Engineering.
- Must have experience working with electrical rotating machines in electromagnetic design and development.
- Knowledge of IEC/IS standards is essential.
- Preferred: familiarity with high-voltage electrical products.
- Experience working with cross-functional teams is required.
- 1-2 years of working experience in engineering.
- Ability to independently manage design/VPC projects.
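The statistical-inference competency above names confidence intervals explicitly; a minimal sketch using only Python's standard library is shown below. The shaft-diameter measurements are invented for illustration, and the normal-approximation interval is a simplification of what a full analysis (e.g. a t-interval for small samples) would use.

```python
from statistics import NormalDist, mean, stdev
from math import sqrt

def confidence_interval(sample, confidence=0.95):
    """Normal-approximation confidence interval for a sample mean."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)  # two-sided critical value
    margin = z * stdev(sample) / sqrt(len(sample))
    m = mean(sample)
    return m - margin, m + margin

# Invented shaft-diameter measurements (mm)
sample = [9.8, 10.2, 10.0, 9.9, 10.1]
lo, hi = confidence_interval(sample)
print(f"95% CI: ({lo:.3f}, {hi:.3f})")
```

An interval like this is the quantitative backbone of the "rigorous, data-based decisions" the competency describes: a nominal target falling inside it is consistent with the sample at the chosen confidence level.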

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Thiruvananthapuram, Kerala

On-site

As a Google Cloud Engineer at our company, you will play a crucial role in designing, building, deploying, and maintaining our cloud infrastructure and applications on Google Cloud Platform (GCP). Your collaboration with development, operations, and security teams will ensure that our cloud environment is scalable, secure, highly available, and cost-optimized. If you are enthusiastic about cloud-native technologies, automation, and overcoming intricate infrastructure challenges, we welcome you to apply.

Your responsibilities will include:
- Designing, implementing, and managing robust, scalable, and secure cloud infrastructure on GCP utilizing Infrastructure as Code (IaC) tools like Terraform.
- Deploying, configuring, and managing core GCP services such as Compute Engine, Kubernetes Engine (GKE), Cloud SQL, Cloud Storage, Cloud Functions, BigQuery, Pub/Sub, and networking components.
- Developing and maintaining CI/CD pipelines for automated deployment and release management using various tools.
- Implementing and enforcing security best practices within the GCP environment, including IAM, network security, data encryption, and compliance adherence.
- Monitoring cloud infrastructure and application performance, identifying bottlenecks, and implementing optimization solutions.
- Troubleshooting and resolving complex infrastructure and application issues in production and non-production environments.
- Collaborating with development teams to ensure cloud-native deployment, scalability, and resilience of applications.
- Participating in on-call rotations for critical incident response and timely resolution of production issues.
- Creating and maintaining comprehensive documentation for cloud architecture, configurations, and operational procedures.
- Keeping up-to-date with new GCP services, features, and industry best practices to propose and implement improvements.
- Contributing to cost optimization efforts by identifying and implementing efficiencies in cloud resource utilization.

We require you to have:
- A Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
- 6+ years of experience with C#, .NET Core, .NET Framework, MVC, Web API, Entity Framework, and SQL Server.
- 3+ years of experience with cloud platforms, preferably GCP, including designing and deploying cloud-native applications.
- 3+ years of experience with source code management, CI/CD pipelines, and Infrastructure as Code.
- Strong experience with JavaScript and a modern JavaScript framework, with VueJS preferred.
- Proven leadership and mentoring skills with development teams.
- Strong understanding of microservices architecture and serverless computing.
- Experience with relational databases like SQL Server and PostgreSQL.
- Excellent problem-solving, analytical, and communication skills, along with Agile/Scrum environment experience.

What can make you stand out:
- GCP Cloud Certification.
- UI development experience with HTML, JavaScript, Angular, and Bootstrap.
- Agile environment experience with Scrum, XP.
- Relational database experience with SQL Server, PostgreSQL.
- Proficiency in Atlassian tools like JIRA, Confluence, and GitHub.
- Working knowledge of Python and exceptional problem-solving and analytical abilities, along with strong teamwork skills.
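At its core, the Infrastructure-as-Code workflow described above (Terraform on GCP) is a reconciliation of desired state against actual state. The sketch below shows that diff step in plain Python; the resource names and specs are invented, and real IaC tools add planning, dependency ordering, and state locking on top of this idea.

```python
def reconcile(desired: dict, actual: dict) -> list:
    """Compute the create/update/delete actions needed to move
    actual infrastructure state toward the desired state."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(f"create {name}")
        elif actual[name] != spec:
            actions.append(f"update {name}")
    for name in actual:
        if name not in desired:
            actions.append(f"delete {name}")
    return actions

# Invented example: one missing resource, one drifted, one orphaned
desired = {"bucket": {"location": "EU"}, "vm": {"machine_type": "e2-small"}}
actual = {"vm": {"machine_type": "e2-medium"}, "old-disk": {"size_gb": 100}}
print(reconcile(desired, actual))
```

This is the same mental model behind `terraform plan`: the output is a proposed action list, applied only after review.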

Posted 1 week ago

Apply

9.0 - 13.0 years

0 Lacs

Karnataka

On-site

The ideal candidate for this role should possess deep technical knowledge and understanding of L2/L3 networking protocols and technologies, including VPC, STP, HSRP, OSPF, EIGRP, and BGP, along with strong hands-on experience with the Nexus 2K/5K/7K/9K platforms. The candidate should also have knowledge of overlay technologies such as BGP EVPN, OTV, and LISP. It is essential to understand Cisco SDN-based technologies like Cisco ACI, Nexus, the Cisco Programmable Network Portfolio (standalone VXLAN fabric), and DCNM. Overall, the successful candidate will need to demonstrate expertise across these networking protocols, technologies, and platforms to excel in this role.
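The L3 protocols listed above (OSPF, EIGRP, BGP) all feed a forwarding table that is consulted by longest-prefix match. As a minimal sketch of that lookup, the snippet below uses Python's standard-library `ipaddress` module; the route table entries and next-hop names are invented for illustration.

```python
import ipaddress

def best_route(table, dst):
    """Longest-prefix-match lookup: among routes covering dst,
    prefer the most specific prefix, as an L3 forwarding table does."""
    addr = ipaddress.ip_address(dst)
    matches = [(ipaddress.ip_network(prefix), next_hop)
               for prefix, next_hop in table.items()
               if addr in ipaddress.ip_network(prefix)]
    if not matches:
        return None
    return max(matches, key=lambda m: m[0].prefixlen)[1]

# Invented routes: default, aggregate, and a specific /24
routes = {"0.0.0.0/0": "edge-fw", "10.0.0.0/8": "core-sw", "10.1.2.0/24": "leaf-9k"}
print(best_route(routes, "10.1.2.42"))  # the most specific /24 wins
```

Real switches implement this in hardware (TCAM/FIB), but the selection rule is the same one sketched here.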

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

As a Network Routing & Switching professional at our organization, you will be responsible for managing and maintaining network infrastructure. To excel in this role, you should hold a BE/Diploma in E&C with 3-5 years of relevant experience. Your primary duties will include working with the Cisco Nexus 9K series, configuring the BGP protocol, implementing VPC, and managing STP switching. Previous experience in the BFSI sector would be an added advantage. This position is based in either Bangalore or Mumbai, providing you with the opportunity to contribute to our network operations in a dynamic environment.

Posted 1 week ago

Apply

7.0 - 10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Work Location: Hyderabad

What Gramener offers you
Gramener will offer you an inviting workplace, talented colleagues from diverse backgrounds, a career path, and steady growth prospects with great scope to innovate. Our goal is to create an ecosystem of easily configurable data applications focused on storytelling for public and private use.

Cloud Lead – Analytics & Data Products
We're looking for a Cloud Architect/Lead to design, build, and manage scalable AWS infrastructure that powers our analytics and data product initiatives. This role focuses on automating infrastructure provisioning, application/API hosting, and enabling data and GenAI workloads through a modern, secure cloud environment.

Roles and Responsibilities
- Design and provision AWS infrastructure using Terraform or AWS CloudFormation to support evolving data product needs.
- Develop and manage CI/CD pipelines using Jenkins, AWS CodePipeline, CodeBuild, or GitHub Actions.
- Deploy and host internal tools, APIs, and applications using ECS, EKS, Lambda, API Gateway, and ELB.
- Provision and support analytics and data platforms using S3, Glue, Redshift, Athena, Lake Formation, and orchestration tools like Step Functions or Apache Airflow (MWAA).
- Implement cloud security, networking, and compliance using IAM, VPC, KMS, CloudWatch, CloudTrail, and AWS Config.
- Collaborate with data engineers, ML engineers, and analytics teams to align infrastructure with application and data product requirements.
- Support GenAI infrastructure, including Amazon Bedrock, SageMaker, or integrations with APIs like OpenAI.

Skills And Qualifications
- 7-10 years of experience in cloud engineering, DevOps, or cloud architecture roles.
- Hands-on expertise with the AWS ecosystem and tools listed above.
- Proficiency in scripting (e.g., Python, Bash) and infrastructure automation.
- Experience deploying containerized workloads using Docker, ECS, EKS, or Fargate.
- Familiarity with data engineering and GenAI workflows is a plus.
- AWS certifications (e.g., Solutions Architect, DevOps Engineer) are preferred.

About Us
We help consult and deliver solutions to organizations where data is at the core of decision making. We undertake strategic data consulting for organizations, laying out the roadmap for data-driven decision making in order to equip organizations to convert data into a strategic differentiator. Through a host of product and service offerings we analyse and visualize large amounts of data. To know more about us, visit the Gramener Website and Gramener Blog.

Apply for this Role
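As one small, concrete slice of the IAM work in the responsibilities above, the sketch below assembles a read-only S3 policy document in Python. The bucket name and helper function are invented for illustration, and any real policy should be validated against the AWS IAM reference before use; the `s3:GetObject`/`s3:ListBucket` actions and the `2012-10-17` policy version are standard AWS values.

```python
import json

def read_only_s3_policy(bucket: str) -> str:
    """Build a least-privilege IAM policy allowing read-only access to
    one S3 bucket (bucket ARN for listing, object ARN for reads)."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
        }],
    }
    return json.dumps(policy, indent=2)

print(read_only_s3_policy("analytics-data"))
```

Generating policy documents from code like this makes it easy to keep them under version control alongside Terraform or CloudFormation templates.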

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

The client is a global leader in delivering cutting-edge inflight entertainment and connectivity (IFEC) solutions. As a developer in this role, you will be responsible for building user interfaces using Flutter, React.js, or similar frontend frameworks. You will also develop backend services and APIs using Python, ensuring smooth data flow between the frontend and backend by working with REST APIs. Additionally, you will utilize the Linux terminal and bash scripting for basic automation tasks, manage code using Git, and set up CI/CD pipelines using tools like GitLab CI/CD. Deployment and management of services on AWS (CloudFormation, Lambda, API Gateway, ECS, VPC, etc.) will be part of your responsibilities. It is essential to write clean, testable, and well-documented code while collaborating with other developers, designers, and product teams.

Requirements:
- Minimum 3 years of frontend software development experience
- Proficiency in GUI development using Flutter or other frontend stacks (e.g., React.js)
- 3+ years of Python development experience
- Experience with Python for backend and API servers
- Proficiency in the Linux terminal and bash scripting
- Familiarity with GitLab CI/CD or other CI/CD tools
- AWS experience including CloudFormation, API Gateway, ECS, Lambda, VPC
- Bonus: data science skills with experience in the pandas library
- Bonus: experience with the development of recommendation systems and LLM-based applications

If you find this opportunity intriguing and aligned with your expertise, please share your updated CV and relevant details with pearlin.hannah@antal.com.
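The frontend-to-backend data flow over REST that this role describes can be sketched without any framework: a dispatcher maps a method and path to a JSON response. The movie catalog and route names below are invented for illustration; a production backend would use a framework's router, but the request/response contract is the same shape.

```python
import json

# In-memory stand-in for a backend data store (invented sample data)
MOVIES = {"1": {"title": "Arrival"}, "2": {"title": "Dune"}}

def handle(method: str, path: str):
    """Dispatch a REST-style request to an HTTP status and JSON body."""
    parts = path.strip("/").split("/")
    if method == "GET" and parts == ["movies"]:
        # Collection endpoint: list every movie
        return 200, json.dumps(list(MOVIES.values()))
    if method == "GET" and len(parts) == 2 and parts[0] == "movies":
        # Item endpoint: look one movie up by id
        movie = MOVIES.get(parts[1])
        if movie is None:
            return 404, json.dumps({"error": "not found"})
        return 200, json.dumps(movie)
    return 405, json.dumps({"error": "unsupported"})

print(handle("GET", "/movies/2"))
```

Keeping the handler a pure function of (method, path) is what makes it "clean, testable" in the sense the listing asks for: it can be unit-tested without starting a server.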

Posted 1 week ago

Apply

2.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Description

Key Responsibilities:
- Investigates product problems, understands causal mechanisms, recommends appropriate action, owns problem resolution and documents results with guidance from more experienced team members.
- Main focus will include working in business processes of Product Preceding Technology (PPT), Value Package Introduction (VPI) or Current Product Support (CPS) and executing technical processes such as Engineering Standard Work (ESW), iDFMEA, Failure Incident Review Group (FIRG) while using tools such as 7-step problem solving, design review checklists and other specialized tools required to support the processes and enable high-quality decision making.
- Obtains input from stakeholders such as technical managers, project leaders, other product and manufacturing engineers and supplier partners to deliver information and recommendations that lead to quality product decisions.
- Applies academic knowledge and existing experience to take action and make decisions that progress projects forward without sacrificing project quality expectations. Examples of these decisions include day-to-day project details, analysis or test work instruction details, and coordination across discipline areas necessary to make quality progress.
- Owns problem resolution for moderately complex components, products, systems, subsystems or services, with technical complexity and ambiguity increasing as experience is gained in the role.
- Provides independent execution of established work processes and systems, while still developing technology or product knowledge; engages with the improvement of systems and processes.
- Involves minimal direct management of people, but could involve the coordination and direction of work amongst technicians and/or temporary student employees.
- Contributes effectively toward team goals, exhibits influence within a work group and continues to develop proficiency in the competency areas critical to success in the role.
Responsibilities

Competencies:
- Applies Principles of Statistical Methods - Analyzes technical data using descriptive statistics, probability distributions, graphical analysis, and statistical inference (population and sample, confidence intervals, and hypothesis testing); models relationships between response and independent variables using analysis of variance, regression, and design of experiments to make rigorous, data-based decisions.
- Cross-Functional Design Integration - Translates the value package requirements that include the voices of many stakeholders into virtual designs, and communicates the capability of the design through an approved cross-functional design review.
- Design and Application of Open/Closed Loop Controls - Specifies software features that interact with mechanical, hydraulic, chemical and electronic systems to deliver desired system states; specifies control system architectures which include appropriate measurements, correct actuation, and algorithms for Cummins' products; configures and/or understands open/closed loop feedback controls features and the system interactions between hardware and software in Cummins' products.
- Mechanical Design of Mechanical Systems - Acquires and applies an in-depth understanding of mechanical systems through working knowledge that guides a designer's ability to create innovative and sound design concepts to meet Cummins and customer expectations; designs for requirements of all lifecycle stages by considering the customer requirements in different operating environments to ensure a robust system.
- Mechanical Design Specification - Creates complete specifications in the form of solid models, configured engineering bill of materials and detailed drawings that cross-functionally communicate the information required to manufacture and inspect a product per its design intent; considers national, international, industry, and Cummins' standards that accurately and concisely define the part specification.
- Product Configuration and Change Management - Establishes a baseline of identified product artifacts to be placed under configuration management; releases, tracks, controls and communicates changes from concept to obsolescence, often through work requests; establishes and maintains the integrity of the product artifact baselines.
- Product Development Execution, Monitoring and Control - Plans, schedules, coordinates and executes the activities involved in developing a product to a respectively aligned hierarchy of requirements and technical profiles; monitors and communicates across functional boundaries to meet project resource and quality expectations; ensures product capability meets or exceeds expectations and takes mitigating actions when project risks are higher than expected; understands the full product life cycle process and stakeholders.
- Product Failure Mode Avoidance - Mitigates potential product failure modes by identifying interfaces, functions, functional requirements, interactions, control factors, noise factors, and prioritized potential failure modes and potential failure causes for the system of interest to effectively and efficiently improve the reliability of Cummins' products.
- Product Function Modeling, Simulation and Analysis - Impacts product design decisions through the utilization and/or interpretation of computational tools and methods that predict the capability of a product's function relative to its system, sub-system and/or component level requirements.
- Product Interface Management and Integration - Identifies and analyzes the interfaces and interactions across system boundaries by specifying the requirements and limits to ensure that the product meets requirements; controls the interactions across the system element boundaries by making sure that they remain within specified limits; integrates system elements by creating an integration plan, including identification of method and timing for each activity, to make it easier to find, isolate, diagnose, and correct issues.
- Product Problem Solving - Solves product problems using a process that protects the customer; determines the assignable cause; implements robust, data-based solutions; and identifies the systemic root causes and recommended actions to prevent problem reoccurrence.
- Product Verification and Validation Management - Develops product systems validation plans from a variety of inputs to identify failure modes, while managing product risk and relative priority; negotiates product requirements against capability to guide project scope; evaluates analytical, simulation and physical test results to verify product capability and validate requirements; assesses legacy versus proposed system solution capabilities and produces recommendations with technical documentation to support product decisions.
- System Requirements Engineering - Uses appropriate methods and tools to translate stakeholder needs into verifiable requirements to which designs are developed; establishes acceptance criteria for the system of interest through analysis, allocation and negotiation; tracks the status of requirements throughout the system lifecycle; assesses the impact of changes to system requirements on project scope, schedule, and resources; creates and maintains information linkages to related artifacts.
- Systems Thinking - Defines the system of interest by drawing the boundaries, identifying its context within its environment and its interfaces, and recognizing that it has a lifecycle, to aid in planning the problem statement, scope and deliverables; analyzes linkages and interactions between elements that comprise the system of interest by using appropriate methods, models and integration of outcomes to understand the system, predict its behavior and devise modifications to it in order to produce the desired effects.
- Technical Documentation - Documents information based on knowledge gained as part of technical function activities; communicates to stakeholders with the goal of enabling improved technical productivity and effective knowledge transfer to others who were not originally part of the initial learning.
- Collaborates - Building partnerships and working collaboratively with others to meet shared objectives.
- Communicates effectively - Developing and delivering multi-mode communications that convey a clear understanding of the unique needs of different audiences.
- Decision quality - Making good and timely decisions that keep the organization moving forward.
- Drives results - Consistently achieving results, even under tough circumstances.
- Self-development - Actively seeking new ways to grow and be challenged using both formal and informal development channels.
- Values differences - Recognizing the value that different perspectives and cultures bring to an organization.

Education, Licenses, Certifications
College, university, or equivalent Bachelor's degree in Engineering or an appropriate STEM field is required. A post-graduate (Master's) degree relevant to this discipline area may be required for select roles. This position may require licensing for compliance with export controls or sanctions regulations.

Experience
Entry level/Early career professional.
Preferred candidates would have relevant experience working in either a temporary student employment environment (intern, co-op, or other extracurricular team activities) or as an early career professional in a relevant technical discipline area. Knowledge of MS Office tools is also preferred.
Qualifications / Job-Specific Requirements:
Diploma or bachelor's degree in Electrical or Electronics Engineering. Must have experience working with electrical rotating machines in electromagnetic design and development. Knowledge of IEC/IS standards is essential. Preferred: familiarity with high-voltage electrical products. Experience working with cross-functional teams is required. 1-2 years of working experience in engineering. Independently manage design/VPC projects.
Job Engineering Organization Cummins Inc. Role Category Hybrid Job Type Exempt - Experienced ReqID 2417484 Relocation Package Yes

Posted 1 week ago

Apply

5.0 - 7.0 years

15 - 20 Lacs

Bengaluru

Work from Office

Role & Responsibilities
We are looking for an experienced Cloud Engineer with a strong foundation in cloud infrastructure, DevOps, monitoring, and cost optimization. The ideal candidate will be responsible for designing scalable architectures, implementing CI/CD pipelines, and managing secure and efficient cloud environments using AWS, GCP, or Azure.
Key Responsibilities:
- Design and deploy scalable, secure, and cost-optimized infrastructure across cloud platforms (AWS, GCP, or Azure)
- Implement and maintain CI/CD pipelines using tools like Jenkins, GitLab CI/CD, or GitHub Actions
- Set up infrastructure monitoring, alerting, and logging systems (e.g., CloudWatch, Prometheus, Grafana)
- Collaborate with development and architecture teams to implement cloud-native solutions
- Manage infrastructure security, IAM policies, backups, and disaster recovery strategies
- Drive cloud cost control initiatives and resource optimization
- Troubleshoot production and staging issues related to infrastructure and deployments
Requirements
Must-Have Skills:
- 5-7 years of experience working with cloud platforms (AWS, GCP, or Azure)
- Strong hands-on experience in infrastructure provisioning and automation
- Expertise in DevOps tools and practices, especially CI/CD pipelines
- Good understanding of network configurations, VPCs, firewalls, IAM, and security best practices
- Experience with monitoring and log aggregation tools
- Solid knowledge of Linux system administration
- Familiarity with Git and version control workflows
Good to Have:
- Experience with Infrastructure as Code tools (Terraform, CloudFormation, Pulumi)
- Working knowledge of Kubernetes or other container orchestration platforms (EKS, GKE, AKS)
- Exposure to scripting languages like Python, Bash, or PowerShell
- Familiarity with serverless architecture and event-driven designs
- Awareness of cloud compliance and governance frameworks
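As an illustration of the VPC and network-configuration skills this role asks for (not part of the posting; the CIDR block and availability-zone names below are made up), per-AZ subnet planning can be sketched with Python's standard `ipaddress` module:

```python
import ipaddress

def plan_subnets(vpc_cidr: str, new_prefix: int, azs: list[str]) -> dict[str, str]:
    """Carve one subnet per availability zone out of a VPC CIDR block."""
    vpc = ipaddress.ip_network(vpc_cidr)
    subnets = vpc.subnets(new_prefix=new_prefix)  # sequential, non-overlapping
    return {az: str(next(subnets)) for az in azs}

# Each AZ gets its own /20 (4096 addresses) carved from the /16.
plan = plan_subnets("10.0.0.0/16", 20, ["ap-south-1a", "ap-south-1b", "ap-south-1c"])
print(plan)
```

The same arithmetic underlies Terraform's `cidrsubnet` function; doing it explicitly makes overlap errors impossible to miss during design reviews.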

Posted 1 week ago

Apply

10.0 years

0 Lacs

Thiruvananthapuram, Kerala, India

On-site

Candidates ready to join immediately can share their details via email for quick processing. 📌 CCTC | ECTC | Notice Period | Location Preference nitin.patil@ust.com Act fast for immediate attention! ⏳📩
Roles and Responsibilities:
Architecture & Infrastructure Design - Architect scalable, resilient, and secure AI/ML infrastructure on AWS using services like EC2, SageMaker, Bedrock, VPC, RDS, DynamoDB, CloudWatch. Develop Infrastructure as Code (IaC) using Terraform, and automate deployments with CI/CD pipelines. Optimize cost and performance of cloud resources used for AI workloads.
AI Project Leadership - Translate business objectives into actionable AI strategies and solutions. Oversee the entire AI lifecycle, from data ingestion, model training, and evaluation to deployment and monitoring. Drive roadmap planning, delivery timelines, and project success metrics.
Model Development & Deployment - Lead selection and development of AI/ML models, particularly for NLP, GenAI, and AIOps use cases. Implement frameworks for bias detection, explainability, and responsible AI. Enhance model performance through tuning and efficient resource utilization.
Security & Compliance - Ensure data privacy, security best practices, and compliance with IAM policies, encryption standards, and regulatory frameworks. Perform regular audits and vulnerability assessments to ensure system integrity.
Team Leadership & Collaboration - Lead and mentor a team of cloud engineers, ML practitioners, software developers, and data analysts. Promote cross-functional collaboration with business and technical stakeholders. Conduct technical reviews and ensure delivery of production-grade solutions.
Monitoring & Maintenance - Establish robust model monitoring, alerting, and feedback loops to detect drift and maintain model reliability. Ensure ongoing optimization of infrastructure and ML pipelines.
Must-Have Skills:
10+ years of experience in IT with 4+ years in AI/ML leadership roles.
Strong hands-on experience in AWS services: EC2, SageMaker, Bedrock, RDS, VPC, DynamoDB, CloudWatch. Expertise in Python for ML development and automation. Solid understanding of Terraform, Docker, Git, and CI/CD pipelines. Proven track record in delivering AI/ML projects into production environments. Deep understanding of MLOps, model versioning, monitoring, and retraining pipelines. Experience in implementing Responsible AI practices, including fairness, explainability, and bias mitigation. Knowledge of cloud security best practices and IAM role configuration. Excellent leadership, communication, and stakeholder management skills.
Good-to-Have Skills:
AWS Certifications such as AWS Certified Machine Learning – Specialty or AWS Certified Solutions Architect. Familiarity with data privacy laws and frameworks (GDPR, HIPAA). Experience with AI governance and ethical AI frameworks. Expertise in cost optimization and performance tuning for AI on the cloud. Exposure to LangChain, LLMs, Kubeflow, or GCP-based AI services.
Skills: Enterprise Architecture, Enterprise Architect, AWS, Python
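To illustrate the model-drift monitoring this role calls for (not part of the posting), here is a framework-free sketch of the Population Stability Index, one common drift metric; the 0.2 "significant drift" threshold is a rule of thumb, not a standard:

```python
import math

def population_stability_index(expected: list[float], actual: list[float],
                               bins: int = 10) -> float:
    """Compare a baseline score distribution against live scores.
    Values above ~0.2 are commonly read as significant drift."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0  # guard against a degenerate baseline

    def hist(xs: list[float]) -> list[float]:
        counts = [0] * bins
        for x in xs:
            # Clamp out-of-range live values into the edge bins.
            counts[min(max(int((x - lo) / width), 0), bins - 1)] += 1
        # Laplace smoothing keeps empty bins from blowing up the log ratio.
        return [(c + 1) / (len(xs) + bins) for c in counts]

    e, a = hist(expected), hist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

In production the same computation would typically run on a schedule against SageMaker inference logs, with the PSI value emitted as a CloudWatch metric that an alarm watches.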

Posted 1 week ago

Apply

10.0 - 20.0 years

15 - 21 Lacs

Bengaluru

Work from Office

Design and develop cloud-native backend services using AWS serverless technologies and GoLang. Build and maintain RESTful APIs and event-driven components using Lambda, API Gateway, S3, SQS, and other AWS services. Implement data persistence and integration using Amazon RDS (PostgreSQL) and DynamoDB where appropriate. Define and automate Infrastructure as Code (IaC) using AWS CDK / CloudFormation. Design and enforce comprehensive infrastructure and API testing strategies, including unit, integration, and performance testing. Ensure solutions follow the AWS Well-Architected Framework with a focus on performance, security, cost, and reliability. Build robust CI/CD pipelines and automate environment provisioning and deployment.
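The posting targets GoLang, but the Lambda/SQS event-driven pattern it describes is language-agnostic. A minimal sketch in Python (not part of the posting; `process` is a hypothetical stand-in for domain logic, while the record and response shapes follow AWS's SQS partial-batch-response convention):

```python
import json

def process(body: dict) -> None:
    """Hypothetical domain logic; raises on an invalid message."""
    if "orderId" not in body:
        raise ValueError("missing orderId")

def handler(event: dict, context=None) -> dict:
    """Lambda-style handler for an SQS batch: report per-message failures
    so the event source mapping only retries the messages that failed."""
    failures = []
    for record in event.get("Records", []):
        try:
            process(json.loads(record["body"]))
        except Exception:
            failures.append({"itemIdentifier": record["messageId"]})
    return {"batchItemFailures": failures}
```

Returning `batchItemFailures` (with `ReportBatchItemFailures` enabled on the event source) avoids re-driving an entire batch because one message was bad.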

Posted 1 week ago

Apply

10.0 - 12.0 years

0 Lacs

Mumbai Metropolitan Region

Remote

Job Title: Senior Network Architect
Job Grade: Senior Manager 1
Function: Information Technology
Sub-function: Infra IT
Manager’s Job Label: Network Architect - Lead
Skip Level Manager’s Label: Global Head – Infra Operation
Function Head Title: GM
Location: Mumbai
No. of Direct Reports (if any): NA
Business Unit / Areas of Responsibility
At Sun Pharma, we commit to helping you “Create your own sunshine” by fostering an environment where you grow at every step, take charge of your journey and thrive in a supportive community. Are You Ready to Create Your Own Sunshine? As you enter the Sun Pharma world, you’ll find yourself becoming ‘Better every day’ through continuous progress. Exhibit self-drive as you ‘Take charge’ and lead with confidence. Additionally, demonstrate a collaborative spirit, knowing that we ‘Thrive together’ and support each other’s journeys.
Job Summary
We are looking for a dynamic and forward-thinking Senior Network Architect to lead the strategy, design, and implementation of our enterprise-wide IT and OT network infrastructure. This role requires a perfect blend of technical expertise, leadership, and project delivery skills, with a focus on cloud connectivity, network security, segmentation, and emerging technologies (SD-WAN, 5G/6G). You will be responsible for designing scalable, secure, and high-performance network architectures that support business growth, compliance, and digital transformation. This role demands a strategic thinker with a deep understanding of networking technologies, protocols, and best practices to support our organization's evolving needs.
Responsibilities
Architecture, Design & Delivery
Lead the end-to-end design of enterprise network architecture, including cloud, data centre, campus, and OT, encompassing LAN, WAN, WLAN, SD-WAN, and cloud networking that aligns with business objectives.
Develop High-Level Design (HLD) and Low-Level Design (LLD) documents along with Bill of Materials (BOM) and Bill of Quantities (BOQ).
Evaluate and integrate emerging technologies to enhance network performance and security.
Design and implement macro and micro segmentation, next-generation firewall architectures, and secure SD-WAN topologies.
Architect cloud networking and security solutions (AWS, Azure, GCP) using Transit Gateway, VPC peering, Azure Firewall, etc.
Project & Program Management
Lead the technical delivery of complex networking projects including cloud integration, OT segmentation, secure remote access, and SD-WAN rollouts.
Own the project lifecycle from requirement gathering and solutioning to handover and documentation.
Define capacity planning models to forecast bandwidth, throughput, and resource utilization.
Oversee the deployment of network solutions, ensuring minimal disruption to business operations.
Ensure compliance with industry standards and organizational policies during implementation.
Technology Evaluation, POCs, RFPs & RFIs
Evaluate and recommend new technologies, platforms, and OEMs through competitive assessments, RFI/RFP, and Proof of Concept (POC).
Drive strategic network transformation initiatives by selecting the most appropriate solutions based on TCO, scalability, and regulatory needs.
Design and enforce network security protocols to protect organizational data and resources.
Ensure compliance with relevant regulations and standards (e.g., ISO 27001, NIST).
Leadership & Vendor Management
Lead and mentor a cross-functional team of engineers, architects, and project managers.
Manage technical engagements with vendors and partners, ensuring alignment with architecture standards and service levels.
Collaborate with cybersecurity, infrastructure, operations, and compliance teams to maintain enterprise governance.
Manage and monitor vendor-delivered services against agreed SLA parameters.
Security, Cloud & OT Integration
Architect secure IT and OT connectivity using Zero Trust models, EDR/XDR, NAC, and network segmentation.
Design resilient OT networks that meet ISA/IEC 62443, NIST, and GxP compliance standards.
Collaborate with the security team to address vulnerabilities and implement mitigation strategies.
Stakeholder Communication & Presentation
Present technical solutions, risks, roadmaps, and architecture proposals to leadership, including the CIO, CISO, and steering committees.
Translate business goals into network design and infrastructure strategy.
Maintain detailed documentation of network configurations, processes, and procedures.
Provide training and mentorship to junior network staff and other stakeholders.
Travel Estimate
Job Scope
Internal Interactions (within the organization): IT functional teams across the globe.
External Interactions (outside the organization): Vendors and OEMs.
Geographical Scope: Global.
Financial Accountability (cost/revenue with exclusive authority):
Job Requirements
Educational Qualification: Bachelor's/Master’s in Computer Science, Engineering, or IT
Specific Certification: CCNP/CCIE, PCNSE, AWS/Azure Network Specialty, CISSP, TOGAF, PMP/ITIL v4
Experience: 10-12 years’ experience
Skills (Functional & Behavioural):
Networking: BGP, OSPF, VXLAN, SD-WAN, MPLS, 5G/6G, WAN Optimization
Cloud Networking: AWS Transit Gateway, Azure VNet, ExpressRoute, Direct Connect, NSG/UDR
Security: NGFWs (Palo Alto, Fortinet, Cisco), ZTNA, CASB, Zscaler/Netskope, EDR/XDR (CrowdStrike, Defender), NAC
Segmentation: Micro and macro segmentation, VRFs, SGTs, VLANs
OT Networking: Industrial firewalling, SCADA/PLC segregation, ICS/OT security policies
Your Success Matters to Us
At Sun Pharma, your success and well-being are our top priorities! We provide robust benefits and opportunities to foster personal and professional growth. Join us at Sun Pharma, where every day is an opportunity to grow, collaborate, and make a lasting impact. Let’s create a brighter future together!
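To illustrate the macro/micro segmentation theme in this role (not part of the posting; the zone CIDRs and policy are entirely hypothetical), an ordered, default-deny rule table can be sketched with Python's `ipaddress` module:

```python
import ipaddress

# Hypothetical first-match policy: (source zone, destination zone, allowed).
RULES = [
    ("10.10.0.0/16", "10.20.0.0/16", False),  # OT zone must not reach the IT zone directly
    ("10.30.0.0/24", "10.20.0.0/16", True),   # hardened jump-host subnet may reach IT
]

def is_allowed(src: str, dst: str) -> bool:
    """Evaluate a flow against the segmentation policy, first match wins."""
    s, d = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    for src_net, dst_net, allowed in RULES:
        if s in ipaddress.ip_network(src_net) and d in ipaddress.ip_network(dst_net):
            return allowed
    return False  # default deny, in the Zero Trust spirit
```

Real enforcement lives in firewalls, NAC, and SGT/VRF configuration; a table like this is still useful as an executable statement of intent that audits and change reviews can test against.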
Disclaimer: The preceding job description has been designed to indicate the general nature and level of work performed by employees within this classification. It is not designed to contain or be interpreted as a comprehensive inventory of all duties, responsibilities, and qualifications required of employees as assigned to this job. Nothing herein shall preclude the employer from changing these duties from time to time and assigning comparable duties or other duties commensurate with the experience and background of the incumbent(s).

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Eviden, part of the Atos Group, with an annual revenue of circa €5 billion, is a global leader in data-driven, trusted and sustainable digital transformation. As a next-generation digital business with worldwide leading positions in digital, cloud, data, advanced computing and security, it brings deep expertise for all industries in more than 47 countries. By uniting unique high-end technologies across the full digital continuum with 47,000 world-class talents, Eviden expands the possibilities of data and technology, now and for generations to come.
Role Overview
The Senior Tech Lead - AWS Data Engineering leads the design, development and optimization of data solutions on the AWS platform. The jobholder has a strong background in data engineering, cloud architecture, and team leadership, with a proven ability to deliver scalable and secure data systems.
Responsibilities
Lead the design and implementation of AWS-based data architectures and pipelines. Architect and optimize data solutions using AWS services such as S3, Redshift, Glue, EMR, and Lambda. Provide technical leadership and mentorship to a team of data engineers. Collaborate with stakeholders to define project requirements and ensure alignment with business goals. Ensure best practices in data security, governance, and compliance. Troubleshoot and resolve complex technical issues in AWS data environments. Stay updated on the latest AWS technologies and industry trends.
Key Technical Skills & Responsibilities
Overall 10+ years of experience in IT, with a minimum of 5-7 years in design and development of cloud data platforms using AWS services. Must have experience in design and development of data lake / data warehouse / data analytics solutions using AWS services like S3, Lake Formation, Glue, Athena, EMR, Lambda, Redshift. Must be aware of AWS access control and data security features like VPC, IAM, Security Groups, KMS, etc. Must be good with Python and PySpark for data pipeline building.
Must have data modeling experience, including S3 data organization. Must have an understanding of Hadoop components, NoSQL databases, graph databases and time series databases, and the AWS services available for those technologies. Must have experience working with structured, semi-structured and unstructured data. Must have experience with streaming data collection and processing; Kafka experience is preferred. Experience migrating data warehouse / big data applications to AWS is preferred. Must be able to use GenAI services (like Amazon Q) for productivity gains.
Eligibility Criteria
Bachelor’s degree in Computer Science, Data Engineering, or a related field. Extensive experience with AWS data services and tools. AWS certification (e.g., AWS Certified Data Analytics - Specialty). Experience with machine learning and AI integration in AWS environments. Strong understanding of data modeling, ETL/ELT processes, and cloud integration. Proven leadership experience in managing technical teams. Excellent problem-solving and communication skills.
Our Offering
Global cutting-edge IT projects that shape the future of digital and have a positive impact on the environment. Wellbeing programs & work-life balance - integration and passion-sharing events. Attractive salary and company initiative benefits. Courses and conferences. Hybrid work culture. Let’s grow together.
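The "S3 data organization" requirement above usually means Hive-style partition layouts (`year=/month=/day=`) that Glue crawlers can discover and Athena can prune at query time. A small sketch (not part of the posting; the bucket prefix, table name, and file name are hypothetical):

```python
from datetime import date

def partitioned_key(prefix: str, table: str, d: date, filename: str) -> str:
    """Build a Hive-style S3 object key so query engines can prune by date."""
    return (f"{prefix}/{table}/"
            f"year={d.year}/month={d.month:02d}/day={d.day:02d}/{filename}")

key = partitioned_key("raw", "orders", date(2024, 3, 7), "part-0001.parquet")
print(key)  # raw/orders/year=2024/month=03/day=07/part-0001.parquet
```

Zero-padding month and day keeps keys lexicographically sortable, which matters for both S3 listing and partition ordering in Athena.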

Posted 1 week ago

Apply

8.0 - 10.0 years

15 - 18 Lacs

Bengaluru

Remote

We are seeking a skilled DevOps Engineer with expertise in AWS, cloud infrastructure, and container technologies. You will enable software development teams to build, test, and deploy features efficiently and flexibly. Your drive to innovate and continuously improve will not only increase the capabilities of the delivery team but will also play a critical role in identifying technical solutions for our business. Strong stakeholder management and the ability to provide technical guidance to internal teams are essential for success in this role. You will be working for our Australia-based client. Established in 2002, they strive to modernize the movement of goods and provide supply chain participants the best on-the-go IT solutions and services. They support organizations across the globe, connecting people, goods & technology, and their mission is to deliver seamless, secure, real-time, data-fueled connections that power the logistics of delivery.
REQUIRED COMPETENCIES:
Experience managing and operating Amazon Web Services (AWS) components, including but not limited to IAM, ELB, VPC, API Gateway, EC2, S3, RDS, EKS, ECS, EFS, ElastiCache, or Azure equivalents. Experience using Linux and scripting languages (Bash, PowerShell or Python). Experience delivering infrastructure solutions as code (Terraform or CloudFormation). Proficiency in managing applications and environment configurations through Ansible or equivalent configuration management tools. Demonstrable experience in implementing containerization strategies using Kubernetes, AWS ECS or similar. Proficiency in creating, maintaining, and optimizing Continuous Integration/Continuous Delivery (CI/CD) pipelines leveraging tools such as Git, Jenkins, AWS CodePipeline & CodeBuild. Strong knowledge of IT security practices and networking. Experience or exposure to the following technologies: Jira, Confluence, Git, SonarQube, Azure AD, Datadog/New Relic.
Experience working with Developers, DevOps, and Engineering teams in a dynamic environment to promote and implement the DevOps program throughout the organisation.
DESIRED COMPETENCIES:
Experience with software development, programming (C#, Java, .NET, NodeJS, etc.) and microservices architecture.
QUALIFICATIONS:
Candidates must possess at least a Bachelor's/College degree in Computer Science, Information Technology, Engineering (Computer/Telecommunication), or equivalent experience. More than five years' experience in DevOps Engineering, underpinned by a solid comprehension of the fundamentals of computer science and software engineering principles.
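To illustrate the IAM and security-practice competencies listed above (not part of the posting), here is a toy least-privilege screen over an IAM policy document that has already been parsed from JSON. It only flags literal `"*"` grants and is no substitute for IAM Access Analyzer or policy simulation:

```python
def overly_broad_statements(policy: dict) -> list[dict]:
    """Flag Allow statements that grant Action "*" or Resource "*"."""
    flagged = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        # Both fields may be a single string or a list of strings.
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = [resources] if isinstance(resources, str) else resources
        if "*" in actions or "*" in resources:
            flagged.append(stmt)
    return flagged
```

A check like this fits naturally as a CI gate on Terraform or CloudFormation repositories, failing the pipeline before a wildcard grant ever reaches an account.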

Posted 1 week ago

Apply

5.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Job Title: Senior Data Engineer
Employment Type: Full-Time
Location: Ahmedabad, Onsite
Experience Required: 5+ Years
About Techiebutler
Techiebutler is looking for an experienced Data Engineer to develop and maintain scalable, secure data solutions. You will collaborate closely with data science, business analytics, and product development teams, deploying cutting-edge technologies and leveraging best-in-class third-party tools. You will also ensure compliance with security, privacy, and regulatory standards while aligning data solutions with industry best practices.
Tech Stack
Languages: SQL, Python
Pipeline Orchestration: Dagster (Legacy: Airflow)
Data Stores: Snowflake, ClickHouse
Platforms & Services: Docker, Kubernetes
PaaS: AWS (ECS/EKS, DMS, Kinesis, Glue, Athena, S3)
ETL: Fivetran, DBT
IaC: Terraform (with Terragrunt)
Key Responsibilities
Design, develop, and maintain robust ETL pipelines using SQL and Python. Orchestrate data pipelines using Dagster or Airflow. Collaborate with cross-functional teams to meet data requirements and enable self-service analytics. Ensure seamless data flow using stream, batch, and Change Data Capture (CDC) processes. Use DBT for data transformation and modeling to support business needs. Monitor, troubleshoot, and improve data quality and consistency. Ensure all data solutions adhere to security, privacy, and compliance standards.
Essential Experience
5+ years of experience as a Data Engineer. Strong proficiency in SQL. Hands-on experience with modern cloud data warehousing solutions (Snowflake, BigQuery, Redshift). Expertise in ETL/ELT processes, batch, and streaming data processing. Proven ability to troubleshoot data issues and propose effective solutions. Knowledge of AWS services (S3, DMS, Glue, Athena). Familiarity with DBT for data transformation and modeling.
Desired Experience
Experience with additional AWS services (EC2, ECS, EKS, VPC, IAM).
Knowledge of Infrastructure as Code (IaC) tools like Terraform and Terragrunt. Proficiency in Python for data engineering tasks. Experience with orchestration tools like Dagster, Airflow, or AWS Step Functions. Familiarity with pub-sub, queuing, and streaming frameworks (AWS Kinesis, Kafka, SQS, SNS). Experience with CI/CD pipelines and automation for data processes. Why Join Us? Opportunity to work on cutting-edge technologies and innovative data solutions. Be part of a collaborative team focused on delivering high-impact results. Competitive salary and growth opportunities. If you’re passionate about data engineering and want to take your career to the next level, apply now! We look forward to reviewing your application and potentially welcoming you to our team!
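The responsibilities above mention Change Data Capture (CDC). As an illustration only (not part of the posting; the event shape with `op`, `id`, and `row` fields is a simplified stand-in for what DMS or Debezium emit), the merge a CDC consumer performs can be sketched in memory:

```python
def apply_cdc(snapshot: dict, events: list[dict]) -> dict:
    """Apply a stream of CDC events to a keyed snapshot, mirroring what a
    warehouse MERGE statement does with a change table."""
    for ev in events:
        key = ev["id"]
        if ev["op"] == "delete":
            snapshot.pop(key, None)   # tolerate deletes for unseen keys
        else:
            snapshot[key] = ev["row"]  # insert and update both upsert the latest row image
    return snapshot
```

Order matters: events must be applied in commit order per key, which is why CDC pipelines typically partition their stream (e.g., a Kinesis or Kafka key) by primary key.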

Posted 1 week ago

Apply

5.0 - 10.0 years

20 - 25 Lacs

Pune

Hybrid

Greetings from Intelliswift - An LTTS Company
Role: Fullstack Developer
Work Location: Pune
Experience: 5+ years
Job Description in detail:
Job Summary
Role: Fullstack Developer
Experience: 5 to 8 Years
Job Location: Pune
As a Fullstack Developer specializing in generative AI and cloud technologies, you will design, build, and maintain end-to-end applications on AWS. You'll leverage services such as Bedrock, SageMaker, LangChain and Amplify to integrate AI/ML capabilities, architect scalable infrastructure, and deliver seamless front-end experiences using React. You'll partner with UX/UI designers, ML engineers, DevOps teams, and product stakeholders to take features from concept through production deployment.
Job Description:
5+ years of professional experience as a Fullstack Developer building scalable web applications. Proficiency in Python and/or JavaScript/TypeScript; strong command of modern frameworks (React, Node.js). Hands-on AWS expertise: Bedrock, SageMaker, Amplify, Lambda, API Gateway, DynamoDB/RDS, CloudWatch, IAM, VPC. Architect and develop full-stack solutions using React for the front end, Python/Node.js for the back end, and AWS Lambda/API Gateway or containers for serverless services. Integrate generative AI capabilities leveraging AWS Bedrock, LangChain retrieval-augmented pipelines, and custom prompt engineering to power intelligent assistants and data-driven insights. Design and manage AWS infrastructure using CDK/CloudFormation for VPCs, IAM policies, S3, DynamoDB/RDS, and ECS/EKS. Implement DevOps/MLOps workflows: establish CI/CD pipelines (CodePipeline, CodeBuild, Jenkins), containerization (Docker), automated testing, and rollout strategies. Develop interactive UIs in React: translate Figma/Sketch designs into responsive components, integrate with backend APIs, and harness AWS Amplify for accelerated feature delivery.
Solid understanding of AI/ML concepts, including prompt engineering, generative AI frameworks (LangChain), and model deployment patterns. Experience designing and consuming APIs: RESTful and GraphQL. DevOps/MLOps skills: CI/CD pipeline creation, containerization (Docker), orchestration (ECS/EKS), infrastructure as code. Cloud architecture know-how: security groups, network segmentation, high-availability patterns, cost optimization. Excellent problem-solving ability and strong communication skills to collaborate effectively across distributed teams. Share your updated profile at shakambnari.nayak@intelliswift.com with details.
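The retrieval-augmented pipelines and prompt engineering mentioned above boil down to one core step: rank candidate documents by embedding similarity and splice the best ones into the prompt. A toy sketch (not part of the posting; the two-dimensional vectors stand in for real embeddings, which a model such as one served via Bedrock would produce, typically alongside a vector store):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def build_prompt(question: str, question_vec: list[float],
                 docs: list[tuple[str, list[float]]], k: int = 2) -> str:
    """Rank (text, embedding) pairs against the question and keep the top k."""
    ranked = sorted(docs, key=lambda d: cosine(question_vec, d[1]), reverse=True)
    context = "\n".join(text for text, _ in ranked[:k])
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

Frameworks like LangChain wrap exactly this retrieve-then-template flow; keeping `k` small controls both token cost and the risk of irrelevant context diluting the answer.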

Posted 1 week ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Line of Service Advisory Industry/Sector Not Applicable Specialism SAP Management Level Senior Associate
Job Description & Summary
At PwC, our people in software and product innovation focus on developing cutting-edge software solutions and driving product innovation to meet the evolving needs of clients. These individuals combine technical experience with creative thinking to deliver innovative software products and solutions. Those in software engineering at PwC will focus on developing innovative software solutions to drive digital transformation and enhance business performance. In this field, you will use your knowledge to design, code, and test cutting-edge applications that revolutionise industries and deliver exceptional user experiences.
Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.
At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
Responsibilities:
· Design, implement, and manage scalable, reliable, and secure AWS environments.
· Automate deployment, monitoring, and maintenance tasks using infrastructure-as-code tools such as Terraform, CloudFormation, or similar.
· Collaborate with software development teams to ensure best practices and optimal deployment strategies are employed.
· Monitor system performance, troubleshoot issues, and optimize AWS services and resources.
· Implement and maintain CI/CD pipelines using tools like Jenkins, GitLab CI/CD, or AWS CodePipeline.
· Collaborate with cross-functional teams to define, design, and deliver new features and enhancements.
· Ensure security best practices and compliance requirements are followed within AWS environments.
· Develop and maintain documentation of architecture, processes, and procedures.
Qualifications:
· Bachelor’s degree in Computer Science, Information Technology, or a related field, or equivalent experience.
· 3-8 years of experience in a DevOps role, with significant exposure to AWS services or equivalent Azure services.
· Expertise in AWS services such as EC2, S3, RDS, Lambda, VPC, IAM, CloudWatch, etc., or equivalent Azure services.
· Strong experience with infrastructure-as-code tools like Terraform or AWS CloudFormation.
· Proven experience in building and maintaining CI/CD pipelines.
· Familiarity with containerization and orchestration tools such as Docker and Kubernetes.
· Experience with scripting languages such as Python, Bash, or similar.
· Excellent problem-solving skills and the ability to work independently or as part of a team.
· Strong communication and collaboration skills.
Preferred Qualifications:
· AWS Certified DevOps Engineer or equivalent AWS certifications.
· Experience with configuration management tools like Ansible, Puppet, or Chef.
· Knowledge of Agile methodologies and experience working in Agile environments.
· Experience with monitoring and logging tools like Prometheus, Grafana, ELK Stack, or CloudWatch Logs.
Mandatory Skill Sets: AWS, DevOps, Kubernetes, Jenkins
Preferred Skill Sets: Azure, GCP
Years of Experience required: 4-8 years
Education Qualifications: B.Tech, MCA
Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Bachelor of Engineering Degrees/Field of Study preferred: Certifications (if blank, certifications not specified)
Required Skills Microsoft Azure
Optional Skills Acceptance Test Driven Development (ATDD), Accepting Feedback, Active Listening, Analytical Thinking, Android, API Management, Appian (Platform), Application Development, Application Frameworks, Application Lifecycle Management, Application Software, Business Process Improvement, Business Process Management (BPM), Business Requirements Analysis, C#.NET, C++ Programming Language, Client Management, Code Review, Coding Standards, Communication, Computer Engineering, Computer Science, Continuous Integration/Continuous Delivery (CI/CD), Creativity {+ 46 more}
Desired Languages (If blank, desired languages not specified) Travel Requirements Available for Work Visa Sponsorship? Government Clearance Required? Job Posting End Date

Posted 1 week ago

Apply

5.0 - 8.0 years

7 - 12 Lacs

Pune

Work from Office

Job Description:
As a Senior Cloud Engineer at NCSi, you will play a critical role in designing, implementing, and managing cloud infrastructure that meets our clients' needs. You will work closely with cross-functional teams to architect solutions, optimize existing systems, and ensure security and compliance across cloud environments. This position requires strong technical skills, a deep understanding of cloud services, and an ability to mentor junior engineers.

Responsibilities:
- Design and implement scalable cloud solutions using AWS, Azure, or Google Cloud platforms.
- Manage cloud infrastructure with a focus on security, compliance, and cost optimization.
- Collaborate with development and operations teams to streamline CI/CD pipelines for cloud-based applications.
- Troubleshoot and resolve cloud service issues and performance bottlenecks.
- Develop and maintain documentation for cloud architectures, procedures, and best practices.
- Mentor junior engineers and provide technical guidance on cloud technologies and services.
- Stay up to date with the latest cloud technologies and industry trends, and recommend improvements for existing infrastructure.

Skills and Tools Required:
- Strong experience with cloud platforms such as AWS, Azure, or Google Cloud.
- Proficiency in cloud infrastructure management tools like Terraform, CloudFormation, or Azure Resource Manager.
- Familiarity with containerization technologies such as Docker and orchestration tools like Kubernetes.
- Knowledge of programming/scripting languages such as Python, Go, or Bash for automation purposes.
- Experience with monitoring and logging tools like Prometheus, Grafana, or the ELK Stack.
- Understanding of security best practices for cloud deployments, including IAM, VPC configurations, and data encryption.
- Strong problem-solving skills, attention to detail, and ability to work in a collaborative team environment.
- Excellent communication skills, both verbal and written, to convey complex technical concepts to non-technical stakeholders.

Preferred Qualifications:
- Cloud certifications from AWS, Azure, or Google Cloud (e.g., AWS Certified Solutions Architect, Azure Solutions Architect Expert).
- Experience with Agile methodologies and DevOps practices.
- Familiarity with database technologies, both SQL and NoSQL, as well as serverless architectures.

Roles and Responsibilities: NA
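The security, compliance, and cost-optimization responsibilities above are commonly enforced with small audit scripts. A minimal sketch, assuming resources have already been exported as dicts from the cloud API (the required tag keys here are illustrative, not a real policy):

```python
# Illustrative tagging policy, not from any real compliance standard
REQUIRED_TAGS = {"owner", "cost-center", "environment"}

def missing_tags(resource: dict) -> set:
    """Return the set of required tag keys absent from one resource dict."""
    present = {t["Key"].lower() for t in resource.get("Tags", [])}
    return REQUIRED_TAGS - present

def audit(resources: list) -> dict:
    """Map resource id -> sorted missing tag keys, for non-compliant resources only."""
    report = {}
    for r in resources:
        gap = missing_tags(r)
        if gap:
            report[r["Id"]] = sorted(gap)
    return report

# Hypothetical inventory; a real run would pull these from the provider's API
inventory = [
    {"Id": "i-0001", "Tags": [{"Key": "owner", "Value": "platform"},
                              {"Key": "cost-center", "Value": "42"},
                              {"Key": "environment", "Value": "prod"}]},
    {"Id": "i-0002", "Tags": [{"Key": "owner", "Value": "data"}]},
]
report = audit(inventory)  # only i-0002 is non-compliant
```

A report like this typically feeds a dashboard or a scheduled pipeline stage that fails the build when untagged resources appear.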

Posted 1 week ago

Apply

2.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Responsibilities

Indicative years of experience: 4-6 years (at least 2 years of strong AWS hands-on experience)

Role Description
We are looking for a Senior Software Engineer who can work closely with the team to develop on-prem/cloud solutions using TypeScript, Java, and other scripting languages. The candidate should have good exposure to AWS managed services and be able to pair with Leads on developing cloud-based solutions for customers. The role also requires a good understanding of extreme engineering practices such as TDD, unit test coverage, pair programming, and clean code practices.

Reporting Relationship
This role will report to a Delivery Manager / Senior Delivery Manager.

Key Responsibilities
- Work independently in developing solutions in AWS and on-prem environments.
- Work closely with Tech Leads to build strong design and engineering practices in the team.
- Pair effectively with team members and Tech Leads to build and maintain a strong code-quality framework.
- Work closely with the Scrum Master to implement Agile best practices in the team.
- Work closely with Product Owners to define user stories.
- Work independently on production incidents reported by business partners to provide resolution within defined SLAs; coordinate with other teams as needed.
- Act as an interface between the business and technical teams and communicate effectively.
- Document problem resolutions and new learnings for future use; update SOPs.
- Monitor system availability and communicate system outages to business and technical teams.
- Provide support to resolve complex system problems; triage system issues beyond resolution to the appropriate technical teams.
- Assist in analyzing, maintaining, implementing, testing, and documenting system changes and fixes.
- Provide training to new team members and other teams on business processes and applications.
- Manage the overall software development workflow.
- Provide permanent resolutions for recurring issues.
- Build automation for repetitive tasks.

Qualifications

Skills required:
- Good exposure to TypeScript and AWS cloud development.
- Core Java, Java 8 frameworks, Java scripting; expertise in Spring Boot and Spring MVC.
- Experience with the AWS database ecosystem, RDBMS or NoSQL databases; good exposure to SQL.
- Good exposure to extreme engineering practices such as TDD, unit test coverage, clean code practices, pair programming, mobbing, and incremental value delivery.
- Understanding of and exposure to microservice architecture; Domain-Driven Design and federation exposure would be an addition.
- Good hands-on experience with the core AWS services (EC2, IAM, ECS, CloudFormation, VPC, Security Groups, NAT instances, Auto Scaling, Lambda, SNS/SQS, S3, event-driven services, etc.).
- Strong notions of security best practices (e.g., using IAM Roles, KMS, etc.).
- Experience with monitoring solutions such as CloudWatch, Prometheus, and the ELK stack.
- Experience with building or maintaining cloud-native applications.
- Past experience with serverless approaches using AWS Lambda is a plus.
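The Lambda and SNS/SQS items in the skills list above often come together as a queue-triggered handler. A minimal sketch (shown in Python for brevity, although the role itself centers on TypeScript/Java; the handler name and business logic are placeholders), using the standard SQS event shape and AWS's partial-batch-response format so only failed messages are retried:

```python
import json

def handler(event: dict, context=None) -> dict:
    """Minimal sketch of an AWS Lambda handler for SQS-triggered events.
    Returns the partial-batch-response shape ("batchItemFailures") so that
    only failed messages go back to the queue for retry."""
    failures = []
    for record in event.get("Records", []):
        try:
            payload = json.loads(record["body"])  # each SQS record carries a string body
            # ... real business logic on `payload` would go here ...
        except (KeyError, json.JSONDecodeError):
            failures.append({"itemIdentifier": record.get("messageId", "unknown")})
    return {"batchItemFailures": failures}
```

For the partial-batch response to take effect, the function's SQS event source mapping must have "Report batch item failures" enabled; otherwise any error causes the whole batch to be redelivered.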

Posted 1 week ago

Apply