
219 CloudFormation Jobs - Page 5

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 - 15.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

HCLTech is seeking a Data and AI Principal / Senior Manager (Generative AI) for their Noida location. As a global technology company with a workforce of over 218,000 employees in 59 countries, HCLTech specializes in digital, engineering, cloud, and AI solutions. The company collaborates with clients across industries such as Financial Services, Manufacturing, Life Sciences, Healthcare, Technology, Telecom, Retail, and Public Services, offering innovative technology services and products. With consolidated revenues of $13.7 billion as of the 12 months ending September 2024, HCLTech aims to drive progress and transformation for its clients globally.

Key Responsibilities: In this role, you will provide hands-on technical leadership and oversight, including leading the design of AI and GenAI solutions, machine learning pipelines, and data architectures. You will actively contribute to coding, solution design, and troubleshooting of critical components, collaborating with Account Teams, Client Partners, and Domain SMEs to ensure technical solutions align with business needs. You will also mentor and guide engineers across functions to foster a collaborative, high-performance team environment.

As part of the role, you will design and implement system and API architectures, integrating microservices, RESTful APIs, cloud-based services, and machine learning models seamlessly into GenAI and data platforms. You will lead the integration of AI, GenAI, and Agentic applications, NLP models, and large language models into scalable production systems. You will also architect ETL pipelines, data lakes, and data warehouses using tools like Apache Spark, Airflow, and Google BigQuery, and drive deployment using cloud platforms such as AWS, Azure, and GCP. Furthermore, you will lead the design and deployment of machine learning models using frameworks like PyTorch, TensorFlow, and scikit-learn, ensuring accurate and reliable outputs. You will develop prompt engineering techniques for GenAI models and implement best practices for ML model performance monitoring and continuous training. The role also calls for expertise in CI/CD pipelines, Infrastructure-as-Code, cloud management, stakeholder communication, agile development, performance optimization, and scalability strategies.

Required Qualifications:
- 15+ years of hands-on technical experience in software engineering, with at least 5 years in a leadership role managing cross-functional teams in AI, GenAI, machine learning, data engineering, and cloud infrastructure.
- Proficiency in Python and experience with Flask, Django, or FastAPI for API development.
- Extensive experience building and deploying ML models using TensorFlow, PyTorch, scikit-learn, and spaCy, and integrating them into AI frameworks.
- Familiarity with ETL pipelines, data lakes, data warehouses, and data processing tools like Apache Spark, Airflow, and Kafka.
- Strong expertise in CI/CD pipelines, containerization, Infrastructure-as-Code, and API security for high-traffic systems.

If you are interested in this position, please share your profile, including overall experience, skills, current and preferred location, current and expected CTC, and notice period, to paridhnya_dhawankar@hcltech.com.
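The prompt engineering work mentioned above can be illustrated with a minimal sketch. This is an assumed few-shot prompt-building helper, not anything from the posting itself; the instruction, examples, and function name are all illustrative.

```python
# Minimal sketch of a few-shot prompt builder for a GenAI model.
# The template, examples, and function names are illustrative assumptions.

def build_few_shot_prompt(instruction, examples, query):
    """Assemble an instruction, worked examples, and a new query
    into a single prompt string for a large language model."""
    parts = [instruction.strip(), ""]
    for sample_in, sample_out in examples:
        parts.append(f"Input: {sample_in}")
        parts.append(f"Output: {sample_out}")
        parts.append("")
    parts.append(f"Input: {query}")
    parts.append("Output:")
    return "\n".join(parts)

prompt = build_few_shot_prompt(
    "Classify the sentiment of each review as positive or negative.",
    [("Great product, works perfectly.", "positive"),
     ("Broke after two days.", "negative")],
    "Fast delivery and easy setup.",
)
print(prompt)
```

The same structure generalizes to any task: the worked examples anchor the model's output format, and the trailing "Output:" cue marks where generation should continue.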

Posted 2 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

Maharashtra

On-site

At PwC, our data and analytics team focuses on utilizing data to drive insights and support informed business decisions. We leverage advanced analytics techniques to assist clients in optimizing their operations and achieving strategic goals. As a data analysis professional at PwC, your role will involve applying advanced analytical methods to extract insights from large datasets, enabling data-driven decision-making. Your expertise in data manipulation, visualization, and statistical modeling will be pivotal in helping clients solve complex business challenges.

PwC US - Acceleration Center is seeking a highly skilled MLOps/LLMOps Engineer to play a critical role in deploying, scaling, and maintaining Generative AI models. This position requires close collaboration with data scientists, ML/GenAI engineers, and DevOps teams to ensure the seamless integration and operation of GenAI models within production environments at PwC and for our clients. The ideal candidate will possess a strong background in MLOps practices and a keen interest in Generative AI technologies. With a preference for candidates with 4+ years of hands-on experience, core qualifications include:
- 3+ years of experience developing and deploying AI models in production environments, alongside 1 year of working on proofs of concept and prototypes.
- Proficiency in software development, including building and maintaining scalable, distributed systems.
- Strong programming skills in languages such as Python and familiarity with ML frameworks like TensorFlow and PyTorch.
- Knowledge of containerization and orchestration tools like Docker and Kubernetes.
- Understanding of cloud platforms such as AWS, GCP, and Azure, including their ML/AI service offerings.
- Experience with continuous integration and delivery tools like Jenkins, GitLab CI/CD, or CircleCI.
- Familiarity with infrastructure-as-code tools like Terraform or CloudFormation.

Key Responsibilities:
- Develop and implement MLOps strategies tailored for Generative AI models to ensure robustness, scalability, and reliability.
- Design and manage CI/CD pipelines specialized for ML workflows, including deploying generative models like GANs, VAEs, and Transformers.
- Monitor and optimize AI model performance in production, utilizing tools for continuous validation, retraining, and A/B testing.
- Collaborate with data scientists and ML researchers to translate model requirements into scalable operational frameworks.
- Implement best practices for version control, containerization, and orchestration using industry-standard tools.
- Ensure compliance with data privacy regulations and company policies during model deployment.
- Troubleshoot and resolve issues related to ML model serving, data anomalies, and infrastructure performance.
- Stay current with the latest MLOps and Generative AI developments to enhance AI capabilities.

Project Delivery:
- Design and implement scalable deployment pipelines for ML/GenAI models to transition them from development to production environments.
- Oversee the setup of cloud infrastructure and automated data ingestion pipelines to meet GenAI workload requirements.
- Create detailed documentation for deployment pipelines, monitoring setups, and operational procedures.

Client Engagement:
- Collaborate with clients to understand their business needs and design ML/LLMOps solutions.
- Present technical approaches and results to technical and non-technical stakeholders.
- Conduct training sessions and workshops for client teams.
- Create comprehensive documentation and user guides for clients.

Innovation and Knowledge Sharing:
- Stay updated with the latest trends in MLOps/LLMOps and Generative AI.
- Develop internal tools and frameworks to accelerate model development and deployment.
- Mentor junior team members and contribute to technical publications.

Professional and Educational Background: Any graduate / BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master's Degree / MBA.
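The continuous-validation responsibility described above usually reduces to a gate the CI/CD pipeline runs before promoting a candidate model over the production one. A minimal sketch follows; the metric names and tolerance are illustrative assumptions, not PwC's actual criteria.

```python
# Hedged sketch of a model-promotion gate for an MLOps CI/CD pipeline.
# Metric names and the tolerance value are illustrative assumptions.

def should_promote(candidate_metrics, production_metrics, tolerance=0.01):
    """Promote the candidate only if it regresses no tracked metric
    by more than `tolerance` and improves at least one."""
    improved = False
    for name, prod_value in production_metrics.items():
        cand_value = candidate_metrics.get(name, 0.0)
        if cand_value < prod_value - tolerance:
            return False  # regression beyond tolerance: block promotion
        if cand_value > prod_value:
            improved = True
    return improved

print(should_promote({"accuracy": 0.93, "f1": 0.91},
                     {"accuracy": 0.92, "f1": 0.91}))  # True
```

In practice the same check would run against a held-out validation set or live A/B traffic before the pipeline flips the serving endpoint.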

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Punjab

On-site

As a Senior Software Developer specializing in React, AWS, and DevOps, your role in Perth will involve utilizing your hands-on experience with React/Angular applications. You will be responsible for the setup, maintenance, and enhancement of cloud infrastructure for web applications, leveraging your expertise in AWS Cloud services.

Your responsibilities will include understanding and implementing core AWS services, and ensuring the application's security and scalability by adhering to best practices. You will be expected to establish and manage the CI/CD pipeline using the AWS CI/CD stack, while also demonstrating proficiency in BDD/TDD methodologies. In addition, the role requires expertise in serverless approaches using AWS Lambda and the ability to write infrastructure as code using tools like CloudFormation. Knowledge of Docker and Kubernetes will be advantageous, along with a strong understanding of security best practices such as IAM Roles and KMS. Furthermore, experience with monitoring solutions like CloudWatch, Prometheus, and the ELK stack will help ensure the performance and reliability of the applications you work on, and a good understanding of DevOps practices will be essential for success in this role.

If you have any further inquiries or require clarification on any aspect of the job, please do not hesitate to reach out.
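The serverless approach mentioned above centers on plain handler functions. Below is a minimal sketch in the style of an AWS Lambda function behind an API Gateway proxy integration; the greeting logic and parameter name are illustrative assumptions.

```python
import json

# Hedged sketch of a serverless handler in the style of an AWS Lambda
# function behind API Gateway. The event shape and response format follow
# the API Gateway proxy integration; the greeting logic is illustrative.

def lambda_handler(event, context):
    """Return a JSON greeting for a `name` query-string parameter."""
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Lambda handlers are plain functions, so they can be exercised locally:
response = lambda_handler({"queryStringParameters": {"name": "Perth"}}, None)
print(response["statusCode"])  # 200
```

Because the handler is an ordinary function taking a dict, it slots naturally into the BDD/TDD workflow the posting calls for: tests feed it synthetic events and assert on the returned response.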

Posted 2 weeks ago

Apply

4.0 - 8.0 years

6 - 14 Lacs

Hyderabad

Work from Office

Role: AWS SysOps Engineer
Total experience required: 4-8 years
Mandatory skills: AWS, Linux
Location: Hyderabad (WFO 5 days)
Notice period: Immediate to 15 days maximum.
If interested, please reach out to me at siva.avula@healthfirsttech.com.

Company Description: Healthfirst Technologies is a pioneering company in product design and development across various domains. We specialize in end-to-end product development and management, with a focus on the healthcare and insurance industries. Our team of experts and over 20 years of strategic product engineering experience enable us to create best-of-breed products that address key business challenges.

Requirements:
- Experience: 8+ years of experience in system administration, cloud infrastructure, and AWS services. Proven experience with AWS services including EC2, S3, RDS, VPC, CloudFormation, Lambda, and IAM. Strong background in Linux/Unix system administration.
- Certifications: AWS Certified SysOps Administrator Associate or equivalent certification. Additional AWS certifications (e.g., AWS Certified Solutions Architect, AWS Certified DevOps Engineer) are a plus.
- Technical Skills: Proficiency in scripting languages such as Python, Bash, or PowerShell. Experience with configuration management tools like Ansible, Puppet, or Chef. Knowledge of containerization technologies such as Docker and Kubernetes.
- Soft Skills: Strong problem-solving and analytical skills. Excellent communication and collaboration abilities. Ability to work independently and manage multiple priorities in a fast-paced environment.

Preferred Qualifications:
- Experience with CI/CD pipelines and tools such as Jenkins or AWS CodePipeline.
- Familiarity with database management and optimization in both relational and NoSQL databases.
- Understanding of ITIL practices and experience in an IT service management role.
- Knowledge of network architecture and security principles.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Proven experience in system operations and IT infrastructure management.
- Strong knowledge of AWS and Azure services.
- Experience with automation tools and scripting languages.
- Familiarity with security practices and risk management.
- Excellent problem-solving and troubleshooting skills.
- Strong communication and collaboration abilities.

Preferred Skills: AWS and Azure certifications. Experience with DevOps practices and tools. Knowledge of network configurations and security protocols.
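The scripting skills listed above typically show up in small housekeeping tools. Here is a hedged sketch of one: parsing `df -P`-style output and flagging filesystems above a usage threshold. The sample output is made up, not from a real host.

```python
# Hedged sketch of a Linux housekeeping script a SysOps engineer might
# run: parse `df -P`-style output and flag filesystems above a usage
# threshold. The sample output below is illustrative.

def filesystems_over_threshold(df_output, threshold=80):
    """Return (mount_point, use_percent) pairs above `threshold`."""
    alerts = []
    for line in df_output.strip().splitlines()[1:]:  # skip the header row
        fields = line.split()
        use_percent = int(fields[4].rstrip("%"))  # the "Capacity" column
        if use_percent > threshold:
            alerts.append((fields[5], use_percent))
    return alerts

sample = """Filesystem 1024-blocks Used Available Capacity Mounted-on
/dev/sda1 102400 92160 10240 90% /
/dev/sdb1 204800 40960 163840 20% /data"""

print(filesystems_over_threshold(sample))  # [('/', 90)]
```

In production the same function would consume `subprocess.run(["df", "-P"], ...)` output and feed an alerting channel rather than `print`.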

Posted 2 weeks ago

Apply

14.0 - 20.0 years

50 - 55 Lacs

Gurugram, Bengaluru

Work from Office

Experience: 14 to 20 Years
Location: Gurugram / Bengaluru
Job Type: Full-time
Function: IT Architecture / Digital Transformation

Job Summary: A senior-level opportunity for a Solution Architect to lead the modernization of legacy insurance systems, particularly Ingenium, into next-generation cloud-based platforms. This role requires deep technical and architectural expertise, strong stakeholder engagement, and experience driving large-scale transformation projects.

Key Responsibilities:
Architecture & Design: Define target-state architecture for modernization initiatives focusing on core insurance platforms. Translate complex business and functional requirements into scalable, secure, and cost-effective solutions. Develop architecture artifacts and blueprints, ensuring alignment with enterprise standards.
Modernization & Transformation: Lead the Ingenium transformation journey: current-state assessment, target design, and implementation roadmap. Bridge the gap between legacy systems and cloud-native architectures with a clear modernization path.
Cloud & Integration: Design hybrid and cloud-first solutions using AWS (preferred cloud platform). Ensure seamless integration between on-premise and cloud services. Leverage modern integration patterns, tools, and middleware for scalability and resilience.
Stakeholder Engagement: Engage with senior stakeholders to present, justify, and refine architecture strategies. Collaborate with cross-functional teams (developers, analysts, project managers, and operations) to ensure delivery aligns with the architecture vision.
Delivery Oversight: Monitor and support execution to avoid deviations from the architecture. Proactively resolve technical challenges and remove impediments for delivery teams.

Mandatory Skills & Experience:
- 14 to 20 years of total IT experience, with at least 3 to 5 years in an architecture or lead design role.
- Strong hands-on experience with Ingenium (modernization, customization, or migration).
- Solid understanding of enterprise architecture principles and legacy-to-cloud transformation.
- Experience delivering large-scale modernization or digital transformation projects.
- Expertise in stakeholder communication, especially at leadership and executive levels.
- Proven experience working in Agile/Scrum or hybrid delivery models.

Preferred / Good to Have:
- Background in the Insurance domain (Life, Group, or Policy Admin).
- Experience designing or implementing solutions on AWS Cloud (certifications a plus).
- Exposure to CI/CD, DevOps practices, and Infrastructure as Code tools.
- Knowledge of integration technologies (APIs, middleware, event-driven systems).

Education: Bachelor's or Master's degree in Computer Science, Information Technology, or Engineering. Additional certifications in cloud platforms (AWS) or enterprise architecture (e.g., TOGAF) are a plus.

Soft Skills: Strong analytical and decision-making capabilities. Effective communicator, able to simplify complex concepts for non-technical audiences. Confident, self-driven, and highly collaborative in diverse environments. Strong leadership with the ability to influence without authority.

Communication Scope: Regular interaction with CxO-level leaders, business heads, and cross-regional IT teams. Lead architecture reviews, strategy sessions, and solution design workshops. Ensure alignment of architecture with enterprise strategy across geographies and functions.

What's in it for You? Take charge of critical modernization initiatives in a complex, enterprise-scale IT landscape. Collaborate with high-performing global teams on transformative digital journeys. Shape architecture decisions that impact business growth and operational agility.

Contact: 9045052073

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a Lead Engineer, DevOps at Toyota Connected India, you will be part of a dynamic team dedicated to creating innovative infotainment solutions on embedded and cloud platforms. You will play a crucial role in shaping the future of mobility by leveraging your expertise in cloud platforms, containerization, infrastructure automation, scripting languages, monitoring solutions, networking, security best practices, and CI/CD tools.

Your responsibilities will include:
- Demonstrating hands-on experience with cloud platforms such as AWS or Google Cloud Platform.
- Utilizing strong expertise in containerization (e.g., Docker) and Kubernetes for container orchestration.
- Implementing infrastructure automation and configuration management tools like Terraform, CloudFormation, Ansible, or similar.
- Applying proficiency in scripting languages such as Python, Bash, or Go for efficient workflows.
- Using monitoring and logging solutions such as Prometheus, Grafana, ELK Stack, or Datadog to ensure system reliability.
- Applying knowledge of networking concepts, security best practices, and infrastructure monitoring to maintain a secure and stable environment.
- Working with CI/CD tools such as Jenkins, GitLab CI, CircleCI, Travis CI, or similar for continuous integration and delivery.

At Toyota Connected, you will enjoy top-of-the-line compensation, autonomy in managing your time and workload, yearly gym membership reimbursement, free catered lunches, and a casual dress code. You will have the opportunity to work on products that enhance the safety and convenience of millions of customers, all within a collaborative, innovative, and empathetic work culture that values customer-centric decision-making, passion for excellence, creativity, and teamwork. Join us at Toyota Connected India and be part of a team that is redefining the automotive industry and making a positive global impact!
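The monitoring-and-alerting work above usually boils down to rules like "fire when a metric stays above its threshold for N scrapes" (the idea behind Prometheus's `for:` clause). A minimal sketch, with made-up CPU values:

```python
# Hedged sketch of the evaluation logic behind a monitoring alert rule
# (in the spirit of Prometheus's `for:` clause): fire only when a metric
# stays above its threshold for N consecutive scrapes. Values are made up.

def alert_fires(samples, threshold, consecutive):
    """Return True if `samples` ends with at least `consecutive`
    readings strictly above `threshold`."""
    streak = 0
    for value in samples:
        streak = streak + 1 if value > threshold else 0
    return streak >= consecutive

cpu_percent = [55, 62, 91, 93, 95]          # last three scrapes above 90
print(alert_fires(cpu_percent, 90, 3))      # True
print(alert_fires(cpu_percent, 90, 4))      # False
```

Requiring a sustained breach rather than a single spike is what keeps alerting "proactive" without being noisy.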

Posted 2 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

You will be part of Bytes Technolab, a full-range web application development company with a global presence in the USA, Australia, and India, established in 2011. We are known for our excellent craftsmanship in innovative web development, eCommerce solutions, and mobile application development services.

As a Backend Developer, your primary responsibilities will include designing, developing, and maintaining high-performance backend services using Golang. You will be expected to architect and implement microservices following best practices and design patterns. Creating and managing AWS CloudFormation templates for infrastructure automation will also be a key aspect of your role. Working independently on assigned tasks and ensuring timely delivery with minimal supervision is essential. You will collaborate closely with cross-functional teams and communicate directly with clients to understand requirements and present solutions. Conducting code reviews, optimizing application performance, and ensuring code quality will be part of your routine tasks.

To excel in this role, you should have a minimum of 5 years of hands-on experience with Golang, PHP, and NodeJS in production environments. A strong understanding and practical application of design patterns are crucial. Solid experience working with AWS services and authoring CloudFormation templates is required. Proven experience in building and maintaining microservices-based systems is highly advantageous. The ability to take full ownership of features from concept to deployment is a key expectation. Excellent communication skills are necessary to engage effectively with clients and stakeholders. Familiarity with API Gateway, Lambda handlers, MySQL, and DynamoDB will be considered a plus.

In addition to your technical skills, you will be expected to mentor junior developers and contribute to technical documentation. This role offers you the opportunity to showcase your expertise in backend development and work on challenging projects in a dynamic environment.
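The CloudFormation authoring mentioned above can be done programmatically as well as by hand. Below is a hedged sketch that builds a minimal template declaring one S3 bucket; the logical ID, tag values, and helper name are illustrative assumptions, and a real template would then be deployed with the AWS CLI or boto3.

```python
import json

# Hedged sketch of authoring a minimal CloudFormation template
# programmatically. The bucket logical ID and tag values are illustrative.

def make_bucket_template(bucket_logical_id, environment):
    """Build a CloudFormation template declaring one S3 bucket."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Description": f"S3 bucket for the {environment} environment",
        "Resources": {
            bucket_logical_id: {
                "Type": "AWS::S3::Bucket",
                "Properties": {
                    "Tags": [{"Key": "Environment", "Value": environment}],
                },
            }
        },
    }

template = make_bucket_template("AppAssetsBucket", "staging")
print(json.dumps(template, indent=2))
```

Generating templates from code keeps environment-specific values (names, tags) in one place instead of duplicated across hand-edited JSON or YAML files.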

Posted 2 weeks ago

Apply

5.0 - 15.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

HCLTech is looking for a Data and AI Principal / Senior Manager (Generative AI) to join their team in Noida. As a global technology company with a strong presence in 59 countries and over 218,000 employees, HCLTech is a leader in digital, engineering, cloud, and AI services. They collaborate with clients in various industries such as Financial Services, Manufacturing, Life Sciences, Healthcare, Technology, Telecom, Media, Retail, and Public Services. With consolidated revenues of $13.7 billion, HCLTech aims to provide industry-leading capabilities to drive progress for their clients. In this role, you will be responsible for providing hands-on technical leadership and oversight. This includes leading the design of AI, GenAI solutions, machine learning pipelines, and data architectures to ensure performance, scalability, and resilience. You will actively contribute to coding, code reviews, and solution design, while working closely with Account Teams, Client Partners, and Domain SMEs to align technical solutions with business needs. Mentoring and guiding engineers across various functions will be an essential aspect of this role, fostering a collaborative and high-performance team environment. Your role will also involve designing and implementing system and API architectures, integrating AI, GenAI, and Agentic applications into production systems, and architecting ETL pipelines, data lakes, and data warehouses using industry-leading tools. You will drive the deployment and scaling of solutions using cloud platforms like AWS, Azure, and GCP, while leading the integration of machine learning models into end-to-end production workflows. Additionally, you will be responsible for leading CI/CD pipeline efforts, infrastructure automation, and ensuring robust integration with cloud platforms. Stakeholder communication, promoting Agile methodologies, and optimizing performance and scalability of applications will be key responsibilities. 
The ideal candidate will have at least 15 years of hands-on technical experience in software engineering, with a focus on AI, GenAI, machine learning, data engineering, and cloud infrastructure. If you meet the qualifications and are passionate about driving innovation in AI and data technologies, we invite you to share your profile with us. Kindly email your details to paridhnya_dhawankar@hcltech.com, including your overall experience, skills, current and preferred location, current and expected CTC, and notice period. We look forward to hearing from you and exploring the opportunity to work together at HCLTech.

Posted 2 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

Hyderabad, Telangana

On-site

You will be the Infrastructure & Performance Test Engineer responsible for designing, executing, and optimizing load, stress, and distributed testing strategies across cloud-based systems. Your expertise in HTTP/HTTPS traffic analysis, monitoring tools, and reporting, along with a solid understanding of AWS infrastructure and performance at scale, will be crucial for this role.

Your key responsibilities will include planning and conducting load testing, stress testing, and distributed load testing to simulate real-world traffic patterns. You will create and manage test datasets to ensure accurate simulations and validations, monitor and analyze HTTP/HTTPS calls and system metrics during test execution, and use tools like JMeter, Gatling, k6, or Locust for performance testing. Additionally, you will automate end-to-end test cases using Selenium for UI validation when necessary and collaborate with DevOps to test upgrades and infrastructure changes.

You should possess 3-5 years of experience in performance/infrastructure testing or DevOps QA, proficiency with load testing tools such as JMeter, Gatling, and k6, familiarity with Selenium for UI test automation, a strong understanding of HTTP/HTTPS protocols and API testing, and experience with AWS infrastructure and monitoring tools like CloudWatch and X-Ray. Experience in distributed test execution and parallel load generation, the ability to validate latency and response times, and strong scripting skills in Python, Bash, or similar are also required.

Preferred qualifications include experience with continuous integration environments like Jenkins and GitHub Actions, exposure to Infrastructure as Code (IaC) tools such as Terraform or CloudFormation, and previous experience with major system upgrades and verifying post-upgrade performance baselines.
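Validating latency, as described above, usually means reporting percentiles rather than averages, since tail latency is what users feel. A hedged sketch using the nearest-rank method; the sample latencies are made-up values in milliseconds.

```python
import math

# Hedged sketch of a post-run latency report such as a load test might
# produce: compute p50/p95/p99 from raw response times. The sample
# latencies are made-up values in milliseconds.

def percentile(sorted_values, pct):
    """Nearest-rank percentile over an ascending-sorted list."""
    rank = max(1, math.ceil(pct / 100 * len(sorted_values)))
    return sorted_values[rank - 1]

latencies_ms = sorted([120, 95, 110, 300, 105, 98, 250, 101, 99, 102])
report = {p: percentile(latencies_ms, p) for p in (50, 95, 99)}
print(report)  # {50: 102, 95: 300, 99: 300}
```

Tools like JMeter and k6 emit these percentiles directly, but recomputing them from raw samples is useful when comparing pre- and post-upgrade baselines.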

Posted 2 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

Haryana

On-site

You will be responsible for architecting and developing scalable backend services using Python (Django/FastAPI) and optimizing database schemas for MongoDB and PostgreSQL. In addition to this, you will implement authentication, authorization, and API security mechanisms while managing and deploying backend applications on AWS. Your role will also involve setting up CI/CD pipelines and leading backend code reviews to enforce quality, performance, and security standards. As a Technical Lead, you will mentor backend engineers, manage version control, and ensure smooth production deployments with zero-downtime practices. The ideal candidate for this position should have strong backend development experience with a focus on Python (Django/FastAPI) and extensive knowledge of MongoDB and PostgreSQL, including schema design and performance tuning. You should have at least 6 years of experience in backend engineering and cloud infrastructure, along with expertise in code reviews, Git-based version control, and CI/CD pipelines. A solid understanding of DevOps tools and AWS services related to backend deployment is essential, as well as a deep grasp of API standards, secure coding practices, and production-grade systems deployments. In addition to technical skills, you should possess proven leadership abilities and experience in mentoring backend teams. Excellent communication skills, strong ownership of backend deliverables, and strategic thinking in system architecture and planning are key attributes for this role. Preferred skills include experience with microservices architecture, containerization tools, asynchronous programming, caching, API performance optimization, monitoring and logging tools, serverless architecture, infrastructure as code, testing frameworks, API documentation tools, and data privacy regulations. 
A Bachelor's degree in Engineering with a specialization in Computer Science, Artificial Intelligence, Information Technology, or a related field is required for this position. This role offers the flexibility of working remotely in India.
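The API security mechanisms the role mentions often rest on signed tokens. Below is a hedged, stdlib-only sketch of HMAC-signed tokens (the same idea underlies JWT HS256 signatures); the secret, payload format, and helper names are illustrative assumptions, not a production design.

```python
import hashlib
import hmac
import time

# Hedged sketch of a token-signing scheme for securing backend APIs.
# The secret and payload format are illustrative assumptions.

SECRET = b"rotate-me-in-production"

def sign_token(user_id, expires_at):
    payload = f"{user_id}:{expires_at}"
    signature = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{signature}"

def verify_token(token, now=None):
    """Return the user id if the signature is valid and unexpired, else None."""
    user_id, expires_at, signature = token.rsplit(":", 2)
    expected = hmac.new(SECRET, f"{user_id}:{expires_at}".encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return None  # tampered payload or wrong secret
    if (now if now is not None else time.time()) >= int(expires_at):
        return None  # token expired
    return user_id

token = sign_token("user-42", 2_000_000_000)
print(verify_token(token, now=1_700_000_000))  # user-42
```

Note the constant-time comparison via `hmac.compare_digest`, which avoids leaking signature bytes through timing differences.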

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

As a DevOps Engineer, you will be responsible for defining and implementing DevOps strategies that align with business objectives. You will lead cross-functional teams to enhance collaboration among development, QA, and operations departments. Your role will involve designing, implementing, and managing Continuous Integration/Continuous Deployment (CI/CD) pipelines to automate build, test, and deployment processes, thus expediting release cycles.

Furthermore, you will implement and oversee Infrastructure as Code using tools such as Terraform, CloudFormation, and Ansible. Managing cloud platforms like AWS, Azure, or Google Cloud will also be part of your responsibilities. It will be crucial for you to monitor and address security risks within CI/CD pipelines and infrastructure. Setting up observability tools like Prometheus, Grafana, Splunk, and Datadog, and implementing proactive alerting and incident response processes, will be essential.

In this role, you will take the lead in incident response and root cause analysis (RCA) activities. You will also play a key role in documenting DevOps processes, best practices, and system architectures, and in evaluating and incorporating new DevOps tools and technologies. A significant aspect of your role will involve fostering a culture of continuous learning and knowledge sharing among team members.

Posted 2 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

Indore, Madhya Pradesh

On-site

You should possess expert-level proficiency in Python and Python frameworks, or Java. Additionally, you must have hands-on experience with AWS development, PySpark, Lambda, CloudWatch (alerts), SNS, SQS, CloudFormation, Docker, ECS, Fargate, and ECR. Your experience should cover key AWS services in depth: Compute (PySpark, Lambda, ECS), Storage (S3), Databases (DynamoDB, Snowflake), Networking (VPC, Route 53, CloudFront, API Gateway), DevOps/CI-CD (CloudFormation, CDK), Security (IAM, KMS, Secrets Manager), and Monitoring (CloudWatch, X-Ray, CloudTrail).

Moreover, you should be proficient in databases such as Cassandra (NoSQL) and PostgreSQL, and have strong hands-on knowledge of using Python for integrations between systems through different data formats. Your expertise should extend to deploying and maintaining applications in AWS, with hands-on experience in Kinesis streams and auto-scaling. Designing and implementing distributed systems and microservices, along with best practices for scalability, high availability, and fault tolerance, are also key aspects of this role.

You should have strong problem-solving and debugging skills, with the ability to lead technical discussions and mentor junior engineers. Excellent written and verbal communication skills are essential. You should be comfortable working in agile teams with modern development practices, collaborating with business and other teams to understand requirements and deliver on project commitments. Participation in requirements gathering, designing solutions based on available frameworks and code, and experience with data engineering tools or ML platforms (e.g., Pandas, Airflow, SageMaker) are expected. An AWS certification (AWS Certified Solutions Architect or Developer) would be advantageous.

This position is based in multiple locations in India, including Indore, Mumbai, Noida, Bangalore, and Chennai. To qualify, you should hold a Bachelor's degree or a foreign equivalent from an accredited institution; alternatively, three years of progressive experience in the specialty may be considered in lieu of each year of education. A minimum of 8 years of Information Technology experience is required for this role.
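Fault tolerance around AWS integrations like SQS, SNS, and Kinesis typically means retrying throttled calls with exponential backoff and jitter. A hedged sketch of that pattern follows; the base delay and cap are illustrative, and `sleep` is injectable so the logic can be tested without waiting.

```python
import random

# Hedged sketch of the exponential-backoff-with-jitter retry pattern
# commonly wrapped around AWS SDK calls. Delay parameters are illustrative.

def retry_with_backoff(operation, max_attempts=5, base=0.1, cap=5.0,
                       sleep=None, rng=random.random):
    """Call `operation` until it succeeds or attempts are exhausted."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the original error
            delay = min(cap, base * 2 ** attempt) * rng()  # full jitter
            if sleep is not None:
                sleep(delay)

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("throttled")  # stand-in for a throttling error
    return "ok"

print(retry_with_backoff(flaky, sleep=lambda d: None))  # ok
```

The "full jitter" multiplier spreads retries out so that many clients throttled at once do not all retry in lockstep.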

Posted 2 weeks ago

Apply

1.0 - 5.0 years

0 Lacs

Kolkata, West Bengal

On-site

As a DevOps Engineer with 1-2 years of experience specializing in AWS, Git, and VPS management, your primary responsibility will be to automate deployments, manage cloud infrastructure, and optimize CI/CD pipelines for seamless development and operations.

Key Responsibilities:
- AWS Infrastructure Management: Deploy, configure, and optimize AWS services such as EC2, S3, RDS, and Lambda.
- Version Control & GitOps: Manage repositories, branching strategies, and workflows using Git/GitHub/GitLab.
- VPS Administration: Configure, maintain, and optimize VPS servers for high availability and performance.
- CI/CD Pipeline Development: Implement automated Git-based CI/CD workflows for smooth software releases.
- Containerization & Orchestration: Deploy applications using Docker and Kubernetes.
- Infrastructure as Code (IaC): Automate deployments using Terraform or CloudFormation.
- Monitoring & Security: Implement logging, monitoring, and security best practices.

Required Skills & Experience:
- 1+ years of experience in AWS, Git, and VPS management.
- Strong knowledge of AWS services (EC2, VPC, IAM, S3, CloudWatch, etc.).
- Expertise in Git and GitOps workflows.
- Hands-on experience with VPS hosting, Nginx, Apache, and server management.
- Experience with CI/CD tools (Jenkins, GitHub Actions, GitLab CI).
- Knowledge of Infrastructure as Code (Terraform, CloudFormation).
- Strong scripting skills (Bash, Python, or Go).

Preferred Qualifications:
- Experience with server security hardening on VPS servers.
- Familiarity with AWS Lambda and serverless architecture.
- Knowledge of DevSecOps best practices.

Don't forget to bring your updated resume and be in formal attire. This is a full-time, permanent, contractual/temporary job opportunity with Provident Fund benefits. The work schedule is during the day shift, and it requires in-person presence at the work location.
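One concrete piece of the Git-based CI/CD automation above is release versioning. Here is a hedged sketch that derives the next semantic version from conventional-commit messages; the commit messages and the bump rules shown are simplified illustrations of the conventional-commits scheme.

```python
# Hedged sketch of release automation a Git-based CI/CD workflow might
# run: derive the next semantic version from conventional-commit messages.
# The commit messages below are illustrative.

def next_version(current, commit_messages):
    """Bump MAJOR for breaking changes, MINOR for feat:, else PATCH."""
    major, minor, patch = (int(x) for x in current.split("."))
    subjects = [m.split("\n", 1)[0] for m in commit_messages]
    if any("BREAKING CHANGE" in m or "!" in s.split(":", 1)[0]
           for m, s in zip(commit_messages, subjects)):
        return f"{major + 1}.0.0"
    if any(s.startswith("feat") for s in subjects):
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"

print(next_version("1.4.2", ["fix: handle empty config"]))  # 1.4.3
print(next_version("1.4.2", ["feat: add S3 backend"]))      # 1.5.0
print(next_version("1.4.2", ["feat!: drop legacy API"]))    # 2.0.0
```

In a pipeline, the commit messages would come from `git log` between the last tag and HEAD, and the result would tag the release.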

Posted 2 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

thiruvananthapuram, kerala

On-site

We are seeking an experienced Cloud Solution Architect with over 10 years of experience to create and implement scalable, secure, and cost-efficient cloud solutions across various platforms such as AWS, Azure, GCP, Akamai, and on-premise environments. Your role will involve driving cloud transformation and modernization initiatives, particularly in Akamai Cloud and complex cloud migrations. Collaboration with clients, product teams, developers, and operations will be essential to translate business requirements into effective architectural solutions aligned with industry standards and best practices. Your responsibilities will include designing and deploying robust and cost-effective cloud architectures, leading end-to-end cloud migration projects, evaluating and recommending cloud products and tools, as well as developing Infrastructure as Code (IaC) best practices utilizing Terraform, CloudFormation, or ARM templates. You will work closely with development, security, and DevOps teams to establish cloud-native CI/CD workflows, integrate compliance and security measures into deployment plans, and provide technical leadership and mentorship to engineering teams. Additionally, maintaining clear architectural documentation and implementation guides will be a crucial aspect of this role. To be considered for this position, you should hold a Bachelor's or Master's degree in Computer Science, Engineering, or a related field, along with at least 10 years of IT experience, with a minimum of 8 years in cloud architecture roles. Demonstrated expertise in cloud migrations and proficiency in one or more cloud platforms such as AWS, Azure, GCP, and Akamai are required. Strong knowledge of microservices, containerization (Docker, Kubernetes), cloud-native services (e.g., Lambda, Azure Functions, GKE), and IaC tools (Terraform, CloudFormation, ARM templates) is essential. 
A solid understanding of cloud networking, identity management, storage, and security, as well as experience in cloud cost optimization and performance tuning, is also necessary. Excellent English communication and presentation skills are expected in this role. Kindly review the complete job description and requirements before submitting your application.

Posted 2 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

ahmedabad, gujarat

On-site

You will be responsible for leading a team of DevOps engineers in Ahmedabad. Your main duties will include managing and mentoring the team and overseeing the deployment and maintenance of applications such as Odoo, Magento, and Node.js. You will also be in charge of designing and managing CI/CD pipelines using tools like Jenkins and GitLab CI, handling environment-specific configurations, and containerizing applications using Docker.

In addition, you will need to implement and maintain Infrastructure as Code using tools like Terraform and Ansible, monitor application health and infrastructure, and ensure systems are secure, resilient, and compliant with industry standards. Collaboration with development, QA, and IT support teams is essential for seamless delivery, and troubleshooting performance, deployment, or scaling issues across tech stacks will also be part of your responsibilities.

To be successful in this role, you should have at least 6 years of experience in DevOps/Cloud/System Engineering roles, with a minimum of 2 years managing or leading DevOps teams. Hands-on experience with Odoo, Magento, Node.js, and AWS/Azure/GCP infrastructure is required. Strong scripting skills in Bash, Python, PHP, or Node CLI, as well as a deep understanding of Linux system administration and networking fundamentals, are essential. Experience with Git, SSH, reverse proxies, and load balancers is also necessary, along with good communication skills and client management exposure.

Preferred certifications that would be highly valued for this role include AWS Certified DevOps Engineer Professional, Azure DevOps Engineer Expert, and Google Cloud Professional DevOps Engineer. Bonus skills that are nice to have include experience with multi-region failover, HA clusters, MySQL/PostgreSQL optimization, GitOps, ArgoCD, Helm, VAPT 2.0, WCAG compliance, and infrastructure security best practices.

Posted 2 weeks ago

Apply

11.0 - 15.0 years

0 Lacs

hyderabad, telangana

On-site

As an AI Azure Architect, your primary responsibility will be to develop the technical vision for AI systems that cater to existing and future business requirements. This involves architecting end-to-end AI applications and ensuring seamless integration with legacy systems, enterprise data platforms, and microservices. Collaborating closely with business analysts and domain experts, you will translate business objectives into technical requirements and AI-driven solutions. Additionally, you will partner with product management to design agile project roadmaps, aligning technical strategies with market needs. Coordinating with data engineering teams is essential to ensure smooth data flows, quality, and governance across different data sources. Your role will also involve leading the design of reference architectures, roadmaps, and best practices for AI applications. Evaluating emerging technologies and methodologies to recommend suitable innovations for integration into the organizational strategy is a crucial aspect of your responsibilities. You will be required to identify and define system components such as data ingestion pipelines, model training environments, CI/CD frameworks, and monitoring systems. Leveraging containerization (Docker, Kubernetes) and cloud services will streamline the deployment and scaling of AI systems. Implementing robust versioning, rollback, and monitoring mechanisms to ensure system stability, reliability, and performance will be part of your duties. Moreover, you will oversee the planning, execution, and delivery of AI and ML applications, ensuring they are completed within budget and timeline constraints. Managing project goals, allocating resources, and mitigating risks will fall under your project management responsibilities. You will be responsible for overseeing the complete lifecycle of AI application development, from conceptualization and design to development, testing, deployment, and post-production optimization.
Emphasizing security best practices during each development phase, with a focus on data privacy, user security, and risk mitigation, is crucial. In addition to technical skills, the ideal candidate for this role should possess key behavioral attributes such as the ability to mentor junior developers, take ownership of project deliverables, and contribute towards risk mitigation. Understanding business objectives and functions to support data needs is also essential. Mandatory technical skills for this position include a strong background in building agents using LangGraph, AutoGen, and CrewAI. Proficiency in Python, along with knowledge of machine learning libraries like TensorFlow, PyTorch, and Keras, is required. Experience with cloud computing platforms (AWS, Azure, Google Cloud Platform), containerization tools (Docker), orchestration frameworks (Kubernetes), and DevOps tools (Jenkins, GitLab CI/CD) is essential. Proficiency in SQL and NoSQL databases, and in designing distributed systems, RESTful APIs, GraphQL integrations, and event-driven architectures, is also necessary. Preferred technical skills include experience with monitoring and logging tools, cutting-edge libraries like Hugging Face Transformers, and large-scale deployment of ML projects. Training and fine-tuning of Large Language Models (LLMs) is an added advantage. Educational qualifications for this role include a Bachelor's/Master's degree in Computer Science, along with certifications in cloud technologies (AWS, Azure, GCP) and TOGAF certification. The ideal candidate should have 11 to 14 years of relevant work experience in this field.

Posted 2 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

karnataka

On-site

You will be responsible for migrating pipelines from Jenkins to GitLab CI/CD in order to ensure minimal disruption to development workflows. This will involve analyzing and converting Jenkins Groovy scripts into GitLab YAML pipelines, using Python for scripting where necessary. Additionally, you will be tasked with optimizing and maintaining GitLab runners, configurations, and repository structures to enhance overall efficiency. Collaboration with development teams is a key aspect of this role, as you will work closely with them to facilitate the seamless adoption of GitLab CI/CD pipelines. Troubleshooting and resolving CI/CD failures, pipeline inefficiencies, and integration challenges will also be part of your responsibilities. You will be expected to implement GitLab security best practices, including code scanning, access controls, and compliance checks, as well as monitor pipeline performance to enhance build, test, and deployment efficiency. Support for containerized deployments using Docker and Kubernetes will be another area where your expertise is required. The ideal candidate should have a minimum of 5 years of experience in DevOps, CI/CD, and automation, with strong proficiency in Jenkins (Groovy scripting) and GitLab CI/CD. Proficiency in Python for scripting and automation is essential, along with familiarity with IaC tools such as Terraform, ARM, or CloudFormation. Experience with Docker and Kubernetes for container orchestration, as well as hands-on experience with AWS or Azure, is highly desirable. Strong troubleshooting skills, the ability to work in a collaborative team environment, and certifications in GitLab, AWS/Azure, and Kubernetes are also advantageous.
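To make the Jenkins-to-GitLab migration concrete, here is a minimal sketch of the kind of Python helper the posting alludes to: converting a simplified list of Jenkins stages into a GitLab CI configuration structure. The stage/step shape here is an assumption; real Jenkinsfiles require a Groovy parser and handling of agents, post blocks, and credentials.

```python
def jenkins_stages_to_gitlab(stages):
    """Convert a simplified representation of Jenkins stages
    (name + shell steps) into a GitLab CI config dict, ready to be
    dumped to .gitlab-ci.yml with a YAML serializer."""
    config = {"stages": [s["name"] for s in stages]}
    for s in stages:
        # One GitLab job per Jenkins stage; `script` maps to `sh` steps.
        config[s["name"]] = {"stage": s["name"], "script": list(s["steps"])}
    return config

# Hypothetical input extracted from a declarative Jenkinsfile:
jenkins = [
    {"name": "build", "steps": ["make build"]},
    {"name": "test", "steps": ["make test"]},
]
gitlab_ci = jenkins_stages_to_gitlab(jenkins)
```

A real migration would serialize `gitlab_ci` with a YAML library and validate it with GitLab's CI lint endpoint before committing.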

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

punjab

On-site

The Senior Software Developer role in Perth requires a candidate with good hands-on experience in developing React/Angular-based applications. The ideal candidate should possess a strong understanding of AWS Cloud services and be capable of setting up, maintaining, and enhancing the cloud infrastructure for web applications. It is essential for the candidate to have expertise in core AWS services, along with the ability to implement security and scalability best practices. Furthermore, the candidate will be responsible for establishing the CI/CD pipeline using the AWS CI/CD stack and should have practical experience with BDD/TDD methodologies. Familiarity with serverless approaches utilizing AWS Lambda, as well as proficiency in writing infrastructure as code using tools like CloudFormation, is required. Additionally, experience with Docker and Kubernetes would be advantageous for this role. A solid understanding of security best practices, including the utilization of IAM roles and KMS, is essential. The candidate should also have exposure to monitoring solutions such as CloudWatch, Prometheus, and the ELK stack. Moreover, the candidate should possess good knowledge of DevOps practices to contribute effectively to the development and deployment processes. If you have any queries regarding this role, please feel free to reach out.
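As a rough illustration of the serverless approach mentioned above, here is a minimal AWS Lambda handler shape for an API Gateway proxy integration, sketched in Python. The field names follow the standard proxy event format; the query parameter and message are illustrative.

```python
import json

def lambda_handler(event, context):
    """Minimal Lambda handler behind an API Gateway proxy integration:
    reads a query parameter and returns a JSON response."""
    # queryStringParameters is None when no query string is present.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello {name}"}),
    }

# Simulated API Gateway event for local testing:
resp = lambda_handler({"queryStringParameters": {"name": "dev"}}, None)
```

The React/Angular frontend would call such an endpoint over HTTPS; deployment would typically go through CloudFormation or SAM.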

Posted 2 weeks ago

Apply

3.0 - 7.0 years

5 - 10 Lacs

Bengaluru

Work from Office

Role Purpose
Expertise as a DevOps Architect across different cloud solutions, with strong experience in Kubernetes.
- Hands-on experience with cloud services (e.g., AWS services such as VPC, EC2, S3, ELB, RDS, ECS/EKS, IAM, CloudFront, CloudWatch, SQS/SNS, Lambda)
- Experience with Infrastructure as Code tooling: Terraform, CloudFormation
- Hands-on scripting or coding skills in modern languages: Python, Unix bash scripting
- Experience with configuration management tooling (Ansible/Chef; Puppet good to have)
- Strong Docker and Kubernetes skills desirable
- Some experience with Windows or Unix administration
- Experience with Continuous Integration and Continuous Deployment pipelines and tooling (GitLab, Jenkins, GitHub, Jira, or related)
- Able to propose new solutions for CI/CD implementation
- Able to develop the overall strategy for build and release management

Do
- Develop architectural solutions for new deals / major change requests in existing deals
- Create an enterprise-wide architecture that ensures systems are scalable, reliable, and manageable
- Provide solutioning of RFPs received from clients and ensure overall design assurance
- Develop a direction to manage the portfolio of to-be solutions, including systems, shared infrastructure services, and applications, to better match business outcome objectives
- Analyse the technology environment, enterprise specifics, and client requirements to set a collaboration solution design framework/architecture
- Provide technical leadership to the design, development, and implementation of custom solutions through thoughtful use of modern technology
- Define and understand current-state solutions and identify improvements, options, and tradeoffs to define target-state solutions
- Clearly articulate, document, and sell architectural targets, recommendations, and reusable patterns, and accordingly propose investment roadmaps
- Evaluate and recommend solutions to integrate with the overall technology ecosystem
- Work closely with various IT groups to transition tasks, ensure performance, and manage issues through to resolution
- Perform detailed documentation (app view, multiple sections and views) of the architectural design and solution, mentioning all artefacts in detail
- Validate the solution/prototype from a technology, cost-structure, and customer-differentiation point of view
- Identify problem areas, perform root cause analysis of architectural design and solutions, and provide relevant solutions
- Collaborate with sales, program/project, and consulting teams to reconcile solutions to architecture
- Track industry and application trends and relate these to planning current and future IT needs
- Provide technical and strategic input during the project planning phase in the form of technical architectural designs and recommendations
- Collaborate with all relevant parties to review the objectives and constraints of solutions and determine conformance with the Enterprise Architecture
- Identify implementation risks and potential impacts
- Enable delivery teams by providing optimal delivery solutions/frameworks
- Build and maintain relationships with executives, technical leaders, product owners, peer architects, and other stakeholders to become a trusted advisor
- Develop and establish relevant technical, business process, and overall support metrics (KPI/SLA) to drive results
- Manage multiple projects and accurately report the status of all major assignments while adhering to all project management standards
- Identify technical, process, and structural risks and prepare a risk mitigation plan for all projects
- Ensure quality assurance of all architecture or design decisions and provide technical mitigation support to delivery teams
- Recommend tools for reuse and automation for improved productivity and reduced cycle times
- Lead the development and maintenance of the enterprise framework and related artefacts
- Develop trust and build effective working relationships through respectful, collaborative engagement across individual product teams
- Ensure architecture principles and standards are consistently applied to all projects

Ensure optimal Client Engagement
- Support the pre-sales team while presenting the entire solution design and its principles to the client
- Negotiate, manage, and coordinate with client teams to ensure all requirements are met and create an impact with the proposed solution
- Demonstrate thought leadership with strong technical capability in front of the client to win confidence and act as a trusted advisor

Competency Building and Branding
- Ensure completion of necessary trainings and certifications
- Develop Proofs of Concept (POCs), case studies, demos, etc. for new growth areas based on market and customer research
- Develop and present a Wipro point of view on solution design and architecture by writing white papers, blogs, etc.
Attain market referenceability and recognition through highest analyst rankings, client testimonials, and partner credits. Be the voice of Wipro's thought leadership by speaking in forums (internal and external). Mentor developers, designers, and junior architects in the project for their further career development and enhancement. Contribute to the architecture practice by conducting selection interviews, etc. Team Management: Resourcing - anticipate new talent requirements as per market/industry trends or client requirements; hire adequate and right resources for the team. Talent Management - ensure adequate onboarding and training for team members to enhance capability and effectiveness; build an internal talent pool and ensure their career progression within the organization; manage team attrition; drive diversity in leadership positions. Performance Management - set goals for the team, conduct timely performance reviews, and provide constructive feedback to own direct reports; ensure that Performance Nxt is followed for the entire team. Employee Satisfaction and Engagement - lead and drive engagement initiatives for the team; track team satisfaction scores and identify initiatives to build engagement within the team.

Posted 2 weeks ago

Apply

5.0 - 15.0 years

0 Lacs

haryana

On-site

As a Network Specialist II at OSTTRA India, you will be a vital member of the Technology team responsible for building, supporting, and protecting the applications that operate the network. The team comprises Capital Markets Technology professionals who work on high-performance, high-volume applications utilizing contemporary microservices and cloud-based architectures. Your role will involve collaborating with colleagues globally to manage high-performance platforms processing over 100 million messages daily, ensuring efficient trade processing and effective global capital market operation. We are seeking highly motivated technology professionals with 10-15 years of experience in Network Engineering to join our team based in Gurgaon. This is an excellent opportunity to work on a diverse portfolio of Financial Services clients and strengthen your expertise in network infrastructure.

Responsibilities:
- Being part of the global network infrastructure team responsible for the office, data center, and cloud network infrastructure at OSTTRA.
- Involvement in all aspects of the network infrastructure lifecycle, including design, implementation, and maintenance.

Requirements:
- Degree in Computer Science or a related field, or equivalent knowledge and work experience.
- A minimum of 5 years' experience in network operations and architecture.
- Proficiency in network security, firewalls, VPNs, and IDS/IPS solutions.
- Extensive experience with protocols such as BGP and MPLS.
- Familiarity with Juniper, Palo Alto Networks, F5, and Arista.
- A self-motivated individual capable of working well under pressure.
- Understanding of networking concepts in virtual environments and hybrid cloud initiatives.
- Strong communication skills, both verbal and written.
- Experience in configuration management, change management, and network automation tools.
- Working knowledge of AWS services like EC2, TGW, ALB, VGW, VPC, Direct Connect, ELB, and CloudFormation.
- Proficiency in network automation tools including Terraform, Ansible, Git, and Python.
- Troubleshooting skills in an AWS environment.
- Experience with Docker, Kubernetes, and Linux operating systems.
- Experience with data centers in the US and UK.
- Previous experience in the financial industry.

Location: Gurgaon, India

About OSTTRA: OSTTRA is a market leader in derivatives post-trade processing, offering innovative solutions to overcome the challenges in global financial markets. The company operates cross-asset post-trade processing networks, providing services like Credit Risk, Trade Workflow, and Optimization to streamline post-trade workflows and enhance operational efficiencies. Joining OSTTRA means being part of a team dedicated to developing critical market infrastructure and supporting global financial markets. As an independent firm jointly owned by S&P Global and CME Group, OSTTRA offers a unique opportunity to work with post-trade experts globally. At OSTTRA, you can expect a supportive work environment that values your well-being and career growth. We offer a range of benefits including healthcare coverage, generous time off, continuous learning resources, family-friendly perks, and more. If you are looking to be part of a collaborative, respectful, and inclusive company that values your expertise, OSTTRA is the place for you. Learn more at www.osttra.com.
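One small, testable slice of the VPC and network-automation work described above is subnet planning. A sketch using Python's standard `ipaddress` module (the CIDR values are illustrative, not from the posting):

```python
import ipaddress

def plan_subnets(vpc_cidr: str, new_prefix: int):
    """Split a VPC CIDR block into equally sized subnets —
    the kind of calculation that precedes any VPC or TGW design."""
    net = ipaddress.ip_network(vpc_cidr)
    return [str(s) for s in net.subnets(new_prefix=new_prefix)]

# A /16 VPC carved into four /18 subnets, e.g. one per AZ plus a spare:
subnets = plan_subnets("10.0.0.0/16", 18)
```

In a Terraform- or CloudFormation-driven workflow, such a helper would feed computed CIDRs into subnet resource definitions.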

Posted 3 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

pune, maharashtra

On-site

Siemens Digital Industries Software is a leading provider of solutions for the design, simulation, and manufacture of products across various industries. From Formula 1 cars to skyscrapers, ships to space exploration vehicles, our Product Lifecycle Management (PLM) software plays a crucial role in bringing these innovations to life. We are currently looking for a proactive and results-driven Cloud DevOps Release Manager to join our dynamic team within the Siemens Experience and Platform Engineering organization. As the Cloud DevOps Release Manager, you will take on the responsibility of overseeing end-to-end accountability for releases, ensuring adherence to release criteria, and establishing robust processes to enhance quality and reliability. Your role will involve working closely with the application development, testing, and production teams to coordinate software releases effectively. In this position, you will actively contribute to agile transformation and release train engineering activities as part of the Cloud Operations Programs and Process organization. Your core responsibilities will include planning and coordinating releases with various teams, defining clear release scopes and schedules, and implementing necessary changes to comply with policies. Your leadership skills and hands-on approach will be essential in driving efficiency, maintaining high standards of quality, and fostering collaboration across different teams. Key Roles and Responsibilities: Release Planning and Coordination: - Drive release planning with development, QA, and operations teams to ensure alignment and successful releases. - Collaborate with teams to plan and coordinate continuous software releases. - Define clear release scope, schedule, and dependencies for smooth deployments. - Participate in Technical Change Advisory and Review boards and submit change records as needed. 
Release Execution and Enforcement: - Ensure adherence to change, testing, and deployment policies before approving production releases. - Oversee planned downtimes and maintain operational reliability standards. - Conduct root cause analysis for release outages and drive corrective actions. Release Automation & Environment Management: - Champion CI/CD standard methodologies for efficient, automated deployments. - Manage version control repositories and implement infrastructure as code practices. - Lead development, testing, staging, and production environments to ensure consistency. Quality Assurance & Continuous Improvement: - Establish quality gates and drive continuous improvement initiatives. - Analyze past releases to identify areas for improvement and optimize release efficiency. Communication & Stakeholder Management: - Act as the central point of accountability for release readiness and execution. - Provide real-time transparency into release status, risks, and mitigation plans. - Ensure clear and timely communication of release schedules and changes. Incident Management: - Collaborate with SRE teams to address post-release incidents and contribute to rapid resolution. Required Qualifications: - Degree in Computer Science, Information Technology, Software Engineering, or related fields. - 5-8 years of experience as a DevOps Release Manager in a fast-paced software development environment. - Proficiency in DevOps practices, CI/CD tools, version control systems, and infrastructure as code tools. - Strong understanding of Agile methodologies and effective leadership skills. - Excellent communication, problem-solving, and adaptability skills. Preferred Qualifications: - 8-10 years of experience with containerization and orchestration technologies. - Relevant certifications in DevOps or related fields. - SAFE Agile RTE certification. At Siemens, we are a diverse and inclusive community of over 377,000 individuals working towards building a better future. 
We value equality and welcome applications from all backgrounds. Join us in shaping tomorrow and unlock a rewarding career with competitive benefits. Become a part of Siemens Software and transform the everyday with us.

Posted 3 weeks ago

Apply

10.0 - 20.0 years

25 - 40 Lacs

Hyderabad

Hybrid

Role & responsibilities: We are seeking dynamic individuals to join our team as individual contributors, collaborating closely with stakeholders to drive impactful results. Working hours: 5:30 pm to 1:30 am (hybrid model).

Must-have skills:
1. 15 years of experience in the design and delivery of distributed systems capable of handling petabytes of data in a distributed environment.
2. 10 years of experience in the development of data lakes, with data ingestion from disparate data sources including relational databases, flat files, APIs, and streaming data.
3. Experience in the design and development of data platforms and data ingestion from disparate data sources into the cloud.
4. Expertise in core AWS services, including IAM, VPC, EC2, EKS/ECS, S3, RDS, DMS, Lambda, CloudWatch, CloudFormation, and CloudTrail.
5. Proficiency in programming languages for efficient data processing, preferably Python and PySpark.
6. Ability to architect and implement robust ETL pipelines using AWS Glue, Lambda, and Step Functions, defining data extraction methods, transformation logic, and data loading procedures across different data sources.
7. Experience in the development of event-driven distributed systems in the cloud using serverless architecture.
8. Ability to work with the infrastructure team on AWS service provisioning for databases, services, network design, IAM roles, and AWS clusters.
9. 2-3 years of experience working with a DocumentDB or MongoDB environment.

Nice-to-have skills:
1. 10 years of experience in the development of data audit, compliance, and retention standards for data governance, and automation of the governance processes.
2. Experience in data modelling with NoSQL databases like DocumentDB.
3. Experience using column-oriented data file formats like Apache Parquet, with Apache Iceberg as the table format for analytical datasets.
4. Expertise in the development of Retrieval-Augmented Generation (RAG) and agentic workflows for providing context to LLMs based on proprietary enterprise data.
5. Ability to develop re-ranking strategies using results from index and vector stores for LLMs to improve the quality of the output.
6. Knowledge of AWS AI services like AWS Entity Resolution and AWS Comprehend.
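The event-driven, serverless ingestion pattern mentioned above typically starts from an S3 event notification triggering a Lambda function. A minimal, hypothetical sketch of the parsing step (bucket and key names are made up):

```python
def extract_s3_objects(event: dict):
    """Pull (bucket, key) pairs out of an S3 event-notification payload —
    the entry point of an event-driven ingestion pipeline."""
    out = []
    for rec in event.get("Records", []):
        s3 = rec.get("s3", {})
        out.append((s3["bucket"]["name"], s3["object"]["key"]))
    return out

# Simulated S3 put-event, shaped like the standard notification format:
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "raw-zone"},
                "object": {"key": "sales/2024/01/data.parquet"}}}
    ]
}
pairs = extract_s3_objects(sample_event)
```

Downstream, each (bucket, key) pair would typically be handed to a Glue job or Step Functions workflow for transformation and loading.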

Posted 3 weeks ago

Apply

4.0 - 9.0 years

0 - 1 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Hi, please find the JD below and send me your updated resume.

- Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field.
- 3+ years of experience in MLOps, DevOps, or ML Engineering roles.
- Strong experience with containerization (Docker) and orchestration (Kubernetes).
- Proficiency in Python and experience working with ML libraries like TensorFlow, PyTorch, or scikit-learn.
- Familiarity with ML pipeline tools such as MLflow, Kubeflow, TFX, Airflow, or SageMaker Pipelines.
- Hands-on experience with cloud platforms (AWS, GCP, Azure) and infrastructure-as-code tools (Terraform, CloudFormation).
- Solid understanding of CI/CD principles, especially as applied to machine learning workflows.

Nice-to-have:
- Experience with feature stores, model registries, and metadata tracking.
- Familiarity with data versioning tools like DVC or LakeFS.
- Exposure to data observability and monitoring tools.
- Knowledge of responsible AI practices, including fairness, bias detection, and explainability.
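As an illustration of CI/CD applied to ML workflows (the last required skill above), here is a tiny, hypothetical model-promotion gate. The metric name and regression threshold are assumptions for the sketch, not from the JD.

```python
def promotion_gate(candidate: dict, production: dict, max_regression: float = 0.01):
    """CI/CD-style quality gate for an ML pipeline: promote a candidate
    model only if its metric does not fall more than `max_regression`
    (absolute) below the current production model's metric."""
    return candidate["auc"] >= production["auc"] - max_regression

# Candidate is slightly worse but within tolerance, so it passes:
decision = promotion_gate({"auc": 0.912}, {"auc": 0.915})
```

In practice such a check would run in the pipeline after evaluation, pulling metrics from a model registry or metadata store rather than literals.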

Posted 3 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

pune, maharashtra

On-site

The ideal candidate for this position should have advanced proficiency in Python, with a solid understanding of classes and inheritance. Additionally, the candidate should be well versed in EMR, Athena, Redshift, AWS Glue, IAM roles, CloudFormation (CFT is optional), Apache Airflow, Git, SQL, PySpark, OpenMetadata, and Data Lakehouse architectures. Experience with metadata management is highly desirable, particularly with AWS services such as S3.

The candidate should possess the following key skills:
- Creation of ETL pipelines
- Deploying code in EMR
- Querying in Athena
- Creating Airflow DAGs for scheduling ETL pipelines
- Knowledge of AWS Lambda and the ability to create Lambda functions

This role is for an individual contributor, and as such, the candidate is expected to autonomously manage client communication and proactively resolve technical issues without external assistance.
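Airflow DAG scheduling, one of the listed skills, boils down to running tasks in dependency order. The idea can be sketched with `graphlib` from the standard library, without Airflow itself (task names are illustrative; in a real DAG these would be operators wired with `>>`):

```python
from graphlib import TopologicalSorter

# Task graph of a typical ETL pipeline: each key lists its upstream
# dependencies, mirroring `extract >> transform >> load` in an Airflow DAG.
etl_dag = {
    "extract": [],
    "transform": ["extract"],
    "load_redshift": ["transform"],
    "refresh_athena_partitions": ["load_redshift"],
}

# static_order() yields tasks so every dependency runs before its dependents.
run_order = list(TopologicalSorter(etl_dag).static_order())
```

Airflow's scheduler does the same ordering, plus retries, backfills, and cron-style scheduling on top.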

Posted 3 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

delhi

On-site

You are a skilled Senior AWS DevOps Engineer with 5 to 8 years of experience in DevOps, cloud computing, and infrastructure engineering. You will play a crucial role in our team by leveraging your expertise in AWS cloud services, infrastructure automation, CI/CD pipelines, and security best practices to design, implement, and manage scalable, secure, and reliable cloud-based solutions. Your responsibilities will include architecting, building, and maintaining highly scalable AWS infrastructure, managing CI/CD pipelines using tools like Jenkins, Bitbucket, or AWS CodePipeline, and developing Infrastructure as Code (IaC) using Terraform, CloudFormation, or AWS CDK. You will automate deployment, monitoring, and scaling of applications and infrastructure while optimizing cloud costs and performance through effective resource management and scaling strategies. As a Senior AWS DevOps Engineer, you will also manage Kubernetes clusters (EKS) and containerized applications using Docker, monitor system performance, troubleshoot issues, and enforce security best practices such as IAM policies, network security, and compliance with industry standards. Collaboration with developers, architects, and security teams will be essential to enhance DevOps best practices and drive continuous improvement in deployment efficiency and system resilience. To excel in this role, you should possess expertise in AWS services like EC2, S3, Lambda, RDS, IAM, VPC, CloudWatch, ECS, and EKS, proficiency in IaC tools, strong knowledge of Kubernetes and container orchestration, and proficiency in scripting and automation using languages like Python, Bash, or Go. Experience with CI/CD pipelines, monitoring and logging tools, networking, security best practices, IAM policies, and configuration management tools will be beneficial. Experience in Agile/Scrum development environments and AWS certifications such as AWS Certified DevOps Engineer Professional are preferred qualifications. 
Knowledge of serverless architectures and service mesh architectures will also be advantageous for this role. If you are a proactive problem solver with a passion for optimizing cloud performance and cost, we look forward to welcoming you to our team as our Senior AWS DevOps Engineer.
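Since the role stresses IAM policies and security best practices, here is a hedged sketch of generating a least-privilege, read-only S3 policy document in Python. The bucket name and action set are illustrative; real policies should be reviewed against actual access patterns.

```python
def s3_read_only_policy(bucket: str) -> dict:
    """Generate an IAM policy document granting read-only access
    to a single S3 bucket (least-privilege starting point)."""
    arn = f"arn:aws:s3:::{bucket}"
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                # ListBucket applies to the bucket ARN, GetObject to objects.
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [arn, f"{arn}/*"],
            }
        ],
    }

policy = s3_read_only_policy("app-logs")  # hypothetical bucket name
```

The resulting dict can be serialized to JSON and attached to a role via IaC (Terraform, CloudFormation, or the CDK).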

Posted 3 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies