
425 Autoscaling Jobs - Page 8


12.0 years

0 Lacs

India

On-site

Machine Learning Engineer

Why WWT? At World Wide Technology, we work together to make a new world happen. Our important work benefits our clients and partners as much as it does our people and communities across the globe. WWT is dedicated to achieving its mission of creating a profitable growth company that is also a Great Place to Work for All. We achieve this through our world-class culture, generous benefits and by delivering cutting-edge technology solutions for our clients. WWT was founded in 1990 in St. Louis, Missouri. We employ more than 10,000 people globally and closed nearly $20 billion in revenue in 2023. We have an inclusive culture and believe our core values are the key to company and employee success. WWT is proud to have been included on the FORTUNE "100 Best Places to Work For®" list 12 years in a row! Want to work with highly motivated individuals on high-performance teams? Join WWT today!

What is the Solutions Consulting & Engineering (SC&E) Team and why join? Solutions Consulting & Engineering is an organization that is Customer Focused and Solutions Led. We deliver end-to-end (E2E) and emerging solutions to drive customer satisfaction, increase profitability and growth. Our success is enabled by our world-class management consulting, delivery excellence and engineering brilliance. We embody the OneWWT mindset by bringing the right talent at the right time from anywhere within WWT to solve our customers' problems. Our goal is to bring together business acumen with full-stack technical know-how to develop innovative solutions for our clients' most complex challenges.

RESPONSIBILITIES:
Develop, productionize, and deploy scalable, resilient software solutions for operationalizing AI & ML.
Deploy Machine Learning (ML) models and Large Language Models (LLMs) securely and efficiently, both in the cloud and on-premises, using state-of-the-art platforms, tools, and techniques.
Provide effective model observability, monitoring, and metrics by instrumenting logging, dashboards, alerts, etc.
In collaboration with Data Engineers, design and build pipelines for extraction, transformation, and loading of data from a variety of data sources for AI & ML models as well as RAG architectures for LLMs.
Enable Data Scientists to work more efficiently by providing tools for experiment tracking and test automation.
Ensure scalability of built solutions by developing and running rigorous load tests.
Facilitate integration of AI & ML capabilities into the user experience by building APIs, UIs, etc.
Stay current on new developments in AI & ML frameworks, tools, techniques, and architectures available for solution development, both private and open source.
Coach data scientists and data engineers on software development best practices to write scalable, maintainable, well-designed code.
Agile Project Work: Work in cross-functional agile teams of highly skilled software/machine learning engineers, data scientists, DevOps engineers, designers, product managers, technical delivery teams, and others to continuously innovate AI and MLOps solutions.
Act as a positive champion for the broader organization to develop a stronger understanding of software design patterns that deliver scalable, maintainable, well-designed analytics solutions.
Advocate for security and responsibility best practices and tools.
Act as an expert on complex technical topics that require cross-functional consultation.
Perform other duties as required.

QUALIFICATIONS:
Experience applying continuous integration/continuous delivery best practices, including Version Control, Trunk Based Development, Release Management, and Test-Driven Development.
Experience with popular MLOps tools (e.g., Domino Data Labs, Dataiku, MLflow, AzureML, SageMaker) and frameworks (e.g., TensorFlow, Keras, Theano, PyTorch, Caffe).
Experience with LLM platforms (OpenAI, Bedrock, NVAIE) and frameworks (LangChain, LangFuse, vLLM, etc.).
Experience in programming languages common to data science, such as Python and SQL.
Understanding of LLMs and supporting concepts (tokenization, guardrails, chunking, Retrieval Augmented Generation, etc.).
Knowledge of the ML lifecycle (data wrangling, model selection, model training, model validation, and deployment at scale) and experience working with data scientists.
Familiarity with at least one major cloud provider (Azure, AWS, GCP), including resource provisioning, connectivity, security, autoscaling, and IaC.
Familiarity with cloud data warehousing solutions such as Snowflake, Fabric, etc.
Experience with Agile and DevOps software development principles/methodologies and working on teams focused on delivering business value.
Experience influencing and building mindshare convincingly with any audience; confident and experienced in public speaking.
Ability to communicate complex ideas in a concise way; fluent with popular diagramming and presentation software.
Demonstrated experience in teaching and/or mentoring professionals.

Want to learn more about SC&E? Check us out on our platform: http://www.wwt.com/consulting-services-careers
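
For readers unfamiliar with the MLOps tooling this role references, the following is a minimal, illustrative sketch of experiment tracking and model registration with MLflow (one of the tools named above). It is not WWT's workflow; the tracking URI, experiment name, and registered model name are hypothetical placeholders.

```python
# Hedged example: log a toy model run to an MLflow tracking server and register it.
# The tracking URI, experiment name, and registered model name are assumptions.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

mlflow.set_tracking_uri("http://localhost:5000")   # assumed local tracking server
mlflow.set_experiment("demo-churn-model")          # hypothetical experiment name

X, y = make_classification(n_samples=500, n_features=10, random_state=42)
model = LogisticRegression(max_iter=200)

with mlflow.start_run():
    model.fit(X, y)
    mlflow.log_param("max_iter", 200)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    # Registering the model lets it be promoted through staging/production stages later.
    mlflow.sklearn.log_model(model, "model", registered_model_name="demo-churn-model")
```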

Posted 1 month ago

Apply

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Head of Architecture and Technology (Hands-On, High-Ownership)
Company: Elysium PTE. LTD.
Location: Chennai, Tamil Nadu — at office
Employment Type: Full-time, permanent
Compensation: ₹15 L fixed CTC + up to 5% ESOP (performance-linked vesting, 4-year schedule with 1-year cliff)
Reports to: Founding Team
________________________________________
About Elysium
Elysium is a founder-led studio headquartered in Singapore with its delivery hub in Chennai. We are currently building a global gaming-based mar-tech platform while running a premium digital-services practice (branding, immersive web, SaaS MVPs, AI-powered solutions). We thrive on speed, experimentation and shared ownership.
________________________________________
The opportunity
We're looking for a hungry technologist who can work in an early-stage start-up alongside the founders to build ambitious global products and services. You'll code hands-on every week, shape product architecture, and grow a lean engineering pod—owning both our flagship product and client deliveries.
________________________________________
What you will achieve in your first 12 months
• Coordinate and develop the in-house products with internal and external teams.
• Build and mentor a six-to-eight-person engineering/design squad that hits ≥85% on-time delivery for IT-service clients.
• Cut mean time-to-deployment to under 30 minutes through automated CI/CD and Infrastructure-as-Code.
• Implement GDPR-ready data flows and a zero-trust security baseline across all projects.
• Publish quarterly tech radars and internal playbooks that keep the team learning and shipping fast.
________________________________________
Day-to-day responsibilities
• Resource management and planning across internal and external teams with respect to our products and client deliveries.
• Pair-program and review pull requests to enforce clean, testable code.
• Translate product/user stories into domain models, sprint plans and staffing forecasts.
• Design cloud architecture (AWS/GCP) that balances cost and scale; own IaC, monitoring and on-call until an SRE is hired.
• Evaluate and manage specialist vendors for parts of the flagship app; hold them accountable on quality and deadlines.
• Scope and pitch technical solutions in client calls; draft SoWs and high-level estimates with founders.
• Coach developers and designers, set engineering KPIs, run retrospectives and post-mortems.
• Prepare technical artefacts for future fundraising and participate in VC diligence.
________________________________________
Must-have Requirements
• 5–8 years of modern full-stack development with at least one product shipped to >10k MAU or comparable B2B scale.
• Expert knowledge of modern full-stack ecosystems: Node.js, Python or Go; React/Next.js; distributed data stores (PostgreSQL, DynamoDB, Redis, Kafka or similar).
• Deep familiarity with AWS, GCP or Azure, including cost-optimized design, autoscaling, serverless patterns, container orchestration and IaC tools such as Terraform or CDK.
• Demonstrated ownership of DevSecOps practices: CI/CD, automated testing matrices, vulnerability scanning, SRE dashboards and incident post-mortems.
• Excellent communication skills, able to explain complex trade-offs to founders, designers, marketers and non-technical investors.
• Hunger to learn, ship fast, and own meaningful equity in lieu of a senior-corporate paycheck.
________________________________________
Nice-to-have extras
• Prior work in fintech, ad-tech or loyalty.
• Experience with WebGL/Three.js, real-time event streaming (Kafka, Kinesis), LLM pipelines and blockchain.
• Exposure to seed- or Series-A fundraising, investor tech diligence or small-team leadership.
________________________________________
What we offer
• ESOP of up to 5% on a 4-year vest (1-year cliff) with performance accelerators tied to product milestones.
• Direct influence on tech stack, culture and product direction—your code and decisions will shape the company's valuation.
• A team that values curiosity, transparency and shipping beautiful work at start-up speed.
________________________________________

Posted 1 month ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

At Franklin Templeton, we're driving our industry forward by developing new and innovative ways to help our clients achieve their investment goals. Our dynamic and diversified firm spans asset management, wealth management, and fintech, offering many ways to help investors make progress toward their goals. Our talented teams working around the globe bring expertise that's both broad and unique. From our welcoming, inclusive, and flexible culture to our global and diverse business, we offer opportunities not only to help you reach your potential but also to contribute to our clients' achievements. Come join us in delivering better outcomes for our clients around the world!

What is the Senior Software Engineer responsible for? The FTT AI & Digital Transformation group is a newly established team within Franklin Templeton Technologies, the Technology function within Franklin Templeton Investments. The core mandate of this role is to bring innovative digital investment products and solutions to market leveraging a patented and innovative digital wealth tech/fintech product - Goals Optimization Engine (GOE) - built with several years of academic research in mathematical optimization, probability theory and AI techniques at its core. The mandate also extends to leveraging cutting-edge AI such as Generative AI, in addition to Reactive AI, to create value within various business functions within Franklin Templeton such as Investment Solutions, Portfolio Management, Sales & Distribution, Marketing, and HR, among others, in a responsible and appropriate manner. The possibilities are limitless here and this would be a fantastic opportunity for self-motivated and driven professionals to make significant contributions to the organization and to themselves.

What are the ongoing responsibilities of the Senior Software Engineer? The Senior Software Engineer provides expertise and experience in application development and production support activities to support business needs: Architect, build, and optimize back-end systems, APIs, and databases to support seamless front-end interactions. Write clean, efficient, and maintainable code with strong documentation across the stack. Integrate AI-assisted development workflows using tools like GitHub Copilot to accelerate delivery. Collaborate closely with product managers, designers, and other developers to deliver high-quality features. Engage in user acceptance testing (UAT) and support test execution with analysts and stakeholders. Build and deploy back-end services in Python using frameworks like Django or Flask. Ensure application security, performance, and scalability through robust testing and peer code reviews. Build for scalability, observability, and resilience in a multi-tenant, white-label setup. Debug and troubleshoot issues across the entire stack, from the database to the front-end. Participate in sprint planning, backlog grooming, and release planning to deliver high-quality features on time. Stay current with industry trends, tools, and best practices to continuously improve development processes. Conduct peer code reviews, static code analysis, and performance tuning to maintain high development standards. Adaptable to ambiguity and rapidly evolving conditions, viewing changes as opportunities to introduce structure and order when appropriate. Reviews source code and design of peers, incorporating advanced business domain knowledge. Offers vocal involvement in design and implementation discussions.
Provides alternate views on software and product design characteristics to strengthen final decisions. Participates in defining the technology roadmap. What ideal qualifications, skills & experience would help someone to be successful? Education And Experience At least 8+ years of experience in software development. A bachelor's degree in computer science, Engineering, or related fields. Candidates from Tier 1 or Tier 2 institutions in India (e.g., IITs, BITS Pilani, IIITs, NITs, etc.) are strongly preferred. Strong understanding of RESTful API design and development Extensive experience building back-end services using Python (Django, Flask). Familiarity with message brokers and event-driven architecture (e.g., Kafka) Familiarity with Node.js and other back-end frameworks as a bonus. Familiarity with Karpenter for dynamic Kubernetes cluster autoscaling and optimizing compute resource utilization Familiarity with Datadog or Kibana for application monitoring, alerting, and observability dashboard for diagnosing performance bottlenecks using telemetry data Experience working with cloud platforms (AWS, GCP, or Azure) and containerization tools (Docker, Kubernetes). Experience with integrating observability tools into CI/CD pipelines and production environment Proficiency in databases, both relational (PostgreSQL, MySQL) and NoSQL (MongoDB). Proficiency in writing unit test cases Strong understanding of API development, authentication, and security protocols such as OAuth and JWT. Hands-on experience with DevOps practices and CI/CD pipelines. Strong proficiency in using AI tools such as GitHub Copilot. Excellent analytical and problem-solving skills with a proactive, solution-oriented mindset. Strong communication and collaboration abilities in team environments. A passion for building user-centric, reliable, and scalable applications. Bonus: Experience with CMS-integrated backends or regulated industries (finance, healthcare, etc.) Job Level - Individual Contributor Work Shift Timings - 2:00 PM - 11:00 PM IST Experience our welcoming culture and reach your professional and personal potential! Our culture is shaped by our diverse global workforce and strongly held core values. Regardless of your interests, lifestyle, or background, there’s a place for you at Franklin Templeton. We provide employees with the tools, resources, and learning opportunities to help them excel in their career and personal life. Hear more from our employees By joining us, you will become part of a culture that focuses on employee well-being and provides multidimensional support for a positive and healthy lifestyle. We understand that benefits are at the core of employee well-being and may vary depending on individual needs. Whether you need support for maintaining your physical and mental health, saving for life’s adventures, taking care of your family members, or making a positive impact in your community, we aim to have them covered. Highlights Of Our Benefits Include Professional development growth opportunities through in-house classes and over 150 Web-based training courses An educational assistance program to financially help employees seeking continuing education Medical, Life and Personal Accident Insurance benefit for employees. 
Medical insurance also cover employee’s dependents (spouses, children and dependent parents) Life insurance for protection of employees’ families Personal accident insurance for protection of employees and their families Personal loan assistance Employee Stock Investment Plan (ESIP) 12 weeks Paternity leave Onsite fitness center, recreation center, and cafeteria Transport facility Child day care facility for women employees Cricket grounds and gymnasium Library Health Center with doctor availability HDFC ATM on the campus Learn more about the wide range of benefits we offer at Franklin Templeton Franklin Templeton is an Equal Opportunity Employer. We are committed to providing equal employment opportunities to all applicants and existing employees, and we evaluate qualified applicants without regard to ancestry, age, color, disability, genetic information, gender, gender identity, or gender expression, marital status, medical condition, military or veteran status, national origin, race, religion, sex, sexual orientation, and any other basis protected by federal, state, or local law, ordinance, or regulation. Franklin Templeton is committed to fostering a diverse and inclusive environment. If you believe that you need an accommodation or adjustment to search for or apply for one of our positions, please send an email to accommodations@franklintempleton.com. In your email, please include the accommodation or adjustment you are requesting, the job title, and the job number you are applying for. It may take up to three business days to receive a response to your request. Please note that only accommodation requests will receive a response.
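
As context for the Python back-end work this posting describes (Flask/Django services, REST APIs), here is a minimal Flask sketch. It is illustrative only, not Franklin Templeton's codebase; the routes, port, and payload shape are assumptions.

```python
# Hedged sketch of a small Flask back-end service: a health probe plus one JSON API route.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/health")
def health():
    # Liveness endpoint for load balancers / container orchestrators.
    return jsonify(status="ok")

@app.route("/api/v1/portfolios/<portfolio_id>")
def get_portfolio(portfolio_id):
    # A real service would query a database; this returns a stub payload.
    return jsonify(id=portfolio_id, holdings=[], currency=request.args.get("ccy", "USD"))

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```

In production a service like this would typically run behind a WSGI server with OAuth/JWT checks on the API routes, in line with the security qualifications listed above.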

Posted 1 month ago

Apply

4.0 - 8.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As a Solution Architect in the Pre-Sales department, with 4-6 years of experience in cloud infrastructure deployment, migration, and managed services, your primary responsibility will be to design AWS Cloud Professional Services and AWS Cloud Managed Services solutions tailored to meet customer needs and requirements. You will engage with customers to analyze their requirements, ensuring cost-effective and technically sound solutions are provided. Your role will also involve developing technical and commercial proposals in response to various client inquiries such as Requests for Information (RFI), Requests for Quotation (RFQ), and Requests for Proposal (RFP). Additionally, you will prepare and deliver technical presentations to clients, highlighting the value and capabilities of AWS solutions. Collaborating closely with the sales team, you will work towards supporting their objectives and closing deals that align with business needs. Your ability to apply creative and analytical problem-solving skills to address complex challenges using AWS technology will be crucial. Furthermore, you should possess hands-on experience in planning, designing, and implementing AWS IaaS, PaaS, and SaaS services. Experience in executing end-to-end cloud migrations to AWS, including discovery, assessment, and implementation, is required. You must also be proficient in designing and deploying well-architected landing zones and disaster recovery environments on AWS. Excellent communication skills, both written and verbal, are essential for effectively articulating solutions to technical and non-technical stakeholders. Your organizational, time management, problem-solving, and analytical skills will play a vital role in driving consistent business performance and exceeding targets. Desirable skills include intermediate-level experience with AWS services like AppStream, Elastic Beanstalk, ECS, ElastiCache, and more, as well as IT orchestration and automation tools such as Ansible, Puppet, and Chef. Knowledge of Terraform, Azure DevOps, and AWS development services will be advantageous. In this role based in Noida, Uttar Pradesh, India, you will have the opportunity to collaborate with technical and non-technical teams across the organization, ensuring scalable, efficient, and secure solutions are delivered on the AWS platform.

Posted 1 month ago

Apply

5.0 years

0 Lacs

India

On-site

Acquia empowers the world's most ambitious brands to create digital customer experiences that matter. With open source Drupal at its core, the Acquia Digital Experience Platform (DXP) enables marketers, developers, and IT operations teams at thousands of global organizations to rapidly compose and deploy digital products and services that engage customers, enhance conversions, and help businesses stand out. Headquartered in the U.S., Acquia is a Great Place to Work-Certified™ company in India, is listed as one of the world's top software companies by The Software Report, and is positioned as a market leader by the analyst community. We are Acquia. We are building for the future and we want you to be a part of it!

Acquia runs one of the world's largest Platform as a Service (PaaS) offerings. Our Drupal-optimized cloud runs on over 18,000 AWS instances and delivers billions of page views per month, running some of the largest and most mission-critical websites in the world. We are seeking exceptional professionals who desire to deliver world-class performance and reliability while building powerful tools that enable our customers to effortlessly scale their web applications. At Acquia, we are obsessive about providing our customers with security, availability, and scalability that is second to none and are looking for engineers who are equally passionate. Acquia's products run 100% on Amazon Web Services using EC2, CloudFormation, and various other technologies and best practices. Since each product is built and maintained by its own engineering team, the ideal candidate for this position would need to be proactive in familiarizing themselves with those services and have the ability to coordinate and collaborate with multiple teams.

As the Senior Software Engineer, you will...
Participate in designing and implementing solutions for modernizing Acquia infrastructure and drive adoption of Kubernetes and cloud-native technologies.
Design and implement an end-to-end container management solution with Kubernetes and Docker.
Develop secure, performant, world-class modern APIs and workflows.
Design and develop Go-based Kubernetes operators using the Kubebuilder SDK.
Set up Kubernetes as a platform with enterprise-level reliability, availability, scalability and performance requirements.
Debug technical issues inside a very deep and complex technical stack involving containers, microservices, and AWS services across the different layers of a web stack.
Work with other teams in deciding on and developing integrations with other subsystems.
Provide product support to internal and external stakeholders.
Contribute to system architecture discussions, lead projects, mentor junior team members, and deliver high-quality, tested code.
Evaluate new technologies and provide recommendations to management, including planning and execution of proof-of-concept activities.

What You'll Need To Be Successful…
5+ years of experience in design and software development, and over 2 years of experience working with containers and cloud-native development.
Proficiency with Kubernetes/Swarm architecture, with hands-on production experience with container technologies and the tools and challenges around them.
Experience developing applications using programming languages such as Go, Python, Ruby and shell scripting.
Proficiency with object-oriented programming and microservices design patterns.
Proficiency with service discovery, networking in Kubernetes or equivalent, monitoring, logging, and scheduling.
Experience working with AWS services such as EC2, EBS, ALB, EKS, VPC, S3, WAF, etc.
Knowledge of CI/CD tools like Jenkins (preferred), Bamboo, GitLab.
Experience working with configuration management tools such as Ansible, Terraform, Puppet and CloudFormation.
Experience operating with TCP/IP, load balancing, security and operating production environments.
Strong knowledge of network layers, Varnish and NGINX.
Strong oral and written communication skills.
Strong team collaboration and leadership skills.
Familiarity with Agile processes (Kanban, Scrum, etc.).

Extra credit if you have…
Experience working with any CDN, such as CloudFront, Cloudflare, etc.
Experience in monitoring and observability with Sumo Logic/Prometheus/Grafana.
Experience working with Helm, ingress, cert-manager, autoscaling, external-dns and the logging operator in Kubernetes.
AWS/CKAD professional certification.

Qualifications: BS in Computer Science (preferred), or a comparable field of study, or equivalent practical experience.

We are an organization that embraces innovation and the potential of AI to enhance our processes and improve our work. We are always looking for individuals who are open to learning new technologies and collaborating with AI tools to achieve our goals. Acquia is proud to provide best-in-class benefits to help our employees and their families maintain a healthy body and mind. Core benefits include: competitive healthcare coverage, wellness programs, take-it-when-you-need-it time off, parental leave, recognition programs, and much more! Individuals seeking employment at Acquia are considered without regard to race, color, religion, caste, creed, national origin, age, sex, marital status, ancestry, physical or mental disability, veteran status, gender identity, or sexual orientation. Whatever you answer will not be considered in the hiring process or thereafter.
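
The posting above asks for Go-based operators and Kubernetes autoscaling experience. Purely as an illustration (and in Python, for consistency with the other sketches on this page), here is what creating a HorizontalPodAutoscaler with the official Kubernetes client might look like; the namespace, deployment name, and thresholds are assumptions, not Acquia's configuration.

```python
# Hedged sketch: create an HPA targeting a hypothetical "web" Deployment at 70% CPU.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

hpa = client.V2HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="web-hpa", namespace="default"),
    spec=client.V2HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V2CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web"
        ),
        min_replicas=2,
        max_replicas=10,
        metrics=[
            client.V2MetricSpec(
                type="Resource",
                resource=client.V2ResourceMetricSource(
                    name="cpu",
                    target=client.V2MetricTarget(type="Utilization", average_utilization=70),
                ),
            )
        ],
    ),
)

client.AutoscalingV2Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```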

Posted 1 month ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

Remote

Project Role: Infra Tech Support Practitioner
Project Role Description: Provide ongoing technical support and maintenance of production and development systems and software products (both remote and onsite) and for configured services running on various platforms (operating within a defined operating model and processes). Provide hardware/software support and implement technology at the operating system level across all server and network areas, and for particular software solutions/vendors/brands. Work includes L1 and L2 / basic and intermediate level troubleshooting.
Must have skills: Kubernetes, DevOps
Good to have skills: Microsoft Azure Container Infrastructure
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Infra Tech Support Practitioner, you will provide ongoing technical support and maintenance for production and development systems and software products. You will be responsible for configuring services on various platforms and implementing technology at the operating system level. Your role will involve troubleshooting at both basic and intermediate levels, ensuring smooth operations and resolving any issues that arise.

Key Responsibilities:
A: Implementation, troubleshooting, monitoring and management of on-prem and cloud IaaS, including AKS cluster provisioning and configuration: deploy and set up AKS clusters with appropriate configurations using Terraform and Azure DevOps.
B: Identify and resolve issues related to Kubernetes, AKS clusters and related components like Istio, GitOps, autoscaling, Helm, Kustomize, etc.
C: Upgrades and patches: plan and perform AKS upgrades and apply security patches to keep the clusters up-to-date and secure.

Technical Experience:
A: Should have in-depth knowledge of AKS concepts like node pools, AKS RBAC, AKS upgrades, node OS upgrades, AKS monitoring, AKS logging, Workload Identity, etc. Working knowledge of Azure DevOps, which includes Azure Repos, branching strategy, pull requests, and git commands.
B: Must know how to use Azure DevOps through an IDE tool like VS Code. Proficiency in using Azure services, especially Azure Kubernetes Service (AKS). Should have good knowledge of Azure services like VNet and Subnet.
C: Good knowledge of Managed Identity, Role Assignment, Key Vault, Storage Account, and Terraform.

Professional Attributes:
A: A process-oriented individual, with strong interpersonal skills, who has the flexibility to thrive in a fast-paced, dynamic organization.
B: Able to lead discussions and act as the front end during complex discussions.
C: Must be able to work a flexible schedule, including overtime and after-hours.

Educational Qualification: BCA in Computers, BSc in Any Specialization, BTech/BE in Any Specialization
Additional Information: Should be comfortable working in a 24x7 regime with on-call support. 15 years full time education.

Posted 1 month ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Position Overview
Job Title: Cloud Engineer, AS
Location: Pune, India

Role Description
A Google Cloud Platform (GCP) Engineer is responsible for designing, implementing, and managing cloud infrastructure and services on Google Cloud. Here's a detailed role description in points: The Platform Engineering Team is responsible for building and maintaining the foundational infrastructure, tooling, and automation that enable efficient, secure, and scalable software development and deployment. The team focuses on creating a self-service platform for developers and operational teams, ensuring reliability, security, and compliance while improving developer productivity. Design and manage scalable, secure, and cost-effective cloud infrastructure (GCP, AWS, Azure). Implement Infrastructure as Code (IaC) using Terraform. Implement security best practices for IAM, networking, encryption, and secrets management. Ensure regulatory compliance (SOC 2, ISO 27001, PCI-DSS) by automating security checks. Manage API gateways, service meshes, and secure service-to-service communication. Enable efficient workload orchestration using Kubernetes and serverless.

What We'll Offer You
As part of our flexible scheme, here are just some of the benefits that you'll enjoy: Best-in-class leave policy. Gender-neutral parental leave. 100% reimbursement under childcare assistance benefit (gender neutral). Sponsorship for industry-relevant certifications and education. Employee Assistance Program for you and your family members. Comprehensive hospitalization insurance for you and your dependents. Accident and term life insurance. Complimentary health screening for those 35 yrs. and above.

Your Key Responsibilities
Cloud Infrastructure Management – Design, deploy, and manage scalable, secure, and cost-effective cloud environments on GCP.
Automation & Scripting – Develop Infrastructure as Code (IaC) using Terraform, Deployment Manager, or other tools.
Security & Compliance – Implement security best practices and IAM policies, and ensure compliance with organizational and regulatory standards.
Networking & Connectivity – Configure and manage VPCs, subnets, firewalls, VPNs, and interconnects for secure cloud networking.
CI/CD & DevOps – Set up CI/CD pipelines using Cloud Build, Jenkins, GitHub Actions, or similar tools for automated deployments.
Monitoring & Logging – Implement monitoring and alerting using Stackdriver (Cloud Operations), Prometheus, or third-party tools.
Cost Optimization – Analyze and optimize cloud spending by leveraging committed use discounts, autoscaling, and right-sizing resources.
Disaster Recovery & Backup – Design backup, high availability, and disaster recovery strategies using Cloud Storage, snapshots, and multi-region deployments.
Database Management – Deploy and manage GCP databases like Cloud SQL, BigQuery, Firestore, and Spanner.
Containerization & Kubernetes – Deploy and manage containerized applications using GKE (Google Kubernetes Engine) and Cloud Run.

Your Skills And Experience
Strong experience with GCP services like Compute Engine, Cloud Storage, IAM, Networking, Kubernetes, and serverless technologies. Proficiency in scripting (Python, Bash) and Infrastructure as Code (Terraform, CloudFormation). Knowledge of DevOps practices, CI/CD tools, and GitOps workflows. Understanding of security, IAM, networking, and compliance in cloud environments. Experience with monitoring tools like Stackdriver, Prometheus, or Datadog. Strong problem-solving skills and ability to troubleshoot cloud-based infrastructure.
Google Cloud certifications (e.g., Associate Cloud Engineer, Professional Cloud Architect, or Professional DevOps Engineer) are a plus. How We’ll Support You Training and development to help you excel in your career. Coaching and support from experts in your team. A culture of continuous learning to aid progression. A range of flexible benefits that you can tailor to suit your needs. About Us And Our Teams Please visit our company website for further information: https://www.db.com/company/company.htm We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.

Posted 1 month ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

We're looking for a DevOps Engineer This role is Office Based, Pune Office We are looking for a skilled DevOps Engineer with hands-on experience in Kubernetes, CI/CD pipelines, cloud infrastructure (AWS/GCP), and observability tooling. You will be responsible for automating deployments, maintaining infrastructure as code, and optimizing system reliability, performance, and scalability across environments. In this role, you will… Develop and maintain CI/CD pipelines to automate testing, deployments, and rollbacks across multiple environments. Manage and troubleshoot Kubernetes clusters (EKS, AKS, GKE) including networking, autoscaling, and application deployments. Collaborate with development and QA teams to streamline code integration, testing, and deployment workflows. Automate infrastructure provisioning using tools like Terraform and Helm. Monitor and improve system performance using tools like Prometheus, Grafana, and the ELK stack. Set up and maintain Kibana dashboards, and ensure high availability of logging and monitoring systems. Manage cloud infrastructure on AWS and GCP, optimizing for performance, reliability, and cost. Build unified observability pipelines by integrating metrics, logs, and traces. Participate in on-shift rotations, handling incident response and root cause analysis, and continuously improve automation and observability. Write scripts and tools in Bash, Python, or Go to automate routine tasks and improve deployment efficiency. You’ve Got What It Takes If You Have… 3+ years of experience in a DevOps, SRE, or Infrastructure Engineering role. Bachelor's degree in Computer Science, IT, or related field. Strong understanding of Linux systems, cloud platforms (AWS/GCP), and containerized microservices. Proficiency with Kubernetes, CI/CD systems, and infrastructure automation. Experience with monitoring/logging tools: Prometheus, Grafana, InfluxDB ELK stack (Elasticsearch, Logstash, Kibana) Familiarity with incident management tools (e.g., PagerDuty) and root cause analysis processes. Basic working knowledge of: Kafka – monitoring topics and consumer health ElastiCache/Redis – caching patterns and diagnostics InfluxDB – time-series data and metrics collection Our Culture Spark Greatness. Shatter Boundaries. Share Success. Are you ready? Because here, right now – is where the future of work is happening. Where curious disruptors and change innovators like you are helping communities and customers enable everyone – anywhere – to learn, grow and advance. To be better tomorrow than they are today. Who We Are Cornerstone powers the potential of organizations and their people to thrive in a changing world. Cornerstone Galaxy, the complete AI-powered workforce agility platform, meets organizations where they are. With Galaxy, organizations can identify skills gaps and development opportunities, retain and engage top talent, and provide multimodal learning experiences to meet the diverse needs of the modern workforce. More than 7,000 organizations and 100 million+ users in 180+ countries and in nearly 50 languages use Cornerstone Galaxy to build high-performing, future-ready organizations and people today. Check us out on LinkedIn , Comparably , Glassdoor , and Facebook !
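
As a rough illustration of the observability work in this listing, the sketch below queries the standard Prometheus HTTP API for an average CPU figure of the kind that feeds alerting and scaling decisions. The Prometheus URL and PromQL expression are assumptions, not Cornerstone's setup.

```python
# Hedged sketch: pull one instant-vector value from Prometheus and act on a threshold.
import requests

PROM_URL = "http://prometheus.internal:9090"  # assumed in-cluster Prometheus endpoint
QUERY = 'avg(rate(container_cpu_usage_seconds_total{namespace="production"}[5m]))'

resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": QUERY}, timeout=10)
resp.raise_for_status()
result = resp.json()["data"]["result"]

if result:
    cpu_cores = float(result[0]["value"][1])  # value is [timestamp, "stringified float"]
    print(f"avg CPU over 5m: {cpu_cores:.3f} cores")
    if cpu_cores > 0.8:
        print("above threshold - would raise an alert or trigger a scale-out")
else:
    print("no samples returned for query")
```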

Posted 1 month ago

Apply

0 years

0 Lacs

Gautam Buddha Nagar, Uttar Pradesh, India

On-site

The ideal candidate must be self-motivated with a proven track record as a Cloud Engineer (AWS) who can help in the implementation, adoption, and day-to-day support of an AWS cloud infrastructure environment distributed among multiple regions and Business Units. The individual in this role must be a technical expert on AWS who understands and practices the AWS Well-Architected Framework and is familiar with a multi-account strategy deployment using Control Tower/Landing Zone setup. The ideal candidate can manage day-to-day operations, troubleshoot problems, provide routine maintenance, and enhance system health monitoring on the cloud stack.

Technical Skills
Strong experience with AWS IaaS architectures.
Hands-on experience in deploying and supporting AWS services such as EC2, autoscaling, AMI management, snapshots, ELB, S3, Route 53, VPC, RDS, SES, SNS, CloudFormation, CloudWatch, IAM, Security Groups, CloudTrail, Lambda, etc.
Experience building and supporting AWS WorkSpaces.
Experience in deploying and troubleshooting either Windows or Linux operating systems.
Experience with AWS SSO and RBAC.
Understanding of DevOps tools such as Terraform, GitHub, and Jenkins.
Experience working with ITSM processes and tools such as Remedy and ServiceNow.
Ability to operate at all levels within the organization and cross-functionally within multiple Client organizations.

Responsibilities
Responsibilities include planning, automation, implementation, and maintenance of the AWS platform and its associated services.
Provide SME / L2 and above level technical support.
Carry out deployment and migration activities.
Mentor and provide technical guidance to L1 engineers.
Monitor AWS infrastructure and perform routine maintenance and operational tasks.
Work on ITSM tickets and ensure adherence to support SLAs.
Work on change management processes.
Excellent analytical and problem-solving skills; exhibits excellent service to others.

Location: Noida, Uttar Pradesh, India
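
For illustration of the EC2 autoscaling work referenced above, a minimal boto3 sketch that attaches a target-tracking scaling policy to an existing Auto Scaling group follows. The group name, region, and target value are assumptions.

```python
# Hedged sketch: keep an Auto Scaling group's average CPU near 50% via target tracking.
import boto3

autoscaling = boto3.client("autoscaling", region_name="ap-south-1")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="app-web-asg",          # hypothetical ASG name
    PolicyName="cpu-target-tracking-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
        "DisableScaleIn": False,
    },
)

# Sanity check: list the policies now attached to the group.
for policy in autoscaling.describe_policies(AutoScalingGroupName="app-web-asg")["ScalingPolicies"]:
    print(policy["PolicyName"], policy["PolicyType"])
```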

Posted 1 month ago

Apply

5.0 years

0 Lacs

West Bengal

On-site

Job Information
Date Opened: 30/07/2025
Job Type: Full time
Industry: IT Services
Work Experience: 5+ Years
City: Kolkata
Province: West Bengal
Country: India
Postal Code: 700091

About Us
We are a fast-growing technology company specializing in current and emerging internet, cloud and mobile technologies.

Job Description
CodelogicX is a forward-thinking tech company dedicated to pushing the boundaries of innovation and delivering cutting-edge solutions. We are seeking a Senior DevOps Engineer with at least 5 years of hands-on experience in building, managing, and optimizing scalable infrastructure and CI/CD pipelines. The ideal candidate will play a crucial role in automating deployment workflows, securing cloud environments and managing container orchestration platforms. You will leverage your expertise in AWS, Kubernetes, ArgoCD, and CI/CD to streamline our development processes, ensure the reliability and scalability of our systems, and drive the adoption of best practices across the team.

Key Responsibilities:
Design, implement, and maintain CI/CD pipelines using GitHub Actions and Bitbucket Pipelines.
Develop and manage Infrastructure as Code (IaC) using Terraform for AWS-based infrastructure.
Set up and administer SFTP servers on cloud-based VMs using chroot configurations and automate file transfers to S3-backed Glacier.
Manage SNS for alerting and notification integration.
Ensure cost optimization of AWS services through billing reviews and usage audits.
Implement and maintain secure secrets management using AWS KMS, Parameter Store, and Secrets Manager.
Configure, deploy, and maintain a wide range of AWS services, including but not limited to:
- Compute Services: Provision and manage compute resources using EC2, EKS, AWS Lambda, and EventBridge for compute-driven, serverless and event-driven architectures.
- Storage & Content Delivery: Manage data storage and archival solutions using S3 and Glacier, and content delivery through CloudFront.
- Networking & Connectivity: Design and manage secure network architectures with VPCs, Load Balancers, Security Groups, VPNs, and Route 53 for DNS routing and failover. Ensure proper functioning of network services like TCP/IP and reverse proxies (e.g., NGINX).
- Monitoring & Observability: Implement monitoring, logging, and tracing solutions using CloudWatch, Prometheus, Grafana, ArgoCD, and OpenTelemetry to ensure system health and performance visibility.
- Database Services: Deploy and manage relational databases via RDS for MySQL, PostgreSQL, Aurora, and healthcare-specific FHIR database configurations.
- Security & Compliance: Enforce security best practices using IAM (roles, policies), AWS WAF, Amazon Inspector, GuardDuty, Security Hub, and Trusted Advisor to monitor, detect, and mitigate risks.
- GitOps: Apply excellent knowledge of GitOps practices, ensuring all infrastructure and application configuration changes are tracked and versioned through Git commits.
Architect and manage Kubernetes environments (EKS), implementing Helm charts, ingress controllers, autoscaling (HPA/VPA), and service meshes (Istio); troubleshoot advanced issues related to pods, services, DNS, and kubelets.
Apply best practices in Git workflows (trunk-based, feature branching) in both monorepo and multi-repo environments.
Maintain, troubleshoot, and optimize Linux-based systems (Ubuntu, CentOS, Amazon Linux).
Support the engineering and compliance teams by addressing requirements for HIPAA, GDPR, ISO 27001, and SOC 2, and ensuring infrastructure readiness.
Perform rollback and hotfix procedures with minimal downtime.
Collaborate with developers to define release and deployment processes.
Manage and standardize build environments across dev, staging, and production.
Manage release and deployment processes across dev, staging, and production.
Work cross-functionally with development and QA teams.
Lead incident postmortems and drive continuous improvement.
Perform root cause analysis and implement corrective/preventive actions for system incidents.
Set up automated backups/snapshots, disaster recovery plans, and incident response strategies.
Ensure on-time patching.
Mentor junior DevOps engineers.

Requirements
Required Qualifications:
Bachelor's degree in Computer Science, Engineering, or equivalent practical experience.
5+ years of proven DevOps engineering experience in cloud-based environments.
Advanced knowledge of AWS, Terraform, CI/CD tools, and Kubernetes (EKS).
Strong scripting and automation mindset.
Solid experience with Linux system administration and networking.
Excellent communication and documentation skills.
Ability to collaborate across teams and lead DevOps initiatives independently.

Preferred Qualifications:
Experience with infrastructure as code tools such as Terraform or CloudFormation.
Experience with GitHub Actions is a plus.
Certifications in AWS (e.g., AWS DevOps Engineer, AWS SysOps Administrator) or Kubernetes (CKA/CKAD).
Experience working in regulated environments (e.g., healthcare or fintech).
Exposure to container security tools and cloud compliance scanners.

Experience: 5-10 Years
Working Mode: Hybrid
Job Type: Full-Time
Location: Kolkata

Benefits
Health insurance
Hybrid working mode
Provident Fund
Parental leave
Yearly Bonus
Gratuity
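
One of the duties above is automating file transfers to S3-backed Glacier from an SFTP landing directory. A minimal, hypothetical boto3 sketch of that kind of job is shown below; the bucket, prefix, and local path are made up for illustration and are not CodelogicX's configuration.

```python
# Hedged sketch: archive files from an SFTP drop directory into the Glacier storage class.
import os
import boto3

s3 = boto3.client("s3")
BUCKET = "example-sftp-archive"          # hypothetical bucket
LOCAL_DIR = "/srv/sftp/incoming"         # hypothetical chrooted SFTP landing directory

for name in os.listdir(LOCAL_DIR):
    path = os.path.join(LOCAL_DIR, name)
    if os.path.isfile(path):
        s3.upload_file(
            path,
            BUCKET,
            f"archive/{name}",
            ExtraArgs={"StorageClass": "GLACIER"},  # store the object in the Glacier class
        )
        print(f"archived {name} to s3://{BUCKET}/archive/{name}")
```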

Posted 1 month ago

Apply

3.0 - 5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Position: Cloud Engineer
Experience: 3-5 years
Location: Noida
Work Mode: WFO

The ideal candidate must be self-motivated with a proven track record as a Cloud Engineer (AWS) who can help in the implementation, adoption, and day-to-day support of an AWS cloud infrastructure environment distributed among multiple regions and Business Units. The individual in this role must be a technical expert on AWS who understands and practices the AWS Well-Architected Framework and is familiar with a multi-account strategy deployment using Control Tower/Landing Zone setup. The ideal candidate can manage day-to-day operations, troubleshoot problems, provide routine maintenance and enhance system health monitoring on the cloud stack. Must have excellent written and verbal communication skills.

Technical Skills
Strong experience with AWS IaaS architectures.
Hands-on experience in deploying and supporting AWS services such as EC2, autoscaling, AMI management, snapshots, ELB, S3, Route 53, VPC, RDS, SES, SNS, CloudFormation, CloudWatch, IAM, Security Groups, CloudTrail, Lambda, etc.
Experience building and supporting AWS WorkSpaces.
Experience in deploying and troubleshooting either Windows or Linux operating systems.
Experience with AWS SSO and RBAC.
Understanding of DevOps tools such as Terraform, GitHub, and Jenkins.
Experience working with ITSM processes and tools such as Remedy and ServiceNow.
Ability to operate at all levels within the organization and cross-functionally within multiple Client organizations.

Responsibilities
Responsibilities include planning, automation, implementation and maintenance of the AWS platform and its associated services.
Provide SME / L2 and above level technical support.
Carry out deployment and migration activities.
Mentor and provide technical guidance to L1 engineers.
Monitor AWS infrastructure and perform routine maintenance and operational tasks.
Work on ITSM tickets and ensure adherence to support SLAs.
Work on change management processes.
Excellent analytical and problem-solving skills; exhibits excellent service to others.

Qualifications
At least 2 to 3 years of relevant experience on AWS.
Overall 3-5 years of IT experience working for a global organization.
Bachelor's Degree or higher in Information Systems, Computer Science, or equivalent experience.
Certified AWS Cloud Practitioner will be preferred.

Location: Noida - UI, Noida, Uttar Pradesh, India

Posted 1 month ago

Apply

4.0 - 6.0 years

0 Lacs

Gurgaon, Haryana, India

Remote

Experience Required: 4-6 years Location: Gurgaon Department: Product and Engineering Working Days: Alternate Saturdays Working (1st and 3rd) 🔧 Key Responsibilities Design, implement, and maintain highly available and scalable infrastructure using AWS Cloud Services. Build and manage Kubernetes clusters (EKS, self-managed) to ensure reliable deployment and scaling of microservices. Develop Infrastructure-as-Code using Terraform, ensuring modular, reusable, and secure provisioning. Containerize applications and optimize Docker images for performance and security. Ensure CI/CD pipelines (Jenkins, GitHub Actions, etc.) are optimized for fast and secure deployments. Drive SRE principles including monitoring, alerting, SLIs/SLOs, and incident response. Set up and manage observability tools (Prometheus, Grafana, ELK, Datadog, etc.). Automate routine tasks with scripting languages (Python, Bash, etc.). Lead capacity planning, auto-scaling, and cost optimization efforts across cloud infrastructure. Collaborate closely with development teams to enable DevSecOps best practices. Participate in on-call rotations, handle outages with calm, and conduct postmortems. 🧰 Must-Have Technical Skills Kubernetes (EKS, Helm, Operators) Docker & Docker Compose Terraform (modular, state management, remote backends) AWS (EC2, VPC, S3, RDS, IAM, CloudWatch, ECS/EKS) Linux system administration CI/CD pipelines (Jenkins, GitLab CI, GitHub Actions) Logging & monitoring tools: ELK, Prometheus, Grafana, CloudWatch Site Reliability Engineering practices Load balancing, autoscaling, and HA architectures 💡 Good-To-Have GCP or Azure exposure Service Mesh (Istio, Linkerd) Secrets management (Vault, AWS Secrets Manager) Security hardening of containers and infrastructure Chaos engineering exposure Knowledge of networking (DNS, firewalls, VPNs) 👤 Soft Skills Strong problem-solving attitude; calm under pressure Good documentation and communication skills Ownership mindset with a drive to automate everything Collaborative and proactive with cross-functional teams
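
Since this role calls for driving SLIs/SLOs and error-budget thinking, here is a small, self-contained sketch of the bookkeeping involved; the SLO target and request counts are invented for illustration.

```python
# Hedged sketch: compute how much of a monthly availability error budget has been consumed.
def error_budget_report(slo_target: float, total_requests: int, failed_requests: int) -> dict:
    """Summarize error-budget consumption for an availability SLO (e.g. 0.999 = 99.9%)."""
    allowed_failures = (1.0 - slo_target) * total_requests   # the error budget, in requests
    consumed = failed_requests / allowed_failures if allowed_failures else float("inf")
    return {
        "sli": 1.0 - failed_requests / total_requests,
        "error_budget_requests": allowed_failures,
        "budget_consumed_pct": round(consumed * 100, 1),
    }

if __name__ == "__main__":
    # e.g. 12.4M requests this month, 9,800 failures, against a 99.9% availability SLO
    print(error_budget_report(0.999, 12_400_000, 9_800))
```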

Posted 1 month ago

Apply

2.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary
At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities
B.Tech/M.Tech from a premier institute with hands-on design/development experience in building and operating highly available services, ensuring stability, security and scalability.
2+ years of software development experience, preferably in product companies.
Proficiency in the latest technologies like Web Components, React/Vue/Bootstrap, Redux, NodeJS, TypeScript, dynamic web applications, Redis, Memcached, Docker, Kafka, MySQL.
Deep understanding of the MVC framework and concepts like HTML, DOM, CSS, REST, AJAX, responsive design, Test-Driven Development.
Experience with AWS, with knowledge of AWS services like Autoscaling, ELB, ElastiCache, SQS, SNS, RDS, S3, Serverless Architecture, Lambda, Gateway, Amazon DynamoDB, etc., or a similar technology stack.
Experience with Operations (AWS, Terraform, scalability, high availability & security) is a big plus.
Able to define APIs and integrate them into web applications using XML, JSON, SOAP/REST APIs.
Knowledge of software fundamentals including design principles & analysis of algorithms, data structure design and implementation, documentation, and unit testing, and the acumen to apply them.
Ability to work proactively and independently with minimal supervision.

Mandatory Skill Sets: Java, React, Node.js, HTML/CSS, XML, JSON, SOAP/REST APIs, AWS
Preferred Skill Sets: Git, CI/CD, Docker, Kubernetes, Unit Testing
Years of experience required: 4–8 years
Education Qualification: BE/B.Tech/MBA/MCA
Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Bachelor of Engineering, Bachelor of Technology, Master of Business Administration
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Full Stack Development
Optional Skills: Accepting Feedback, Active Listening, Algorithm Development, Alteryx (Automation Platform), Analytical Thinking, Analytic Research, Big Data, Business Data Analytics, Communication, Complex Data Analysis, Conducting Research, Creativity, Customer Analysis, Customer Needs Analysis, Dashboard Creation, Data Analysis, Data Analysis Software, Data Collection, Data-Driven Insights, Data Integration, Data Integrity, Data Mining, Data Modeling, Data Pipeline {+ 38 more}
Desired Languages (if blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date

Posted 1 month ago

Apply

5.0 years

0 Lacs

Greater Kolkata Area

On-site

CodelogicX is a forward-thinking tech company dedicated to pushing the boundaries of innovation and delivering cutting-edge solutions. We are seeking a Senior DevOps Engineer with at least 5 years of hands-on experience in building, managing, and optimizing scalable infrastructure and CI/CD pipelines. The ideal candidate will play a crucial role in automating deployment workflows, securing cloud environments and managing container orchestration platforms. You will leverage your expertise in AWS, Kubernetes, ArgoCD, and CI/CD to streamline our development processes, ensure the reliability and scalability of our systems, and drive the adoption of best practices across the team. Key Responsibilities Design, implement, and maintain CI/CD pipelines using GitHub Actions and Bitbucket Pipelines. Develop and manage Infrastructure as Code (IaC) using Terraform for AWS-based infrastructure. Setup and administer SFTP servers on cloud-based VMs using chroot configurations and automate file transfers to S3-backed Glacier. Manage SNS for alerting and notification integration. Ensure cost optimization of AWS services through billing reviews and usage audits. Implement and maintain secure secrets management using AWS KMS, Parameter Store, and Secrets Manager. Configure, deploy, and maintain a wide range of AWS services, including but not limited to: Compute Services Provision and manage compute resources using EC2, EKS, AWS Lambda, and EventBridge for compute-driven, serverless and event-driven architectures. Storage & Content Delivery Manage data storage and archival solutions using S3, Glacier, and content delivery through CloudFront. Networking & Connectivity Design and manage secure network architectures with VPCs, Load Balancers, Security Groups, VPNs, and Route 53 for DNS routing and failover. Ensure proper functioning of Network Services like TCP/IP, reverse proxies (e.g., NGINX). Monitoring & Observability Implement monitoring, logging, and tracing solutions using CloudWatch, Prometheus, Grafana, ArgoCD, and OpenTelemetry to ensure system health and performance visibility. Database Services Deploy and manage relational databases via RDS for MySQL, PostgreSQL, Aurora, and healthcare-specific FHIR database configurations. Security & Compliance Enforce security best practices using IAM (roles, policies), AWS WAF, Amazon Inspector, GuardDuty, Security Hub, and Trusted Advisor to monitor, detect, and mitigate risks. GitOps Apply excellent knowledge of GitOps practices, ensuring all infrastructure and application configuration changes are tracked and versioned through Git commits. Architect and manage Kubernetes environments (EKS), implementing Helm charts, ingress controllers, autoscaling (HPA/VPA), and service meshes (Istio), troubleshoot advanced issues related to pods, services, DNS, and kubelets. Apply best practices in Git workflows (trunk-based, feature branching) in both monorepo and multi-repo environments. Maintain, troubleshoot, and optimize Linux-based systems (Ubuntu, CentOS, Amazon Linux). Support the engineering and compliance teams by addressing requirements for HIPAA, GDPR, ISO 27001, SOC 2, and ensuring infrastructure readiness. Perform rollback and hotfix procedures with minimal downtime. Collaborate with developers to define release and deployment processes. Manage and standardize build environments across dev, staging, and production. Manage release and deployment processes across dev, staging, and production. Work cross-functionally with development and QA teams. 
Lead incident postmortems and drive continuous improvement.
Perform root cause analysis and implement corrective/preventive actions for system incidents.
Set up automated backups/snapshots, disaster recovery plans, and incident response strategies.
Ensure on-time patching.
Mentor junior DevOps engineers.

Requirements
Required Qualifications:
Bachelor's degree in Computer Science, Engineering, or equivalent practical experience.
5+ years of proven DevOps engineering experience in cloud-based environments.
Advanced knowledge of AWS, Terraform, CI/CD tools, and Kubernetes (EKS).
Strong scripting and automation mindset.
Solid experience with Linux system administration and networking.
Excellent communication and documentation skills.
Ability to collaborate across teams and lead DevOps initiatives independently.

Preferred Qualifications:
Experience with infrastructure as code tools such as Terraform or CloudFormation.
Experience with GitHub Actions is a plus.
Certifications in AWS (e.g., AWS DevOps Engineer, AWS SysOps Administrator) or Kubernetes (CKA/CKAD).
Experience working in regulated environments (e.g., healthcare or fintech).
Exposure to container security tools and cloud compliance scanners.

Experience: 5-10 Years
Working Mode: Hybrid
Job Type: Full-Time
Location: Kolkata

Benefits
Health insurance
Hybrid working mode
Provident Fund
Parental leave
Yearly Bonus
Gratuity
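For context on the HPA/VPA responsibility listed above, here is a minimal, illustrative sketch of creating a CPU-based HorizontalPodAutoscaler with the official Kubernetes Python client. It assumes a recent client that exposes the autoscaling/v2 API; the deployment and namespace names are placeholders, not taken from the posting.

```python
# Minimal sketch: create a CPU-based HorizontalPodAutoscaler on an EKS cluster
# with the official Kubernetes Python client. "web-api" and "production" are
# illustrative placeholders.
from kubernetes import client, config

def create_cpu_hpa(namespace: str = "production", deployment: str = "web-api") -> None:
    config.load_kube_config()  # assumes a kubeconfig already points at the cluster

    hpa = client.V2HorizontalPodAutoscaler(
        metadata=client.V1ObjectMeta(name=f"{deployment}-hpa", namespace=namespace),
        spec=client.V2HorizontalPodAutoscalerSpec(
            scale_target_ref=client.V2CrossVersionObjectReference(
                api_version="apps/v1", kind="Deployment", name=deployment
            ),
            min_replicas=2,
            max_replicas=10,
            metrics=[
                client.V2MetricSpec(
                    type="Resource",
                    resource=client.V2ResourceMetricSource(
                        name="cpu",
                        target=client.V2MetricTarget(type="Utilization", average_utilization=70),
                    ),
                )
            ],
        ),
    )
    client.AutoscalingV2Api().create_namespaced_horizontal_pod_autoscaler(namespace, hpa)

if __name__ == "__main__":
    create_cpu_hpa()
```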

Posted 1 month ago

Apply

10.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Our Company
Changing the world through digital experiences is what Adobe’s all about. We give everyone—from emerging artists to global brands—everything they need to design and deliver exceptional digital experiences! We’re passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen. We’re on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours!

As one of the world’s most innovative software companies whose products touch billions of people around the world, Adobe empowers everyone, everywhere to imagine, create, and bring any digital experience to life. From creators and students to small businesses, global enterprises, and nonprofit organizations — customers choose Adobe products to ideate, collaborate, be more productive, drive business growth, and build remarkable experiences. Our 30,000+ employees worldwide are creating the future and raising the bar as we drive the next decade of growth. We’re on a mission to hire the very best and believe in creating a company culture where all employees are empowered to make an impact. At Adobe, we believe that great ideas can come from anywhere in the organization. The next big idea could be yours.

The Opportunity
Adobe is revolutionizing digital experiences by empowering users to craft, manage, and share content effortlessly.

What You'll Do
Build high-quality, performant solutions and features using web technologies.
Drive solutioning and architecture discussions in the team, and technically guide and mentor the team.
Partner with product management on the technical feasibility of features, with equal care for user experience and performance.
Stay proficient in emerging industry technologies and trends, bringing that knowledge to the team to influence product direction.
Use a combination of data and instinct to make decisions and move at a rapid pace.
Craft a culture of collaboration and shared accomplishments, having fun along the way.

What You Need to Succeed
Strong technical background and analytical abilities, with experience developing services based on Java/JavaScript and web applications.
An interest in and ability to learn new technologies.
Demonstrated results working in a diverse, global, team-focused environment.
10+ years of relevant experience in software engineering, with 1+ year as a Tech Lead/Architect for engineering teams.
Proficiency in the latest technologies like Web Components and TypeScript (or other JavaScript frameworks).
Familiarity with MVC frameworks and concepts such as HTML, DOM, CSS, REST, AJAX, responsive design, and development with tests.
Experience with AWS, with knowledge of AWS services like Autoscaling, ELB, ElastiCache, SQS, SNS, RDS, S3, Serverless Architecture, etc., or a similar technology stack.
Able to define APIs and integrate them into web applications using XML, JSON, and SOAP/REST APIs.
Knowledge of software fundamentals, including design principles and analysis of algorithms, data structure design and implementation, documentation, and unit testing, and the acumen to apply them.
Ability to work proactively and independently with minimal supervision.
At Adobe, we believe in creating a company culture where all employees are empowered to make an impact.
Learn more about Adobe life, including our values and culture, focus on people, purpose and community, Adobe For All, comprehensive benefits programs, the stories we tell, the customers we serve, and how you can help us change the world through personalized digital experiences. Adobe is proud to be an Equal Employment Opportunity employer. We do not discriminate based on gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. Learn more about our vision here. Adobe aims to make Adobe.com accessible to any and all users. If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email accommodations@adobe.com or call (408) 536-3015.

Posted 1 month ago

Apply

9.0 - 12.0 years

2 Lacs

Thiruvananthapuram

On-site

9 - 12 Years 1 Opening Trivandrum Role description Role Proficiency: Design and implement Infrastructure/Cloud Architecture for a small/mid size projects Outcomes: Design and implement the architecture for the projects Guide and review technical delivery by project teams Provide technical expertise to other projects Measures of Outcomes: # of reusable components / processes developed # of times components / processes reused Contribution to technology capability development (e.g. Training Webinars Blogs) Customer feedback on overall technical quality (zero technology related escalations) Relevant Technology certifications Business Development (# of proposals contributed to # Won) # white papers/document assets published / working prototypes Outputs Expected: Solution Definition and Design: Define Architecture for the small/mid-sized kind of project Design the technical framework and implement the same Present the detailed design documents to relevant stakeholders and seek feedback Undertake project specific Proof of Concepts activities to validate technical feasibility with guidance from the senior architect Implement best optimized solution and resolve performance issues Requirement gathering and Analysis: Understand the functional and non-functional requirements Collect non-functional requirements (such as response time throughput numbers user load etc.) through discussions with SMEs business users Identify technical aspects as part of story definition especially at an architecture / component level Project Management Support: Share technical inputs with Project Managers/ SCRUM Master Help SCRUM Masters / project managers to understand the technical risks and come up with mitigation strategies Help Engineers and Analysts overcome technical challenges Technology Consulting: Analysis of technology landscape process tools based on project objectives Business and Technical Research: Understand Infrastructure architecture and its' criticality to: analyze and assess tools (internal/external) on specific parameters Understand Infrastructure architecture and its criticality to Support Architect/Sr. Architect in drafting recommendations based on findings of Proof of Concept Understand Infrastructure architecture and its criticality to: analyze and identify new developments in existing technologies (e.g. methodologies frameworks accelerators etc.) Project Estimation: Provide support for project estimations for business proposals and support sprint level / component level estimates Articulate estimation methodology module level estimations for more standard projects with focus on effort estimation alone Proposal Development: Contribute to proposal development of small to medium size projects from technology/architecture perspective Knowledge Management & Capability Development:: Conduct technical trainings/ Webinars to impart knowledge to CIS / project teams Create collaterals (e.g. case study business value documents summary etc.) Gain industry standard certifications on technology and architecture consulting Contribute to knowledge repository and tools Creating reference architecture model reusable components from the project Process Improvements / Delivery Excellence: Identify avenues to improve project delivery parameters (e.g. productivity efficiency process security. etc.) by leveraging tools automation etc. 
Understand various technical tools used in the project (third party as well as home-grown) to improve efficiency productivity Skill Examples: Use Domain/ Industry Knowledge to understand business requirements create POC to meet business requirements under guidance Use Technology Knowledge to analyse technology based on client's specific requirement analyse and understand existing implementations work on simple technology implementations (POC) under guidance guide the developers and enable them in the implementation of same Use knowledge of Architecture Concepts and Principles to provide inputs to the senior architects towards building component solutions deploy the solution as per the architecture under guidance Use Tools and Principles to create low level design under guidance from the senior Architect for the given business requirements Use Project Governance Framework to facilitate communication with the right stakeholders and Project Metrics to help them understand their relevance in project and to share input on project metrics with the relevant stakeholders for own area of work Use Estimation and Resource Planning knowledge to help estimate and plan resources for specific modules / small projects with detailed requirements in place Use Knowledge Management Tools and Techniques to participate in the knowledge management process (such as Project specific KT) consume/contribute to the knowledge management repository Use knowledge of Technical Standards Documentation and Templates to understand and interpret the documents provided Use Solution Structuring knowledge to understand the proposed solution provide inputs to create draft proposals/ RFP (including effort estimation scheduling resource loading etc.) Knowledge Examples: Domain/ Industry Knowledge: Has basic knowledge of standard business processes within the relevant industry vertical and customer business domain Technology Knowledge: Has deep working knowledge on the one technology tower and gain more knowledge in Cloud and Security Estimation and Resource Planning: Has working knowledge of estimation and resource planning techniques Has basic knowledge of industry knowledge management tools (such as portals wiki) UST and customer knowledge management tools techniques (such as workshops classroom training self-study application walkthrough and reverse KT) Technical Standards Documentation and Templates: Has basic knowledge of various document templates and standards (such as business blueprint design documents etc) Requirement Gathering and Analysis: Demonstrates working knowledge of requirements gathering for (non-functional) requirements analysis for functional and non functional requirement analysis tools (such as functional flow diagrams activity diagrams blueprint storyboard and requirements management tools (e.g.MS Excel) Additional Comments: JD Role Overview We’re seeking an AWS Certified Solutions Architect with strong Python and familiarity with .NET ecosystems to lead an application modernization effort. You will partner with cross-functional development teams to transform on-premises, monolithic .NET applications into a cloud-native, microservices-based architecture on AWS. ________________________________________ Key Responsibilities • Architect & Design: o Define the target state: microservices design, domain-driven boundaries, API contracts. o Choose AWS services (EKS/ECS, Lambda, State Machines/Step Functions, API Gateway, EventBridge, RDS/DynamoDB, S3, etc.) 
to meet scalability, availability, and security requirements. • Modernization Roadmap: o Assess existing .NET applications and data stores; identify refactoring vs. re-platform opportunities. o Develop a phased migration strategy • Infrastructure as Code: o Author and review CloudFormation. o Establish CI/CD pipelines (CodePipeline, CodeBuild, GitHub Actions, Jenkins) for automated build, test, and deployment. • Development Collaboration: o Mentor and guide .NET and Python developers on containerization (Docker), orchestration (Kubernetes/EKS), and serverless patterns. o Review code and design patterns to ensure best practices in resilience, observability, and security. • Security & Compliance: o Ensure alignment with IAM roles/policies, VPC networking, security groups, and KMS encryption strategies. o Conduct threat modelling and partner with security teams to implement controls (WAF, GuardDuty, Shield). • Performance & Cost Optimization: o Implement autoscaling, right-sizing, and reserved instance strategies. o Use CloudWatch, X-Ray, Elastic Stack and third-party tools to monitor performance and troubleshoot. • Documentation & Knowledge Transfer: o Produce high-level and detailed architecture diagrams, runbooks, and operational playbooks. o Lead workshops and brown-bags to upskill teams on AWS services and cloud-native design. o Drive day to day work to the 24 by 7 IOC Team. ________________________________________ Must-Have Skills & Experience • AWS Expertise: o AWS Certified Solutions Architect – Associate or Professional o Deep hands-on with EC2, ECS/EKS, Lambda, API Gateway, RDS/Aurora, DynamoDB, S3, VPC, IAM • Programming: o Proficient in Python for automation, Lambdas, and microservices. o Working knowledge of C#/.NET Core for understanding legacy applications and guiding refactoring. • Microservices & Containers: o Design patterns (circuit breaker, saga, sidecar). o Containerization (Docker), orchestration on Kubernetes (EKS) or Fargate. • Infrastructure as Code & CI/CD: o CloudFormation, AWS CDK, or Terraform. o Build/test/deploy pipelines (CodePipeline, CodeBuild, Jenkins, GitHub Actions). • Networking & Security: o VPC design, subnets, NAT, Transit Gateway. o IAM best practices, KMS, WAF, Security Hub, GuardDuty. • Soft Skills: o Excellent verbal and written communication. o Ability to translate complex technical concepts to business stakeholders. o Proven leadership in agile, cross-functional teams. ________________________________________ Preferred / Nice-to-Have • Experience with service mesh (AWS App Mesh, Istio). • Experience with Non-Relational DBs (Neptune, etc.). • Familiarity with event-driven architectures using EventBridge or SNS/SQS. • Exposure to observability tools: CloudWatch Metrics/Logs, X-Ray, Prometheus/Grafana. • Background in migrating SQL Server, Oracle, or other on-prem databases to AWS (DMS, SCT). • Knowledge of serverless frameworks (Serverless Framework, SAM). • Additional certifications: AWS Certified DevOps Engineer, Security Specialty. ________________________________________ Skills Python,Aws Cloud,Aws Administration About UST UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world’s best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. 
With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients’ organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.
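As a rough illustration of the "autoscaling, right-sizing" responsibility mentioned in the role above, the following hedged sketch attaches a target-tracking scaling policy to an existing Auto Scaling group with boto3. The group name and target value are placeholders, not part of the posting.

```python
# Minimal sketch: attach a target-tracking scaling policy to an existing
# Auto Scaling group with boto3. The group name is a placeholder.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="app-asg",            # hypothetical group name
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,                   # keep average CPU near 50%
        "DisableScaleIn": False,
    },
)
```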

Posted 1 month ago

Apply

4.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Role Overview
We are looking for a highly skilled Generative AI Engineer with 4 to 5 years of experience to design and deploy enterprise-grade GenAI systems. This role blends platform architecture, LLM integration, and operationalization, and is ideal for engineers with strong hands-on experience in large language models, RAG pipelines, and AI orchestration.

Responsibilities
Platform Leadership: Architect GenAI platforms powering copilots, document AI, multi-agent systems, and RAG pipelines.
LLM Expertise: Build/fine-tune GPT, Claude, Gemini, LLaMA 2/3, Mistral; deep expertise in RLHF, transformer internals, and multi-modal integration.
RAG Systems: Develop scalable pipelines with embeddings, hybrid retrieval, prompt orchestration, and vector DBs (Pinecone, FAISS, pgvector).
Orchestration & Hosting: Lead LLM hosting, LangChain/LangGraph/AutoGen orchestration, and AWS SageMaker/Bedrock integration.
Responsible AI: Implement guardrails for PII redaction, moderation, lineage, and access aligned with enterprise security standards.
LLMOps/MLOps: Deploy CI/CD pipelines, automate tuning/rollout, and handle drift, rollback, and incidents with KPI dashboards.
Cost Optimization: Reduce TCO via dynamic routing, GPU autoscaling, context compression, and chargeback tooling.
Agentic AI: Build autonomous, critic-supervised agents using MCP, A2A, LGPL patterns.
Evaluation: Use LangSmith, BLEU, ROUGE, BERTScore, and HIL to track hallucination, toxicity, latency, and sustainability.

Skills Required
4–5 years in AI/ML (2+ in GenAI)
Strong Python, PySpark, Scala; APIs via FastAPI, GraphQL, gRPC
Proficiency with MLflow, Kubeflow, Airflow, Prompt flow
Experience with LLMs, vector DBs, prompt engineering, MLOps
Solid foundation in applied mathematics & statistics

Nice to Have
Open-source contributions, AI publications
Hands-on with cloud-native GenAI deployment
Deep interest in ethical AI and AI safety

2 Days WFO Mandatory

Don't meet every job requirement? That's okay! Our company is dedicated to building a diverse, inclusive, and authentic workplace. If you're excited about this role, but your experience doesn't perfectly fit every qualification, we encourage you to apply anyway. You may be just the right person for this role or others.
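To make the RAG responsibility above concrete, here is a minimal, illustrative sketch of the retrieval step using FAISS. The embed() function is a stand-in for whatever embedding model a team actually uses (OpenAI, Bedrock, sentence-transformers, etc.), and the documents are made up.

```python
# Minimal sketch of the retrieval step in a RAG pipeline using FAISS.
# embed() is a placeholder embedding function so the example runs end to end.
import numpy as np
import faiss

def embed(texts: list[str]) -> np.ndarray:
    # Placeholder: random unit vectors; swap in a real embedding model.
    rng = np.random.default_rng(0)
    vecs = rng.normal(size=(len(texts), 384)).astype("float32")
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

documents = [
    "Patients can reset their portal password from the login page.",
    "Claims must be submitted within 90 days of the date of service.",
    "The on-call rota is published every Friday afternoon.",
]

index = faiss.IndexFlatIP(384)           # inner product ~ cosine on unit vectors
index.add(embed(documents))

query = "How long do I have to submit a claim?"
scores, ids = index.search(embed([query]), 2)
context = [documents[i] for i in ids[0]]
print(context)  # top-k chunks that would be stuffed into the LLM prompt
```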

Posted 1 month ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

We at Innovaccer are looking for a Data Modeler to help us build and maintain our unified data models across different domains and subdomains within healthcare. Innovaccer is the #1 healthcare data platform with 100% year-over-year growth. We are building the healthcare cloud to help large healthcare organizations, including providers, payers, and health systems, manage and consume their own data with ease. To succeed in this role, you'll need to have a strong technical understanding of databases, have prior experience in building enterprise data models, and be comfortable working in cross-functional teams with multiple stakeholders.

What You Need
5+ years of recent experience in database management, data analytics, and data warehousing, including cloud-native databases and modern data warehouses
Strong database experience in analyzing, transforming, and integrating data (preferably in one of the database technologies such as Snowflake, Delta Lake, NoSQL, Postgres)
Work with the Data Architect/Solution Architect and application development team to implement data strategies
Create conceptual, logical, and physical data models using best practices for OLTP/analytical models to ensure high data quality and reduced redundancy
Perform reverse engineering of physical data models from databases and SQL scripts and create ER diagrams
Evaluate data models and physical databases for variances and discrepancies
Validate business data objects for accuracy and completeness, especially in the healthcare domain
Hands-on experience in building scalable and efficient processes to build/modify data warehouses and data lakes
Performance tuning at the database level, SQL query optimization, data partitioning, and efficient data-loading strategies
Understanding of Parquet/JSON/Avro data structures for building schemas that support schema evolution
Experience with AWS or Azure cloud architecture in general, including MPP compute, shared storage, autoscaling, and object storage such as ADLS and S3 for data integrations
Experience in Spark's Structured APIs using DataFrames and SQL is preferred
Good to have: Databricks Delta Lake and Snowflake data lake projects

Here's What We Offer
Generous Leaves: Enjoy generous leave benefits of up to 40 days
Parental Leave: Leverage one of industry's best parental leave policies to spend time with your new addition
Sabbatical: Want to focus on skill development, pursue an academic career, or just take a break? We've got you covered
Health Insurance: We offer comprehensive health insurance to support you and your family, covering medical expenses related to illness, disease, or injury. Extending support to the family members who matter most
Care Program: Whether it's a celebration or a time of need, we've got you covered with care vouchers to mark major life events. Through our Care Vouchers program, employees receive thoughtful gestures for significant personal milestones and moments of need
Financial Assistance: Life happens, and when it does, we're here to help. Our financial assistance policy offers support through salary advances and personal loans for genuine personal needs, ensuring help is there when you need it most

Innovaccer is an equal-opportunity employer. We celebrate diversity, and we are committed to fostering an inclusive and diverse workplace where all employees, regardless of race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, marital status, or veteran status, feel valued and empowered.
Disclaimer : Innovaccer does not charge fees or require payment from individuals or agencies for securing employment with us. We do not guarantee job spots or engage in any financial transactions related to employment. If you encounter any posts or requests asking for payment or personal information, we strongly advise you to report them immediately to our HR department at px@innovaccer.com. Additionally, please exercise caution and verify the authenticity of any requests before disclosing personal and confidential information, including bank account details.
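For a sense of the Spark Structured API skill the posting asks for, here is a small, illustrative PySpark sketch over Parquet data; the S3 path and column names are hypothetical, not drawn from the posting.

```python
# Minimal sketch: Spark Structured APIs (DataFrames + SQL) over Parquet.
# The path and column names are illustrative only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("claims-model-check").getOrCreate()

claims = spark.read.parquet("s3://example-bucket/curated/claims/")  # hypothetical path

# DataFrame API: paid-claim counts and totals per payer.
summary = (
    claims
    .filter(F.col("claim_status") == "PAID")
    .groupBy("payer_id")
    .agg(
        F.count("*").alias("paid_claims"),
        F.sum("claim_amount").alias("total_paid"),
    )
)

# Same aggregation expressed in SQL over a temp view.
claims.createOrReplaceTempView("claims")
same_summary_via_sql = spark.sql("""
    SELECT payer_id, COUNT(*) AS paid_claims, SUM(claim_amount) AS total_paid
    FROM claims
    WHERE claim_status = 'PAID'
    GROUP BY payer_id
""")

summary.show()
```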

Posted 1 month ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Join our vibrant team at Zymr as a Senior DevOps CI/CD Engineer and become a driving force behind the exciting world of continuous integration and deployment automation. We're a dynamic group dedicated to building a high-quality product while maintaining exceptional speed and efficiency. This is a fantastic opportunity to be part of our rapidly growing team.

Job Title: Sr. DevOps Engineer
Location: Ahmedabad/Pune
Experience: 8+ Years
Educational Qualification: UG: BS/MS in Computer Science, or other engineering/technical degree

Responsibilities:
Deployments to Development, Staging, and Production. Take charge of managing deployments to each environment with ease:
Skillfully utilize GitHub workflows to identify and resolve root causes of merge conflicts and version mismatches.
Deploy hotfixes promptly by leveraging deployment automation and scripts.
Provide guidance and approval for Ruby on Rails (Ruby) scripting performed by junior engineers, ensuring smooth code deployment across various development environments.
Review and approve CI/CD scripting pull requests from engineers, offering valuable feedback to enhance code quality.
Ensure the smooth operation of each environment on a daily basis, promptly addressing any issues that arise:
Leverage Datadog monitoring to maintain a remarkable uptime of 99.999% for each development environment.
Develop strategic plans for Bash and Ruby scripting to automate health checks and enable auto-healing mechanisms in the event of errors.
Implement effective auto-scaling strategies to handle higher-than-usual traffic on these development environments.
Evaluate historical loads and implement autoscaling mechanisms to provide additional resources and computing power, optimizing workload performance.
Collaborate with DevOps to plan capacity and monitoring using Datadog.
Analyze developer workflows in close collaboration with team leads and attend squad standup meetings, providing valuable suggestions for improvement.
Harness the power of Ruby and Bash to create tools that enhance engineers' development workflow.
Script infrastructure using Terraform to facilitate the creation of infrastructure.
Leverage CI/CD to add security scanning to code pipelines.
Develop Bash and Ruby scripts to automate code deployment while incorporating robust security checks for vulnerabilities.
Enhance our CI/CD pipeline by building canary stages with CircleCI, GitHub Actions, YAML, and Bash scripting.
Integrate stress-testing mechanisms using Ruby on Rails, Python, and Bash scripting into the pipeline's stages.
Look for ways to reduce engineering toil and replace manual processes with automation!

Nice to have:
Terraform is required; GitHub and AWS tooling (though the pipeline itself runs outside of AWS); Rails (other scripting languages okay).

--
Thanks & Regards,
Vishva Shah
Sr Talent Specialist | Zymr, Inc. | www.zymr.com | vishva.shah@zymr.com
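As an illustration of the "evaluate historical loads and implement autoscaling mechanisms" item above, the following hedged boto3 sketch reviews 24 hours of CPU data for an Auto Scaling group and nudges desired capacity. The group name and thresholds are placeholders, and a production setup would normally rely on scaling policies rather than direct capacity changes.

```python
# Minimal sketch: review recent CPU load for an Auto Scaling group with
# CloudWatch and adjust desired capacity accordingly. Thresholds and the
# group name are placeholders.
from datetime import datetime, timedelta, timezone
import boto3

ASG_NAME = "web-asg"  # hypothetical

cloudwatch = boto3.client("cloudwatch")
autoscaling = boto3.client("autoscaling")

end = datetime.now(timezone.utc)
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": ASG_NAME}],
    StartTime=end - timedelta(hours=24),
    EndTime=end,
    Period=3600,
    Statistics=["Average"],
)
datapoints = stats["Datapoints"]
avg_cpu = sum(d["Average"] for d in datapoints) / max(len(datapoints), 1)

group = autoscaling.describe_auto_scaling_groups(
    AutoScalingGroupNames=[ASG_NAME]
)["AutoScalingGroups"][0]
desired = group["DesiredCapacity"]

# Very coarse heuristic for illustration only.
if avg_cpu > 70:
    autoscaling.set_desired_capacity(AutoScalingGroupName=ASG_NAME, DesiredCapacity=desired + 1)
elif avg_cpu < 20 and desired > group["MinSize"]:
    autoscaling.set_desired_capacity(AutoScalingGroupName=ASG_NAME, DesiredCapacity=desired - 1)
```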

Posted 1 month ago

Apply

3.0 - 8.0 years

0 Lacs

hyderabad, telangana

On-site

As a Lead Consultant specializing in AWS Rehost Migration, you will be responsible for leveraging your 8+ years of technical expertise to facilitate the seamless transition of IT infrastructure from on-premises to any cloud environment. Your role will involve creating landing zones and overseeing application migration processes. Your key responsibilities will include assessing the source architecture and aligning it with the relevant target architecture within the cloud ecosystem. You must possess a strong foundation in Linux- or Windows-based systems administration, with a deep understanding of storage, security, and network protocols. Additionally, your proficiency in firewall rules, VPC setup, network routing, Identity and Access Management, and security implementation will be crucial. To excel in this role, you should have hands-on experience with CloudFormation, Terraform templates, or similar automation and scripting tools. Your expertise in implementing AWS services such as EC2, Autoscaling, ELB, EBS, EFS, S3, VPC, RDS, and Route53 will be essential for successful migrations. Furthermore, your familiarity with server migration tools like PlateSpin, Zerto, CloudEndure, MGN, or similar platforms will be advantageous. You will also be required to identify application dependencies using discovery tools or automation scripts and define optimal move groups for migrations with minimal downtime. Your effective communication skills, both verbal and written, will enable you to collaborate efficiently with internal and external stakeholders. By working closely with various teams, you will contribute to the overall success of IT infrastructure migrations and ensure a smooth transition to the cloud environment. If you are a seasoned professional with a passion for cloud technologies and a proven track record in IT infrastructure migration, we invite you to join our team as a Lead Consultant - AWS Rehost Migration.
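To illustrate the "identify application dependencies ... and define optimal move groups" responsibility, here is a minimal boto3 sketch that inventories EC2 instances and buckets them into candidate move groups by an assumed "Application" tag; the tag key is an assumption, and real wave planning would also pull in dependency data from discovery tooling.

```python
# Minimal sketch: inventory EC2 instances and group them into candidate
# move groups by an "Application" tag, as a starting point for migration waves.
from collections import defaultdict
import boto3

ec2 = boto3.client("ec2")
move_groups = defaultdict(list)

paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            app = tags.get("Application", "untagged")   # assumed tag key
            move_groups[app].append(instance["InstanceId"])

for app, instances in sorted(move_groups.items()):
    print(f"{app}: {len(instances)} instance(s) -> {instances}")
```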

Posted 1 month ago

Apply

3.0 - 7.0 years

0 Lacs

chennai, tamil nadu

On-site

Join GlobalLogic and become a valuable part of the team working on a significant software project for a world-class company that provides M2M / IoT 4G/5G modules to industries such as automotive, healthcare, and logistics. As part of our engagement, you will assist in developing end-user modules' firmware, implementing new features, maintaining compatibility with the latest telecommunication and industry standards, and conducting analysis and estimations of customer requirements.

Your responsibilities will include:
- Hands-on experience in cloud deployment using Terraform
- Proficiency in branching, merging, tagging, and maintaining versions across environments using Git & Jenkins pipelines
- Ability to work on Continuous Integration (CI) and end-to-end automation for all builds and deployments
- Experience in implementing Continuous Delivery (CD) pipelines
- Hands-on experience with all the AWS services mentioned in the primary skillset
- Strong verbal and written communication skills

Primary Skillset: IAM, EC2, ELB, EBS, AMI, Route53, Security Groups, AutoScaling, S3
Secondary Skillset: EKS, Terraform, CloudWatch, SNS, SQS, Athena

At GlobalLogic, you will have the opportunity to work on exciting projects in industries such as High-Tech, communication, media, healthcare, retail, and telecom. You will collaborate with a diverse team of talented individuals in an open, laidback environment and may even have the chance to work in one of our global centers or client facilities. We prioritize work-life balance by offering flexible work schedules, work-from-home options, paid time off, and holidays. Our dedicated Learning & Development team provides opportunities for professional development through communication skills training, stress management programs, professional certifications, and technical and soft skill trainings. In addition to competitive salaries, we offer family medical insurance, Group Term Life Insurance, Group Personal Accident Insurance, NPS (National Pension Scheme), extended maternity leave, annual performance bonuses, and referral bonuses. Our fun perks include sports events, cultural activities, food at subsidized rates, corporate parties, and discounts at popular stores and restaurants. GlobalLogic is a leader in digital engineering, helping brands worldwide design and build innovative products, platforms, and digital experiences. We operate around the world, delivering deep expertise to customers in various industries. As a Hitachi Group Company, we contribute to driving innovation through data and technology to create a sustainable society with a higher quality of life.

Posted 1 month ago

Apply

0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Role Description Role Proficiency: Design and implement Infrastructure/Cloud Architecture for a small/mid size projects Outcomes Design and implement the architecture for the projects Guide and review technical delivery by project teams Provide technical expertise to other projects Measures Of Outcomes # of reusable components / processes developed # of times components / processes reused Contribution to technology capability development (e.g. Training Webinars Blogs) Customer feedback on overall technical quality (zero technology related escalations) Relevant Technology certifications Business Development (# of proposals contributed to # Won) # white papers/document assets published / working prototypes Outputs Expected Solution Definition and Design: Define Architecture for the small/mid-sized kind of project Design the technical framework and implement the same Present the detailed design documents to relevant stakeholders and seek feedback Undertake project specific Proof of Concepts activities to validate technical feasibility with guidance from the senior architect Implement best optimized solution and resolve performance issues Requirement Gathering And Analysis Understand the functional and non-functional requirements Collect non-functional requirements (such as response time throughput numbers user load etc.) through discussions with SMEs business users Identify technical aspects as part of story definition especially at an architecture / component level Project Management Support Share technical inputs with Project Managers/ SCRUM Master Help SCRUM Masters / project managers to understand the technical risks and come up with mitigation strategies Help Engineers and Analysts overcome technical challenges Technology Consulting Analysis of technology landscape process tools based on project objectives Business And Technical Research Understand Infrastructure architecture and its' criticality to: analyze and assess tools (internal/external) on specific parameters Understand Infrastructure architecture and its criticality to Support Architect/Sr. Architect in drafting recommendations based on findings of Proof of Concept Understand Infrastructure architecture and its criticality to: analyze and identify new developments in existing technologies (e.g. methodologies frameworks accelerators etc.) Project Estimation Provide support for project estimations for business proposals and support sprint level / component level estimates Articulate estimation methodology module level estimations for more standard projects with focus on effort estimation alone Proposal Development Contribute to proposal development of small to medium size projects from technology/architecture perspective Knowledge Management & Capability Development:: Conduct technical trainings/ Webinars to impart knowledge to CIS / project teams Create collaterals (e.g. case study business value documents Summary etc.) Gain industry standard certifications on technology and architecture consulting Contribute to knowledge repository and tools Creating reference architecture model reusable components from the project Process Improvements / Delivery Excellence Identify avenues to improve project delivery parameters (e.g. productivity efficiency process security. etc.) by leveraging tools automation etc. 
Understand various technical tools used in the project (third party as well as home-grown) to improve efficiency productivity Skill Examples Use Domain/ Industry Knowledge to understand business requirements create POC to meet business requirements under guidance Use Technology Knowledge to analyse technology based on client's specific requirement analyse and understand existing implementations work on simple technology implementations (POC) under guidance guide the developers and enable them in the implementation of same Use knowledge of Architecture Concepts and Principles to provide inputs to the senior architects towards building component solutions deploy the solution as per the architecture under guidance Use Tools and Principles to create low level design under guidance from the senior Architect for the given business requirements Use Project Governance Framework to facilitate communication with the right stakeholders and Project Metrics to help them understand their relevance in project and to share input on project metrics with the relevant stakeholders for own area of work Use Estimation and Resource Planning knowledge to help estimate and plan resources for specific modules / small projects with detailed requirements in place Use Knowledge Management Tools and Techniques to participate in the knowledge management process (such as Project specific KT) consume/contribute to the knowledge management repository Use knowledge of Technical Standards Documentation and Templates to understand and interpret the documents provided Use Solution Structuring knowledge to understand the proposed solution provide inputs to create draft proposals/ RFP (including effort estimation scheduling resource loading etc.) Knowledge Examples Domain/ Industry Knowledge: Has basic knowledge of standard business processes within the relevant industry vertical and customer business domain Technology Knowledge: Has deep working knowledge on the one technology tower and gain more knowledge in Cloud and Security Estimation and Resource Planning: Has working knowledge of estimation and resource planning techniques Has basic knowledge of industry knowledge management tools (such as portals wiki) UST and customer knowledge management tools techniques (such as workshops classroom training self-study application walkthrough and reverse KT) Technical Standards Documentation and Templates: Has basic knowledge of various document templates and standards (such as business blueprint design documents etc) Requirement Gathering and Analysis: Demonstrates working knowledge of requirements gathering for (non-functional) requirements analysis for functional and non functional requirement analysis tools (such as functional flow diagrams activity diagrams blueprint storyboard and requirements management tools (e.g.MS Excel) Additional Comments JD Role Overview We’re seeking an AWS Certified Solutions Architect with strong Python and familiarity with .NET ecosystems to lead an application modernization effort. You will partner with cross-functional development teams to transform on-premises, monolithic .NET applications into a cloud-native, microservices-based architecture on AWS. ________________________________________ Key Responsibilities Architect & Design: o Define the target state: microservices design, domain-driven boundaries, API contracts. o Choose AWS services (EKS/ECS, Lambda, State Machines/Step Functions, API Gateway, EventBridge, RDS/DynamoDB, S3, etc.) to meet scalability, availability, and security requirements. 
Modernization Roadmap: o Assess existing .NET applications and data stores; identify refactoring vs. re-platform opportunities. o Develop a phased migration strategy Infrastructure as Code: o Author and review CloudFormation. o Establish CI/CD pipelines (CodePipeline, CodeBuild, GitHub Actions, Jenkins) for automated build, test, and deployment. Development Collaboration: o Mentor and guide .NET and Python developers on containerization (Docker), orchestration (Kubernetes/EKS), and serverless patterns. o Review code and design patterns to ensure best practices in resilience, observability, and security. Security & Compliance: o Ensure alignment with IAM roles/policies, VPC networking, security groups, and KMS encryption strategies. o Conduct threat modelling and partner with security teams to implement controls (WAF, GuardDuty, Shield). Performance & Cost Optimization: o Implement autoscaling, right-sizing, and reserved instance strategies. o Use CloudWatch, X-Ray, Elastic Stack and third-party tools to monitor performance and troubleshoot. Documentation & Knowledge Transfer: o Produce high-level and detailed architecture diagrams, runbooks, and operational playbooks. o Lead workshops and brown-bags to upskill teams on AWS services and cloud-native design. o Drive day to day work to the 24 by 7 IOC Team. ________________________________________ Must-Have Skills & Experience AWS Expertise: o AWS Certified Solutions Architect – Associate or Professional o Deep hands-on with EC2, ECS/EKS, Lambda, API Gateway, RDS/Aurora, DynamoDB, S3, VPC, IAM Programming: o Proficient in Python for automation, Lambdas, and microservices. o Working knowledge of C#/.NET Core for understanding legacy applications and guiding refactoring. Microservices & Containers: o Design patterns (circuit breaker, saga, sidecar). o Containerization (Docker), orchestration on Kubernetes (EKS) or Fargate. Infrastructure as Code & CI/CD: o CloudFormation, AWS CDK, or Terraform. o Build/test/deploy pipelines (CodePipeline, CodeBuild, Jenkins, GitHub Actions). Networking & Security: o VPC design, subnets, NAT, Transit Gateway. o IAM best practices, KMS, WAF, Security Hub, GuardDuty. Soft Skills: o Excellent verbal and written communication. o Ability to translate complex technical concepts to business stakeholders. o Proven leadership in agile, cross-functional teams. ________________________________________ Preferred / Nice-to-Have Experience with service mesh (AWS App Mesh, Istio). Experience with Non-Relational DBs (Neptune, etc.). Familiarity with event-driven architectures using EventBridge or SNS/SQS. Exposure to observability tools: CloudWatch Metrics/Logs, X-Ray, Prometheus/Grafana. Background in migrating SQL Server, Oracle, or other on-prem databases to AWS (DMS, SCT). Knowledge of serverless frameworks (Serverless Framework, SAM). Additional certifications: AWS Certified DevOps Engineer, Security Specialty. ________________________________________ Skills Python,Aws Cloud,Aws Administration
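As a small illustration of the "Python for automation, Lambdas, and microservices" requirement above, here is a minimal, assumed-shape Lambda handler for an API Gateway proxy integration; the payload fields are illustrative only.

```python
# Minimal sketch of a Python Lambda handler behind API Gateway (proxy
# integration). The request/response fields are illustrative.
import json

def lambda_handler(event, context):
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```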

Posted 1 month ago

Apply

4.0 - 6.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Position: Solution Architect- Presales Experience- 4-6 Years Job description (Roles & Responsibilities) : Pre-Sales Solution Design : Design AWS Cloud Professional Services and AWS Cloud Managed Services solutions based on customer needs and requirements. Customer Requirement Analysis : Engage with customers to understand their requirements and provide cost-effective, technically sound solutions to meet these needs. Proposal Preparation : Develop technical and commercial proposals in response to Requests for Information (RFI), Requests for Quotation (RFQ) and Requests for Proposal (RFP). Technical Presentations : Prepare and deliver technical presentations to clients, demonstrating the value and capability of AWS solutions. Solution Design : Tailor solutions to customer requirements, ensuring they are scalable, efficient, and secure on the AWS platform. Sales Team Support : Work closely with the sales team to support their goals and help close deals, ensuring alignment of solutions with business needs. Creative & Analytical Thinking : Apply creative and analytical problem-solving skills to address complex customer challenges using AWS technology. Collaboration : Collaborate effectively with technical and non-technical teams across the organization. Communication Skills : Excellent verbal and written communication skills, with the ability to clearly articulate solutions to both technical and non-technical audiences. Performance-Oriented : Drive consistent business performance, meeting and exceeding targets while delivering high-quality solutions. Mandatory Skills 4-6 years of experience in cloud infrastructure deployment, migration and managed services. Hands-on experience in planning, designing and implementation of AWS IaaS, PaaS and SaaS services. Hands-on experience of executing end-to-end cloud migration to AWS including the migration discovery, assessment, and execution. Hands-on experience of designing & deploying a multi-account well-architected landing zone on AWS. Hands-on experience of designing & deploying the disaster recovery environment for applications and databases on AWS. Excellent written and verbal communications skills and an ability to maintain a high degree of professionalism in all client communications. Excellent organization, time management, problem-solving, and analytical skills. Ability to work on timeline bound assignments, handle pressure, and focus on results. Intermediate level of hands on experience with essential AWS services such as EC2, Lambda, RDS, DynamoDB, IAM, S3, VPC, AutoScaling, CloudTrail, CloudWatch, SNS, SQS, SES, Direct Connect, S2S VPN, CloudFormation, Config, Systems Manager, Route53, Cost Explorer, Saving Plans & Reserved Instances, Certificate Manager, Migration Hub, Application Migration Service, Database Migration Service, Organization & Control Tower. Good working knowledge of basic infrastructure services such as Active Directory, DNS, Networking, Security Desired Skills Intermediate level of hands on experience with AWS services such as AppStream, WorkSpaces, Elastic BeanStalk, ECS, EKS, Elasticache, Kinesis, CloudFront. Intermediate level of hands on experience with IT orchestration & automation tools such as Ansible, Puppet & Chef. Intermediate level of hands on experience with Terraform, Azure DevOps, AWS Development services such as CodeCommit, CodeBuild, CodePipeline, and CodeDeploy. Location: Noida - UI, Noida, Uttar Pradesh, India
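To ground the Cost Explorer / Savings Plans skill listed above, here is a hedged boto3 sketch that pulls one month of cost grouped by service; the date range is a placeholder and the Cost Explorer API must already be enabled on the account.

```python
# Minimal sketch: last month's cost by service via Cost Explorer, the kind of
# data that feeds right-sizing and Savings Plan conversations.
import boto3

ce = boto3.client("ce")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-05-01", "End": "2024-06-01"},  # placeholder range
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for group in response["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    print(f"{service}: ${amount:,.2f}")
```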

Posted 1 month ago

Apply

2.0 - 3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Title: AWS Cloud Engineer
Location: Noida

The ideal candidate must be self-motivated, with a proven track record as a Cloud Engineer (AWS) who can help in the implementation, adoption, and day-to-day support of an AWS cloud infrastructure environment distributed among multiple regions and business units. The individual in this role must be a technical expert on AWS who understands and practices the AWS Well-Architected Framework and is familiar with multi-account strategy deployments using a Control Tower/Landing Zone setup. The ideal candidate can manage day-to-day operations, troubleshoot problems, provide routine maintenance, and enhance system health monitoring on the cloud stack.

Technical Skills
Strong experience with AWS IaaS architectures
Hands-on experience in deploying and supporting AWS services such as EC2, Auto Scaling, AMI management, snapshots, ELB, S3, Route 53, VPC, RDS, SES, SNS, CloudFormation, CloudWatch, IAM, Security Groups, CloudTrail, Lambda, etc.
Experience in building and supporting Amazon WorkSpaces
Experience in deploying and troubleshooting either Windows or Linux operating systems
Experience with AWS SSO and RBAC
Understanding of DevOps tools such as Terraform, GitHub, and Jenkins
Experience working on ITSM processes and tools such as Remedy and ServiceNow
Ability to operate at all levels within the organization and cross-functionally within multiple client organizations

Responsibilities
Responsibilities include planning, automation, implementation, and maintenance of the AWS platform and its associated services
Provide SME / L2 and above level technical support
Carry out deployment and migration activities
Must be able to mentor and provide technical guidance to L1 engineers
Monitor AWS infrastructure and perform routine maintenance and operational tasks
Work on ITSM tickets and ensure adherence to support SLAs
Work on change management processes
Excellent analytical and problem-solving skills; exhibits excellent service to others

Qualifications
At least 2 to 3 years of relevant experience on AWS
Overall, 3-5 years of IT experience working for a global organization
Bachelor's Degree or higher in Information Systems, Computer Science, or equivalent experience
Certified AWS Cloud Practitioner will be preferred

Location: Noida - UI, Noida, Uttar Pradesh, India
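As an example of the routine health-monitoring work described above, here is a minimal boto3 sketch that creates a CloudWatch CPU alarm wired to an SNS topic; the instance ID and topic ARN are placeholders, not values from the posting.

```python
# Minimal sketch: a CPU alarm on a single instance that notifies an SNS topic,
# one building block of routine health monitoring. IDs/ARNs are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="ec2-high-cpu-i-0123456789abcdef0",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    Statistic="Average",
    Period=300,
    EvaluationPeriods=3,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:111122223333:ops-alerts"],      # placeholder
    AlarmDescription="Average CPU above 80% for 15 minutes",
)
```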

Posted 1 month ago

Apply