Jobs
Interviews

242 Autoscaling Jobs

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

We're looking for a DevOps Engineer. This role is office-based at our Pune office. We are looking for a skilled DevOps Engineer with hands-on experience in Kubernetes, CI/CD pipelines, cloud infrastructure (AWS/GCP), and observability tooling. You will be responsible for automating deployments, maintaining infrastructure as code, and optimizing system reliability, performance, and scalability across environments.

In this role, you will:
- Develop and maintain CI/CD pipelines to automate testing, deployments, and rollbacks across multiple environments.
- Manage and troubleshoot Kubernetes clusters (EKS, AKS, GKE), including networking, autoscaling, and application deployments.
- Collaborate with development and QA teams to streamline code integration, testing, and deployment workflows.
- Automate infrastructure provisioning using tools like Terraform and Helm.
- Monitor and improve system performance using tools like Prometheus, Grafana, and the ELK stack.
- Set up and maintain Kibana dashboards, and ensure high availability of logging and monitoring systems.
- Manage cloud infrastructure on AWS and GCP, optimizing for performance, reliability, and cost.
- Build unified observability pipelines by integrating metrics, logs, and traces.
- Participate in on-call rotations, handling incident response and root cause analysis, and continuously improve automation and observability.
- Write scripts and tools in Bash, Python, or Go to automate routine tasks and improve deployment efficiency.

You've Got What It Takes If You Have:
- 3+ years of experience in a DevOps, SRE, or Infrastructure Engineering role.
- Bachelor's degree in Computer Science, IT, or a related field.
- Strong understanding of Linux systems, cloud platforms (AWS/GCP), and containerized microservices.
- Proficiency with Kubernetes, CI/CD systems, and infrastructure automation.
- Experience with monitoring/logging tools: Prometheus, Grafana, InfluxDB, and the ELK stack (Elasticsearch, Logstash, Kibana).
- Familiarity with incident management tools (e.g., PagerDuty) and root cause analysis processes.
- Basic working knowledge of:
  - Kafka: monitoring topics and consumer health
  - ElastiCache/Redis: caching patterns and diagnostics
  - InfluxDB: time-series data and metrics collection

Our Culture
Spark Greatness. Shatter Boundaries. Share Success. Are you ready? Because here, right now, is where the future of work is happening. Where curious disruptors and change innovators like you are helping communities and customers enable everyone, anywhere, to learn, grow and advance. To be better tomorrow than they are today.

Who We Are
Cornerstone powers the potential of organizations and their people to thrive in a changing world. Cornerstone Galaxy, the complete AI-powered workforce agility platform, meets organizations where they are. With Galaxy, organizations can identify skills gaps and development opportunities, retain and engage top talent, and provide multimodal learning experiences to meet the diverse needs of the modern workforce. More than 7,000 organizations and 100 million+ users in 180+ countries and in nearly 50 languages use Cornerstone Galaxy to build high-performing, future-ready organizations and people today. Check us out on LinkedIn, Comparably, Glassdoor, and Facebook!
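The Kubernetes autoscaling this role manages (the HPA) boils down to a simple proportional rule: desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric). A minimal sketch in Python (the function name is illustrative; the real controller also applies tolerances, stabilization windows, and min/max replica bounds):

```python
import math

def desired_replicas(current_replicas: int, current_metric: float, target_metric: float) -> int:
    """Proportional scaling rule at the core of the Kubernetes HPA:
    desired = ceil(current * currentMetric / targetMetric)."""
    return math.ceil(current_replicas * current_metric / target_metric)

# 4 pods averaging 800m CPU against a 500m target scale out to 7 pods.
new_count = desired_replicas(4, 800, 500)
```

For example, at exactly the target metric the replica count is left unchanged, which is why a tolerance band matters in the real controller.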

Posted 23 hours ago

Apply

0 years

0 Lacs

Gautam Buddha Nagar, Uttar Pradesh, India

On-site

The ideal candidate must be self-motivated, with a proven track record as a Cloud Engineer (AWS) who can help in the implementation, adoption, and day-to-day support of an AWS cloud infrastructure environment distributed among multiple regions and business units. The individual in this role must be a technical expert on AWS who understands and practices the AWS Well-Architected Framework and is familiar with multi-account strategy deployment using a Control Tower/Landing Zone setup. The ideal candidate can manage day-to-day operations, troubleshoot problems, provide routine maintenance, and enhance system health monitoring on the cloud stack.

Technical Skills
- Strong experience with AWS IaaS architectures
- Hands-on experience in deploying and supporting AWS services such as EC2, autoscaling, AMI management, snapshots, ELB, S3, Route 53, VPC, RDS, SES, SNS, CloudFormation, CloudWatch, IAM, Security Groups, CloudTrail, Lambda, etc.
- Experience building and supporting AWS WorkSpaces
- Experience deploying and troubleshooting either Windows or Linux operating systems
- Experience with AWS SSO and RBAC
- Understanding of DevOps tools such as Terraform, GitHub, and Jenkins
- Experience working with ITSM processes and tools such as Remedy and ServiceNow
- Ability to operate at all levels within the organization and cross-functionally within multiple client organizations

Responsibilities
- Planning, automation, implementation, and maintenance of the AWS platform and its associated services
- Provide SME / L2-and-above-level technical support
- Carry out deployment and migration activities
- Mentor and provide technical guidance to L1 engineers
- Monitor AWS infrastructure and perform routine maintenance and operational tasks
- Work on ITSM tickets and ensure adherence to support SLAs
- Work on change management processes
- Excellent analytical and problem-solving skills; exhibits excellent service to others

Location: Noida, Uttar Pradesh, India

Posted 1 day ago

Apply

5.0 years

0 Lacs

West Bengal

On-site

Job Information
Date Opened: 30/07/2025
Job Type: Full time
Industry: IT Services
Work Experience: 5+ Years
City: Kolkata
Province: West Bengal
Country: India
Postal Code: 700091

About Us
We are a fast-growing technology company specializing in current and emerging internet, cloud, and mobile technologies.

Job Description
CodelogicX is a forward-thinking tech company dedicated to pushing the boundaries of innovation and delivering cutting-edge solutions. We are seeking a Senior DevOps Engineer with at least 5 years of hands-on experience in building, managing, and optimizing scalable infrastructure and CI/CD pipelines. The ideal candidate will play a crucial role in automating deployment workflows, securing cloud environments, and managing container orchestration platforms. You will leverage your expertise in AWS, Kubernetes, ArgoCD, and CI/CD to streamline our development processes, ensure the reliability and scalability of our systems, and drive the adoption of best practices across the team.

Key Responsibilities:
- Design, implement, and maintain CI/CD pipelines using GitHub Actions and Bitbucket Pipelines.
- Develop and manage Infrastructure as Code (IaC) using Terraform for AWS-based infrastructure.
- Set up and administer SFTP servers on cloud-based VMs using chroot configurations, and automate file transfers to S3-backed Glacier.
- Manage SNS for alerting and notification integration.
- Ensure cost optimization of AWS services through billing reviews and usage audits.
- Implement and maintain secure secrets management using AWS KMS, Parameter Store, and Secrets Manager.
- Configure, deploy, and maintain a wide range of AWS services, including but not limited to:
  - Compute Services: provision and manage compute resources using EC2, EKS, AWS Lambda, and EventBridge for compute-driven, serverless, and event-driven architectures.
  - Storage & Content Delivery: manage data storage and archival solutions using S3 and Glacier, and content delivery through CloudFront.
  - Networking & Connectivity: design and manage secure network architectures with VPCs, Load Balancers, Security Groups, VPNs, and Route 53 for DNS routing and failover; ensure proper functioning of network services like TCP/IP and reverse proxies (e.g., NGINX).
  - Monitoring & Observability: implement monitoring, logging, and tracing solutions using CloudWatch, Prometheus, Grafana, ArgoCD, and OpenTelemetry to ensure system health and performance visibility.
  - Database Services: deploy and manage relational databases via RDS for MySQL, PostgreSQL, and Aurora, along with healthcare-specific FHIR database configurations.
  - Security & Compliance: enforce security best practices using IAM (roles, policies), AWS WAF, Amazon Inspector, GuardDuty, Security Hub, and Trusted Advisor to monitor, detect, and mitigate risks.
  - GitOps: apply GitOps practices, ensuring all infrastructure and application configuration changes are tracked and versioned through Git commits.
- Architect and manage Kubernetes environments (EKS), implementing Helm charts, ingress controllers, autoscaling (HPA/VPA), and service meshes (Istio); troubleshoot advanced issues related to pods, services, DNS, and kubelets.
- Apply best practices in Git workflows (trunk-based, feature branching) in both monorepo and multi-repo environments.
- Maintain, troubleshoot, and optimize Linux-based systems (Ubuntu, CentOS, Amazon Linux).
- Support the engineering and compliance teams by addressing requirements for HIPAA, GDPR, ISO 27001, and SOC 2, and ensuring infrastructure readiness.
- Perform rollback and hotfix procedures with minimal downtime.
- Collaborate with developers to define release and deployment processes.
- Manage and standardize build environments across dev, staging, and production.
- Manage release and deployment processes across dev, staging, and production.
- Work cross-functionally with development and QA teams.
- Lead incident postmortems and drive continuous improvement.
- Perform root cause analysis and implement corrective/preventive actions for system incidents.
- Set up automated backups/snapshots, disaster recovery plans, and incident response strategies.
- Ensure on-time patching.
- Mentor junior DevOps engineers.

Requirements
Required Qualifications:
- Bachelor's degree in Computer Science, Engineering, or equivalent practical experience.
- 5+ years of proven DevOps engineering experience in cloud-based environments.
- Advanced knowledge of AWS, Terraform, CI/CD tools, and Kubernetes (EKS).
- Strong scripting and automation mindset.
- Solid experience with Linux system administration and networking.
- Excellent communication and documentation skills.
- Ability to collaborate across teams and lead DevOps initiatives independently.

Preferred Qualifications:
- Experience with infrastructure-as-code tools such as Terraform or CloudFormation.
- Experience with GitHub Actions is a plus.
- Certifications in AWS (e.g., AWS DevOps Engineer, AWS SysOps Administrator) or Kubernetes (CKA/CKAD).
- Experience working in regulated environments (e.g., healthcare or fintech).
- Exposure to container security tools and cloud compliance scanners.

Experience: 5-10 Years
Working Mode: Hybrid
Job Type: Full-Time
Location: Kolkata

Benefits
- Health insurance
- Hybrid working mode
- Provident Fund
- Parental leave
- Yearly Bonus
- Gratuity
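The automated backup/snapshot duty above usually pairs with a retention policy that decides which snapshots to prune. A minimal sketch of that logic in Python (the keep-last-N policy and function name are illustrative assumptions, not CodelogicX's actual scheme):

```python
from datetime import date

def snapshots_to_delete(snapshot_dates, keep_daily=7):
    """Return the snapshot dates that fall outside the newest
    `keep_daily` snapshots and are therefore safe to prune."""
    ordered = sorted(snapshot_dates, reverse=True)  # newest first
    return ordered[keep_daily:]

# Ten daily snapshots with a keep-7 policy: the oldest three are pruned.
old = snapshots_to_delete([date(2025, 7, d) for d in range(1, 11)])
```

In practice a tool like this runs on a schedule and feeds the returned list to the cloud API's delete call; production policies typically layer weekly and monthly tiers on top of the daily one.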

Posted 1 day ago

Apply

3.0 - 5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Position: Cloud Engineer
Experience: 3-5 years
Location: Noida
Work Mode: WFO

The ideal candidate must be self-motivated, with a proven track record as a Cloud Engineer (AWS) who can help in the implementation, adoption, and day-to-day support of an AWS cloud infrastructure environment distributed among multiple regions and business units. The individual in this role must be a technical expert on AWS who understands and practices the AWS Well-Architected Framework and is familiar with multi-account strategy deployment using a Control Tower/Landing Zone setup. The ideal candidate can manage day-to-day operations, troubleshoot problems, provide routine maintenance, and enhance system health monitoring on the cloud stack. Must have excellent written and verbal communication skills.

Technical Skills
- Strong experience with AWS IaaS architectures
- Hands-on experience in deploying and supporting AWS services such as EC2, autoscaling, AMI management, snapshots, ELB, S3, Route 53, VPC, RDS, SES, SNS, CloudFormation, CloudWatch, IAM, Security Groups, CloudTrail, Lambda, etc.
- Experience building and supporting AWS WorkSpaces
- Experience deploying and troubleshooting either Windows or Linux operating systems
- Experience with AWS SSO and RBAC
- Understanding of DevOps tools such as Terraform, GitHub, and Jenkins
- Experience working with ITSM processes and tools such as Remedy and ServiceNow
- Ability to operate at all levels within the organization and cross-functionally within multiple client organizations

Responsibilities
- Planning, automation, implementation, and maintenance of the AWS platform and its associated services
- Provide SME / L2-and-above-level technical support
- Carry out deployment and migration activities
- Mentor and provide technical guidance to L1 engineers
- Monitor AWS infrastructure and perform routine maintenance and operational tasks
- Work on ITSM tickets and ensure adherence to support SLAs
- Work on change management processes
- Excellent analytical and problem-solving skills; exhibits excellent service to others

Qualifications
- At least 2 to 3 years of relevant experience on AWS
- Overall 3-5 years of IT experience working for a global organization
- Bachelor's degree or higher in Information Systems, Computer Science, or equivalent experience
- Certified AWS Cloud Practitioner preferred

Location: Noida - UI, Noida, Uttar Pradesh, India
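"Ensure adherence to support SLAs" is typically tracked as the percentage of tickets resolved within the SLA window. A minimal illustration of the metric in Python (the exact definition is an assumption; ITSM tools like Remedy and ServiceNow compute their own variants):

```python
def sla_adherence(resolution_hours, sla_hours):
    """Percent of tickets resolved within the SLA window."""
    met = sum(1 for h in resolution_hours if h <= sla_hours)
    return 100.0 * met / len(resolution_hours)

# Three of four tickets closed within a 4-hour SLA: 75% adherence.
score = sla_adherence([1, 2, 3, 10], sla_hours=4)
```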

Posted 1 day ago

Apply

4.0 - 6.0 years

0 Lacs

Gurgaon, Haryana, India

Remote

Experience Required: 4-6 years
Location: Gurgaon
Department: Product and Engineering
Working Days: Alternate Saturdays Working (1st and 3rd)

🔧 Key Responsibilities
- Design, implement, and maintain highly available and scalable infrastructure using AWS Cloud Services.
- Build and manage Kubernetes clusters (EKS, self-managed) to ensure reliable deployment and scaling of microservices.
- Develop Infrastructure-as-Code using Terraform, ensuring modular, reusable, and secure provisioning.
- Containerize applications and optimize Docker images for performance and security.
- Ensure CI/CD pipelines (Jenkins, GitHub Actions, etc.) are optimized for fast and secure deployments.
- Drive SRE principles including monitoring, alerting, SLIs/SLOs, and incident response.
- Set up and manage observability tools (Prometheus, Grafana, ELK, Datadog, etc.).
- Automate routine tasks with scripting languages (Python, Bash, etc.).
- Lead capacity planning, auto-scaling, and cost optimization efforts across cloud infrastructure.
- Collaborate closely with development teams to enable DevSecOps best practices.
- Participate in on-call rotations, handle outages calmly, and conduct postmortems.
🧰 Must-Have Technical Skills
- Kubernetes (EKS, Helm, Operators)
- Docker & Docker Compose
- Terraform (modular, state management, remote backends)
- AWS (EC2, VPC, S3, RDS, IAM, CloudWatch, ECS/EKS)
- Linux system administration
- CI/CD pipelines (Jenkins, GitLab CI, GitHub Actions)
- Logging & monitoring tools: ELK, Prometheus, Grafana, CloudWatch
- Site Reliability Engineering practices
- Load balancing, autoscaling, and HA architectures

💡 Good-To-Have
- GCP or Azure exposure
- Service mesh (Istio, Linkerd)
- Secrets management (Vault, AWS Secrets Manager)
- Security hardening of containers and infrastructure
- Chaos engineering exposure
- Knowledge of networking (DNS, firewalls, VPNs)

👤 Soft Skills
- Strong problem-solving attitude; calm under pressure
- Good documentation and communication skills
- Ownership mindset with a drive to automate everything
- Collaborative and proactive with cross-functional teams
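The SLI/SLO work mentioned under the SRE responsibilities revolves around the error budget: a 99.9% availability SLO allows 0.1% of requests to fail, and incident response is prioritized against how much of that budget is left. A sketch of the arithmetic in Python (the function name is illustrative):

```python
def error_budget_remaining(slo_target, total_requests, failed_requests):
    """Fraction of the SLO error budget still unspent.
    A 99.9% SLO over 1M requests allows 1,000 failures."""
    allowed_failures = (1.0 - slo_target) * total_requests
    return (allowed_failures - failed_requests) / allowed_failures

# 250 failures against a budget of 1,000: 75% of the budget remains.
remaining = error_budget_remaining(0.999, 1_000_000, 250)
```

When the returned fraction approaches zero, SRE practice is to freeze risky releases and spend engineering time on reliability instead.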

Posted 2 days ago

Apply

2.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary
At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
Responsibilities
- B.Tech/M.Tech from a premier institute, with hands-on design/development experience in building and operating highly available services, ensuring stability, security, and scalability
- 2+ years of software development experience, preferably in product companies
- Proficiency in technologies such as Web Components, React/Vue/Bootstrap, Redux, NodeJS, TypeScript, dynamic web applications, Redis, Memcached, Docker, Kafka, and MySQL
- Deep understanding of the MVC framework and concepts like HTML, DOM, CSS, REST, AJAX, responsive design, and Test-Driven Development
- Experience with AWS, with knowledge of AWS services like Autoscaling, ELB, ElastiCache, SQS, SNS, RDS, S3, Serverless Architecture, Lambda, Gateway, and Amazon DynamoDB, or a similar technology stack
- Experience with operations (AWS, Terraform, scalability, high availability & security) is a big plus
- Able to define APIs and integrate them into web applications using XML, JSON, and SOAP/REST APIs
- Knowledge of software fundamentals, including design principles, analysis of algorithms, data structure design and implementation, documentation, and unit testing, and the acumen to apply them
- Ability to work proactively and independently with minimal supervision

Mandatory Skill Sets
- Java
- React
- Node JS
- HTML/CSS
- XML, JSON, SOAP/REST APIs
- AWS

Preferred Skill Sets
- Git
- CI/CD
- Docker
- Kubernetes
- Unit Testing

Years of experience required: 4-8 years
Education Qualification: BE/B.Tech/MBA/MCA
Degrees/Field of Study required: Bachelor of Engineering, Bachelor of Technology, Master of Business Administration
Certifications (if blank, certifications not specified)
Required Skills: Full Stack Development
Optional Skills: Accepting Feedback, Active Listening, Algorithm Development, Alteryx (Automation Platform), Analytical Thinking, Analytic Research, Big Data, Business Data Analytics, Communication, Complex Data Analysis, Conducting Research, Creativity, Customer Analysis, Customer Needs Analysis, Dashboard Creation, Data Analysis, Data Analysis Software, Data Collection, Data-Driven Insights, Data Integration, Data Integrity, Data Mining, Data Modeling, Data Pipeline {+ 38 more}
Desired Languages: Not specified
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
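Since the role centers on defining APIs and integrating them using JSON/REST, here is a minimal sketch of validating a JSON request body in Python (the endpoint shape and field names are hypothetical, for illustration only):

```python
import json

def parse_order(payload: str) -> dict:
    """Parse and minimally validate a JSON body for a hypothetical
    /orders REST endpoint: require an id and a positive quantity."""
    data = json.loads(payload)
    for field in ("id", "quantity"):
        if field not in data:
            raise ValueError(f"missing field: {field}")
    if not isinstance(data["quantity"], int) or data["quantity"] <= 0:
        raise ValueError("quantity must be a positive integer")
    return data

order = parse_order('{"id": "A1", "quantity": 2}')
```

The same validate-then-use pattern applies regardless of framework; production services usually delegate it to a schema library rather than hand-rolled checks.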

Posted 2 days ago

Apply

5.0 years

0 Lacs

Greater Kolkata Area

On-site

CodelogicX is a forward-thinking tech company dedicated to pushing the boundaries of innovation and delivering cutting-edge solutions. We are seeking a Senior DevOps Engineer with at least 5 years of hands-on experience in building, managing, and optimizing scalable infrastructure and CI/CD pipelines. The ideal candidate will play a crucial role in automating deployment workflows, securing cloud environments, and managing container orchestration platforms. You will leverage your expertise in AWS, Kubernetes, ArgoCD, and CI/CD to streamline our development processes, ensure the reliability and scalability of our systems, and drive the adoption of best practices across the team.

Key Responsibilities
- Design, implement, and maintain CI/CD pipelines using GitHub Actions and Bitbucket Pipelines.
- Develop and manage Infrastructure as Code (IaC) using Terraform for AWS-based infrastructure.
- Set up and administer SFTP servers on cloud-based VMs using chroot configurations, and automate file transfers to S3-backed Glacier.
- Manage SNS for alerting and notification integration.
- Ensure cost optimization of AWS services through billing reviews and usage audits.
- Implement and maintain secure secrets management using AWS KMS, Parameter Store, and Secrets Manager.
- Configure, deploy, and maintain a wide range of AWS services, including but not limited to:
  - Compute Services: provision and manage compute resources using EC2, EKS, AWS Lambda, and EventBridge for compute-driven, serverless, and event-driven architectures.
  - Storage & Content Delivery: manage data storage and archival solutions using S3 and Glacier, and content delivery through CloudFront.
  - Networking & Connectivity: design and manage secure network architectures with VPCs, Load Balancers, Security Groups, VPNs, and Route 53 for DNS routing and failover; ensure proper functioning of network services like TCP/IP and reverse proxies (e.g., NGINX).
  - Monitoring & Observability: implement monitoring, logging, and tracing solutions using CloudWatch, Prometheus, Grafana, ArgoCD, and OpenTelemetry to ensure system health and performance visibility.
  - Database Services: deploy and manage relational databases via RDS for MySQL, PostgreSQL, and Aurora, along with healthcare-specific FHIR database configurations.
  - Security & Compliance: enforce security best practices using IAM (roles, policies), AWS WAF, Amazon Inspector, GuardDuty, Security Hub, and Trusted Advisor to monitor, detect, and mitigate risks.
  - GitOps: apply GitOps practices, ensuring all infrastructure and application configuration changes are tracked and versioned through Git commits.
- Architect and manage Kubernetes environments (EKS), implementing Helm charts, ingress controllers, autoscaling (HPA/VPA), and service meshes (Istio); troubleshoot advanced issues related to pods, services, DNS, and kubelets.
- Apply best practices in Git workflows (trunk-based, feature branching) in both monorepo and multi-repo environments.
- Maintain, troubleshoot, and optimize Linux-based systems (Ubuntu, CentOS, Amazon Linux).
- Support the engineering and compliance teams by addressing requirements for HIPAA, GDPR, ISO 27001, and SOC 2, and ensuring infrastructure readiness.
- Perform rollback and hotfix procedures with minimal downtime.
- Collaborate with developers to define release and deployment processes.
- Manage and standardize build environments across dev, staging, and production.
- Manage release and deployment processes across dev, staging, and production.
- Work cross-functionally with development and QA teams.
- Lead incident postmortems and drive continuous improvement.
- Perform root cause analysis and implement corrective/preventive actions for system incidents.
- Set up automated backups/snapshots, disaster recovery plans, and incident response strategies.
- Ensure on-time patching.
- Mentor junior DevOps engineers.
Requirements
Required Qualifications:
- Bachelor's degree in Computer Science, Engineering, or equivalent practical experience.
- 5+ years of proven DevOps engineering experience in cloud-based environments.
- Advanced knowledge of AWS, Terraform, CI/CD tools, and Kubernetes (EKS).
- Strong scripting and automation mindset.
- Solid experience with Linux system administration and networking.
- Excellent communication and documentation skills.
- Ability to collaborate across teams and lead DevOps initiatives independently.

Preferred Qualifications:
- Experience with infrastructure-as-code tools such as Terraform or CloudFormation.
- Experience with GitHub Actions is a plus.
- Certifications in AWS (e.g., AWS DevOps Engineer, AWS SysOps Administrator) or Kubernetes (CKA/CKAD).
- Experience working in regulated environments (e.g., healthcare or fintech).
- Exposure to container security tools and cloud compliance scanners.

Experience: 5-10 Years
Working Mode: Hybrid
Job Type: Full-Time
Location: Kolkata

Benefits
- Health insurance
- Hybrid working mode
- Provident Fund
- Parental leave
- Yearly Bonus
- Gratuity
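The Prometheus work in this posting leans on counter metrics, and PromQL's rate() is, at heart, the counter delta divided by the time delta. A simplified illustration in Python (real rate() also handles counter resets and window extrapolation, which this sketch omits):

```python
def counter_rate(samples):
    """Per-second rate from (timestamp_seconds, counter_value) samples
    of a monotonically increasing counter; a simplified PromQL rate()."""
    (t0, v0), (t1, v1) = samples[0], samples[-1]
    return (v1 - v0) / (t1 - t0)

# 300 requests counted over a 60-second window: 5 requests/second.
rps = counter_rate([(0, 100), (60, 400)])
```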

Posted 2 days ago

Apply

10.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Our Company
Changing the world through digital experiences is what Adobe's all about. We give everyone, from emerging artists to global brands, everything they need to design and deliver exceptional digital experiences! We're passionate about empowering people to create beautiful and powerful images, videos, and apps, and to transform how companies interact with customers across every screen. We're on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours!

As one of the world's most innovative software companies, whose products touch billions of people around the world, Adobe empowers everyone, everywhere to imagine, create, and bring any digital experience to life. From creators and students to small businesses, global enterprises, and nonprofit organizations, customers choose Adobe products to ideate, collaborate, be more productive, drive business growth, and build remarkable experiences. Our 30,000+ employees worldwide are creating the future and raising the bar as we drive the next decade of growth. We're on a mission to hire the very best and believe in creating a company culture where all employees are empowered to make an impact. At Adobe, we believe that great ideas can come from anywhere in the organization. The next big idea could be yours.

The Opportunity
Adobe is revolutionizing digital experiences by empowering users to craft, manage, and share content effortlessly.

What You'll Do
- Build high-quality and performant solutions and features using web technologies.
- Drive solutioning and architecture discussions in the team, and technically guide and mentor the team.
- Partner with product management on the technical feasibility of features, caring about user experience as well as performance.
- Stay proficient in emerging industry technologies and trends, bringing that knowledge to the team to influence product direction.
- Use a combination of data and instinct to make decisions and move at a rapid pace.
- Craft a culture of collaboration and shared accomplishments, having fun along the way.

What You Need to Succeed
- Strong technical background and analytical abilities, with experience developing services based on Java/JavaScript and web applications.
- An interest in and ability to learn new technologies.
- Demonstrated results working in a diverse, global, team-focused environment.
- 10+ years of relevant experience in software engineering, with 1+ year as a Tech Lead/Architect for engineering teams.
- Proficiency in technologies like Web Components and TypeScript (or other JavaScript frameworks).
- Familiarity with the MVC framework and concepts such as HTML, DOM, CSS, REST, AJAX, responsive design, and development with tests.
- Experience with AWS, with knowledge of AWS services like Autoscaling, ELB, ElastiCache, SQS, SNS, RDS, S3, Serverless Architecture, etc., or a similar technology stack.
- Able to define APIs and integrate them into web applications using XML, JSON, and SOAP/REST APIs.
- Knowledge of software fundamentals, including design principles, analysis of algorithms, data structure design and implementation, documentation, and unit testing, and the acumen to apply them.
- Ability to work proactively and independently with minimal supervision.

At Adobe, we believe in creating a company culture where all employees are empowered to make an impact. Learn more about Adobe life, including our values and culture, focus on people, purpose and community, Adobe For All, comprehensive benefits programs, the stories we tell, the customers we serve, and how you can help us change the world through personalized digital experiences. Adobe is proud to be an Equal Employment Opportunity employer.
We do not discriminate based on gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. Learn more about our vision here. Adobe aims to make Adobe.com accessible to any and all users. If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email accommodations@adobe.com or call (408) 536-3015.

Posted 2 days ago

Apply

9.0 - 12.0 years

2 Lacs

Thiruvananthapuram

On-site

9-12 Years | 1 Opening | Trivandrum

Role description
Role Proficiency: Design and implement infrastructure/cloud architecture for small/mid-size projects.

Outcomes:
- Design and implement the architecture for projects
- Guide and review technical delivery by project teams
- Provide technical expertise to other projects

Measures of Outcomes:
- Number of reusable components/processes developed and number of times they are reused
- Contribution to technology capability development (e.g., trainings, webinars, blogs)
- Customer feedback on overall technical quality (zero technology-related escalations)
- Relevant technology certifications
- Business development (number of proposals contributed to, number won)
- White papers/document assets published, working prototypes

Outputs Expected:

Solution Definition and Design:
- Define the architecture for small/mid-sized projects
- Design the technical framework and implement it
- Present detailed design documents to relevant stakeholders and seek feedback
- Undertake project-specific proof-of-concept activities to validate technical feasibility, with guidance from the senior architect
- Implement the best optimized solution and resolve performance issues

Requirement Gathering and Analysis:
- Understand the functional and non-functional requirements
- Collect non-functional requirements (such as response time, throughput numbers, user load, etc.)
through discussions with SMEs and business users
- Identify technical aspects as part of story definition, especially at an architecture/component level

Project Management Support:
- Share technical inputs with Project Managers/Scrum Masters
- Help Scrum Masters/project managers understand technical risks and come up with mitigation strategies
- Help engineers and analysts overcome technical challenges

Technology Consulting:
- Analyze the technology landscape, processes, and tools based on project objectives

Business and Technical Research:
- Understand infrastructure architecture and its criticality to analyze and assess tools (internal/external) on specific parameters
- Support the Architect/Sr. Architect in drafting recommendations based on the findings of a proof of concept
- Analyze and identify new developments in existing technologies (e.g., methodologies, frameworks, accelerators, etc.)

Project Estimation:
- Provide support for project estimations for business proposals, and support sprint-level/component-level estimates
- Articulate the estimation methodology and module-level estimations for more standard projects, with a focus on effort estimation

Proposal Development:
- Contribute to proposal development for small to medium-size projects from a technology/architecture perspective

Knowledge Management & Capability Development:
- Conduct technical trainings/webinars to impart knowledge to CIS/project teams
- Create collaterals (e.g., case studies, business value documents, summaries, etc.)
- Gain industry-standard certifications in technology and architecture consulting
- Contribute to the knowledge repository and tools, creating reference architecture models and reusable components from the project

Process Improvements / Delivery Excellence:
- Identify avenues to improve project delivery parameters (e.g., productivity, efficiency, process, security, etc.) by leveraging tools, automation, etc.
Understand various technical tools used in the project (third party as well as home-grown) to improve efficiency productivity Skill Examples: Use Domain/ Industry Knowledge to understand business requirements create POC to meet business requirements under guidance Use Technology Knowledge to analyse technology based on client's specific requirement analyse and understand existing implementations work on simple technology implementations (POC) under guidance guide the developers and enable them in the implementation of same Use knowledge of Architecture Concepts and Principles to provide inputs to the senior architects towards building component solutions deploy the solution as per the architecture under guidance Use Tools and Principles to create low level design under guidance from the senior Architect for the given business requirements Use Project Governance Framework to facilitate communication with the right stakeholders and Project Metrics to help them understand their relevance in project and to share input on project metrics with the relevant stakeholders for own area of work Use Estimation and Resource Planning knowledge to help estimate and plan resources for specific modules / small projects with detailed requirements in place Use Knowledge Management Tools and Techniques to participate in the knowledge management process (such as Project specific KT) consume/contribute to the knowledge management repository Use knowledge of Technical Standards Documentation and Templates to understand and interpret the documents provided Use Solution Structuring knowledge to understand the proposed solution provide inputs to create draft proposals/ RFP (including effort estimation scheduling resource loading etc.) 
Knowledge Examples:
- Domain/Industry Knowledge: Has basic knowledge of standard business processes within the relevant industry vertical and the customer's business domain
- Technology Knowledge: Has deep working knowledge of one technology tower and is gaining more knowledge in Cloud and Security
- Estimation and Resource Planning: Has working knowledge of estimation and resource planning techniques
- Knowledge Management: Has basic knowledge of industry knowledge management tools (such as portals, wikis) and of UST and customer knowledge management tools and techniques (such as workshops, classroom training, self-study, application walkthroughs, and reverse KT)
- Technical Standards, Documentation, and Templates: Has basic knowledge of various document templates and standards (such as business blueprints, design documents, etc.)
- Requirement Gathering and Analysis: Demonstrates working knowledge of requirements gathering and analysis for functional and non-functional requirements, and of analysis tools (such as functional flow diagrams, activity diagrams, blueprints, storyboards, and requirements management tools, e.g. MS Excel)

Additional Comments:

Role Overview
We're seeking an AWS Certified Solutions Architect with strong Python skills and familiarity with .NET ecosystems to lead an application modernization effort. You will partner with cross-functional development teams to transform on-premises, monolithic .NET applications into a cloud-native, microservices-based architecture on AWS.

Key Responsibilities
• Architect & Design:
o Define the target state: microservices design, domain-driven boundaries, API contracts.
o Choose AWS services (EKS/ECS, Lambda, Step Functions state machines, API Gateway, EventBridge, RDS/DynamoDB, S3, etc.) to meet scalability, availability, and security requirements.
• Modernization Roadmap:
o Assess existing .NET applications and data stores; identify refactoring vs. re-platform opportunities.
o Develop a phased migration strategy.
• Infrastructure as Code:
o Author and review CloudFormation templates.
o Establish CI/CD pipelines (CodePipeline, CodeBuild, GitHub Actions, Jenkins) for automated build, test, and deployment.
• Development Collaboration:
o Mentor and guide .NET and Python developers on containerization (Docker), orchestration (Kubernetes/EKS), and serverless patterns.
o Review code and design patterns to ensure best practices in resilience, observability, and security.
• Security & Compliance:
o Ensure alignment with IAM roles/policies, VPC networking, security groups, and KMS encryption strategies.
o Conduct threat modelling and partner with security teams to implement controls (WAF, GuardDuty, Shield).
• Performance & Cost Optimization:
o Implement autoscaling, right-sizing, and reserved instance strategies.
o Use CloudWatch, X-Ray, the Elastic Stack, and third-party tools to monitor performance and troubleshoot.
• Documentation & Knowledge Transfer:
o Produce high-level and detailed architecture diagrams, runbooks, and operational playbooks.
o Lead workshops and brown-bags to upskill teams on AWS services and cloud-native design.
o Drive day-to-day work to the 24x7 IOC team.

Must-Have Skills & Experience
• AWS Expertise:
o AWS Certified Solutions Architect – Associate or Professional
o Deep hands-on experience with EC2, ECS/EKS, Lambda, API Gateway, RDS/Aurora, DynamoDB, S3, VPC, IAM
• Programming:
o Proficient in Python for automation, Lambdas, and microservices.
o Working knowledge of C#/.NET Core for understanding legacy applications and guiding refactoring.
• Microservices & Containers:
o Design patterns (circuit breaker, saga, sidecar).
o Containerization (Docker); orchestration on Kubernetes (EKS) or Fargate.
• Infrastructure as Code & CI/CD:
o CloudFormation, AWS CDK, or Terraform.
o Build/test/deploy pipelines (CodePipeline, CodeBuild, Jenkins, GitHub Actions).
• Networking & Security:
o VPC design, subnets, NAT, Transit Gateway.
o IAM best practices, KMS, WAF, Security Hub, GuardDuty.
• Soft Skills:
o Excellent verbal and written communication.
o Ability to translate complex technical concepts to business stakeholders.
o Proven leadership in agile, cross-functional teams.

Preferred / Nice-to-Have
• Experience with service mesh (AWS App Mesh, Istio).
• Experience with non-relational DBs (Neptune, etc.).
• Familiarity with event-driven architectures using EventBridge or SNS/SQS.
• Exposure to observability tools: CloudWatch Metrics/Logs, X-Ray, Prometheus/Grafana.
• Background in migrating SQL Server, Oracle, or other on-prem databases to AWS (DMS, SCT).
• Knowledge of serverless frameworks (Serverless Framework, SAM).
• Additional certifications: AWS Certified DevOps Engineer, Security Specialty.

Skills: Python, AWS Cloud, AWS Administration

About UST
UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world's best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients' organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact, touching billions of lives in the process.
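The circuit-breaker design pattern named under Microservices & Containers can be illustrated briefly. This is a minimal, generic Python sketch (the class name, thresholds, and timeout values are invented for illustration), not code from the posting or from any specific library:

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: open after `max_failures` consecutive
    failures, reject calls while open, allow a trial call after
    `reset_timeout` seconds (half-open state)."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: call rejected")
            self.opened_at = None  # half-open: let one trial call through
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0  # a success closes the circuit again
        return result
```

A call wrapped in `CircuitBreaker.call` fails fast while the circuit is open instead of repeatedly hitting an unhealthy downstream service; after `reset_timeout`, one trial call is allowed through to test recovery.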

Posted 2 days ago

Apply

4.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Role Overview
We are looking for a highly skilled Generative AI Engineer with 4 to 5 years of experience to design and deploy enterprise-grade GenAI systems. This role blends platform architecture, LLM integration, and operationalization: ideal for engineers with strong hands-on experience in large language models, RAG pipelines, and AI orchestration.

Responsibilities
Platform Leadership: Architect GenAI platforms powering copilots, document AI, multi-agent systems, and RAG pipelines.
LLM Expertise: Build and fine-tune GPT, Claude, Gemini, LLaMA 2/3, and Mistral models; deep knowledge of RLHF, transformer internals, and multi-modal integration.
RAG Systems: Develop scalable pipelines with embeddings, hybrid retrieval, prompt orchestration, and vector DBs (Pinecone, FAISS, pgvector).
Orchestration & Hosting: Lead LLM hosting, LangChain/LangGraph/AutoGen orchestration, and AWS SageMaker/Bedrock integration.
Responsible AI: Implement guardrails for PII redaction, moderation, lineage, and access, aligned with enterprise security standards.
LLMOps/MLOps: Deploy CI/CD pipelines; automate tuning and rollout; handle drift, rollback, and incidents with KPI dashboards.
Cost Optimization: Reduce TCO via dynamic routing, GPU autoscaling, context compression, and chargeback tooling.
Agentic AI: Build autonomous, critic-supervised agents using MCP, A2A, LGPL patterns.
Evaluation: Use LangSmith, BLEU, ROUGE, BERTScore, and HIL to track hallucination, toxicity, latency, and sustainability.

Skills Required
4–5 years in AI/ML (2+ in GenAI)
Strong Python, PySpark, Scala; APIs via FastAPI, GraphQL, gRPC
Proficiency with MLflow, Kubeflow, Airflow, Prompt flow
Experience with LLMs, vector DBs, prompt engineering, MLOps
Solid foundation in applied mathematics and statistics

Nice to Have
Open-source contributions, AI publications
Hands-on experience with cloud-native GenAI deployment
Deep interest in ethical AI and AI safety

2 Days WFO Mandatory

Don't meet every job requirement? That's okay!
Our company is dedicated to building a diverse, inclusive, and authentic workplace. If you're excited about this role, but your experience doesn't perfectly fit every qualification, we encourage you to apply anyway. You may be just the right person for this role or others.
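The RAG pipelines this posting describes hinge on embedding-based retrieval: rank stored chunks by similarity to the query embedding and pass the top k to the LLM. A dependency-free sketch of that ranking step (the two-dimensional vectors and document IDs are toy values; a production system would use a vector DB such as FAISS, Pinecone, or pgvector):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, corpus, k=2):
    """corpus: list of (doc_id, embedding) pairs.
    Returns the top-k doc_ids ranked by cosine similarity to the query."""
    ranked = sorted(corpus, key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]
```

Real pipelines add hybrid retrieval (lexical + dense), reranking, and prompt assembly on top of this core step.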

Posted 3 days ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

We at Innovaccer are looking for a Data Modeler to help us build and maintain our unified data models across different domains and subdomains within healthcare. Innovaccer is the #1 healthcare data platform with 100% year-over-year growth. We are building the healthcare cloud to help large healthcare organizations, including providers, payers, and health systems, manage and consume their own data with ease. To succeed in this role, you'll need a strong technical understanding of databases, prior experience building enterprise data models, and comfort working in cross-functional teams with multiple stakeholders.

What You Need
- 5+ years of recent experience in database management, data analytics, and data warehousing, including cloud-native databases and modern data warehouses
- Strong database experience analyzing, transforming, and integrating data (preferably in one of the database technologies such as Snowflake, Delta Lake, NoSQL, Postgres)
- Work with the Data Architect / Solution Architect and application development team to implement data strategies
- Create conceptual, logical, and physical data models using best practices for OLTP/analytical models to ensure high data quality and reduced redundancy
- Perform reverse engineering of physical data models from databases and SQL scripts and create ER diagrams
- Evaluate data models and physical databases for variances and discrepancies
- Validate business data objects for accuracy and completeness, especially in the healthcare domain
- Hands-on experience building scalable and efficient processes to build/modify data warehouses and data lakes
- Performance tuning at the database level, SQL query optimization, data partitioning, and efficient data-loading strategies
- Understanding of Parquet/JSON/Avro data structures for building schema-on-evolution designs
- Experience with AWS or Azure cloud architecture in general, and usage of MPP compute, shared storage, autoscaling, and object storage such as ADLS and S3 for data integrations
- Preferred: experience with Spark's Structured APIs using DataFrames and SQL
- Good to have: Databricks Delta Lake and Snowflake data lake projects

Here's What We Offer
- Generous Leaves: Enjoy generous leave benefits of up to 40 days
- Parental Leave: Leverage one of the industry's best parental leave policies to spend time with your new addition
- Sabbatical: Want to focus on skill development, pursue an academic career, or just take a break? We've got you covered
- Health Insurance: We offer comprehensive health insurance to support you and your family, covering medical expenses related to illness, disease, or injury, and extending support to the family members who matter most
- Care Program: Whether it's a celebration or a time of need, we've got you covered with care vouchers to mark major life events. Through our Care Vouchers program, employees receive thoughtful gestures for significant personal milestones and moments of need
- Financial Assistance: Life happens, and when it does, we're here to help. Our financial assistance policy offers support through salary advances and personal loans for genuine personal needs, ensuring help is there when you need it most

Innovaccer is an equal-opportunity employer. We celebrate diversity, and we are committed to fostering an inclusive and diverse workplace where all employees, regardless of race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, marital status, or veteran status, feel valued and empowered.

Disclaimer: Innovaccer does not charge fees or require payment from individuals or agencies for securing employment with us. We do not guarantee job spots or engage in any financial transactions related to employment. If you encounter any posts or requests asking for payment or personal information, we strongly advise you to report them immediately to our HR department at px@innovaccer.com.
Additionally, please exercise caution and verify the authenticity of any requests before disclosing personal and confidential information, including bank account details.
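One of the bullets above asks for reverse engineering of physical data models from SQL scripts. As a rough, hypothetical illustration of the idea only (it handles a single simple `CREATE TABLE` statement with bare `name TYPE` columns, and is nothing like a real DDL parser):

```python
import re

def table_columns(ddl):
    """Very rough reverse engineering of one CREATE TABLE statement:
    returns (table_name, [(column, type), ...]). Handles only the
    simple 'name TYPE' column form with no constraints."""
    m = re.search(r"CREATE\s+TABLE\s+(\w+)\s*\((.*)\)", ddl, re.I | re.S)
    if not m:
        raise ValueError("no CREATE TABLE found")
    name, body = m.group(1), m.group(2)
    cols = []
    for part in body.split(","):
        tokens = part.split()
        if len(tokens) >= 2:
            cols.append((tokens[0], tokens[1]))
    return name, cols
```

From output like this, an ER diagram or logical model can be assembled table by table; real tooling additionally recovers keys, constraints, and relationships.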

Posted 3 days ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Join our vibrant team at Zymr as a Senior DevOps CI/CD Engineer and become a driving force behind continuous integration and deployment automation. We're a dynamic group dedicated to building a high-quality product while maintaining exceptional speed and efficiency. This is a fantastic opportunity to be part of our rapidly growing team.

Job Title: Sr. DevOps Engineer
Location: Ahmedabad/Pune
Experience: 8+ Years
Educational Qualification: UG: BS/MS in Computer Science, or other engineering/technical degree

Responsibilities:

Manage deployments to the Development, Staging, and Production environments:
- Use GitHub workflows to identify and resolve the root causes of merge conflicts and version mismatches.
- Deploy hotfixes promptly by leveraging deployment automation and scripts.
- Provide guidance and approval for Ruby on Rails scripting performed by junior engineers, ensuring smooth code deployment across development environments.
- Review and approve CI/CD scripting pull requests from engineers, offering feedback to improve code quality.

Ensure the smooth daily operation of each environment, promptly addressing any issues that arise:
- Leverage Datadog monitoring to maintain 99.999% uptime for each development environment.
- Plan Bash and Ruby scripting to automate health checks and enable auto-healing when errors occur.
- Implement effective auto-scaling strategies to handle higher-than-usual traffic on these development environments.
- Evaluate historical loads and implement autoscaling mechanisms that provide additional resources and computing power, optimizing workload performance.
- Collaborate with DevOps to plan capacity and monitoring using Datadog.
- Analyze developer workflows in close collaboration with team leads, attend squad standup meetings, and provide suggestions for improvement.
- Use Ruby and Bash to create tools that enhance engineers' development workflow.
- Script infrastructure using Terraform to facilitate infrastructure creation.
- Leverage CI/CD to add security scanning to code pipelines.
- Develop Bash and Ruby scripts to automate code deployment while incorporating robust security checks for vulnerabilities.
- Enhance our CI/CD pipeline by building canary stages with CircleCI, GitHub Actions, YAML, and Bash scripting.
- Integrate stress-testing mechanisms using Ruby on Rails, Python, and Bash scripting into the pipeline's stages.
- Look for ways to reduce engineering toil and replace manual processes with automation!

Nice to have:
- Terraform (required)
- GitHub and AWS tooling (though the pipeline runs outside of AWS)
- Rails (other scripting languages okay)

--
Thanks & Regards,
Vishva Shah
Sr Talent Specialist | Zymr, Inc. | www.zymr.com | vishva.shah@zymr.com
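The autoscaling work described in this posting (evaluating load and adding capacity) usually comes down to a proportional rule like the one Kubernetes' Horizontal Pod Autoscaler uses: desired = ceil(current * observed / target), clamped to configured bounds. A small illustrative sketch (the function name, bounds, and load numbers are invented, not from the posting):

```python
import math

def desired_replicas(current, observed_load, target_load, min_r=2, max_r=20):
    """Proportional scaling rule with min/max clamping.
    observed_load and target_load can be any consistent metric
    (CPU %, requests/sec, queue depth, ...)."""
    if current <= 0:
        raise ValueError("current replica count must be positive")
    desired = math.ceil(current * observed_load / target_load)
    return max(min_r, min(max_r, desired))
```

For example, 4 replicas at 150% of target load scale up to 6; the min/max bounds keep a noisy metric from scaling to zero or to an unbounded fleet.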

Posted 3 days ago

Apply

3.0 - 8.0 years

0 Lacs

hyderabad, telangana

On-site

As a Lead Consultant specializing in AWS Rehost Migration, you will leverage your 8+ years of technical expertise to facilitate the seamless transition of IT infrastructure from on-premises to the cloud. Your role will involve creating landing zones and overseeing application migration processes.

Your key responsibilities will include assessing the source architecture and aligning it with the relevant target architecture within the cloud ecosystem. You must possess a strong foundation in Linux- or Windows-based systems administration, with a deep understanding of storage, security, and network protocols. Additionally, your proficiency in firewall rules, VPC setup, network routing, Identity and Access Management, and security implementation will be crucial.

To excel in this role, you should have hands-on experience with CloudFormation, Terraform templates, or similar automation and scripting tools. Your expertise in implementing AWS services such as EC2, Auto Scaling, ELB, EBS, EFS, S3, VPC, RDS, and Route53 will be essential for successful migrations. Familiarity with server migration tools such as PlateSpin, Zerto, CloudEndure, MGN, or similar platforms will be advantageous. You will also be required to identify application dependencies using discovery tools or automation scripts and define optimal move groups for migrations with minimal downtime.

Your effective communication skills, both verbal and written, will enable you to collaborate efficiently with internal and external stakeholders. By working closely with various teams, you will contribute to the overall success of IT infrastructure migrations and ensure a smooth transition to the cloud environment.

If you are a seasoned professional with a passion for cloud technologies and a proven track record in IT infrastructure migration, we invite you to join our team as a Lead Consultant - AWS Rehost Migration.
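Defining optimal move groups from discovered application dependencies, as the posting describes, is at heart a graph problem: applications that depend on each other should migrate together, i.e. each move group is a connected component of the dependency graph. A simplified sketch (the application names are hypothetical; real discovery tools also weigh group size and downtime windows):

```python
from collections import defaultdict

def move_groups(dependencies):
    """dependencies: iterable of (app_a, app_b) pairs meaning the two
    apps must migrate together. Returns a list of sorted move groups
    (connected components), found with iterative DFS."""
    graph = defaultdict(set)
    for a, b in dependencies:
        graph[a].add(b)
        graph[b].add(a)
    seen, groups = set(), []
    for node in sorted(graph):
        if node in seen:
            continue
        stack, group = [node], []
        while stack:
            cur = stack.pop()
            if cur in seen:
                continue
            seen.add(cur)
            group.append(cur)
            stack.extend(graph[cur] - seen)
        groups.append(sorted(group))
    return groups
```

Each returned group can then be scheduled as one migration wave so that no cutover severs a live dependency.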

Posted 3 days ago

Apply

3.0 - 7.0 years

0 Lacs

chennai, tamil nadu

On-site

Join GlobalLogic and become a valuable part of the team working on a significant software project for a world-class company that provides M2M / IoT 4G/5G modules to industries such as automotive, healthcare, and logistics. As part of our engagement, you will assist in developing end-user module firmware, implementing new features, maintaining compatibility with the latest telecommunication and industry standards, and conducting analysis and estimation of customer requirements.

Your responsibilities will include:
- Hands-on experience in cloud deployment using Terraform
- Proficiency in branching, merging, tagging, and maintaining versions across environments using Git and Jenkins pipelines
- Ability to work on Continuous Integration (CI) and end-to-end automation for all builds and deployments
- Experience implementing Continuous Delivery (CD) pipelines
- Hands-on experience with all the AWS services listed in the primary skillset
- Strong verbal and written communication skills

Primary Skillset: IAM, EC2, ELB, EBS, AMI, Route53, Security Groups, AutoScaling, S3
Secondary Skillset: EKS, Terraform, CloudWatch, SNS, SQS, Athena

At GlobalLogic, you will have the opportunity to work on exciting projects in industries such as high-tech, communication, media, healthcare, retail, and telecom. You will collaborate with a diverse team of talented individuals in an open, laid-back environment and may even have the chance to work in one of our global centers or client facilities. We prioritize work-life balance by offering flexible work schedules, work-from-home options, paid time off, and holidays. Our dedicated Learning & Development team provides opportunities for professional development through communication skills training, stress management programs, professional certifications, and technical and soft-skill trainings.
In addition to competitive salaries, we offer family medical insurance, Group Term Life Insurance, Group Personal Accident Insurance, NPS (National Pension Scheme), extended maternity leave, annual performance bonuses, and referral bonuses. Our fun perks include sports events, cultural activities, food at subsidized rates, corporate parties, and discounts at popular stores and restaurants.

GlobalLogic is a leader in digital engineering, helping brands worldwide design and build innovative products, platforms, and digital experiences. We operate around the world, delivering deep expertise to customers in various industries. As a Hitachi Group Company, we contribute to driving innovation through data and technology to create a sustainable society with a higher quality of life.
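Maintaining versions across environments with Git tags, as listed above, has one classic pitfall worth illustrating: semantic-version tags must be compared numerically, not lexically (a plain string sort puts "v1.9.0" after "v1.10.0"). A small sketch (the tag values in the test are examples):

```python
def latest_tag(tags):
    """Pick the highest semantic-version tag of the form vMAJOR.MINOR.PATCH
    by comparing the numeric components, not the raw strings."""
    def key(tag):
        return tuple(int(part) for part in tag.lstrip("v").split("."))
    return max(tags, key=key)
```

This is the comparison a release pipeline needs when it promotes "the latest tagged build" from one environment to the next.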

Posted 3 days ago

Apply


4.0 - 6.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Position: Solution Architect - Presales
Experience: 4-6 Years

Job Description (Roles & Responsibilities):
Pre-Sales Solution Design: Design AWS Cloud Professional Services and AWS Cloud Managed Services solutions based on customer needs and requirements.
Customer Requirement Analysis: Engage with customers to understand their requirements and provide cost-effective, technically sound solutions to meet these needs.
Proposal Preparation: Develop technical and commercial proposals in response to Requests for Information (RFI), Requests for Quotation (RFQ), and Requests for Proposal (RFP).
Technical Presentations: Prepare and deliver technical presentations to clients, demonstrating the value and capability of AWS solutions.
Solution Design: Tailor solutions to customer requirements, ensuring they are scalable, efficient, and secure on the AWS platform.
Sales Team Support: Work closely with the sales team to support their goals and help close deals, ensuring alignment of solutions with business needs.
Creative & Analytical Thinking: Apply creative and analytical problem-solving skills to address complex customer challenges using AWS technology.
Collaboration: Collaborate effectively with technical and non-technical teams across the organization.
Communication Skills: Excellent verbal and written communication skills, with the ability to clearly articulate solutions to both technical and non-technical audiences.
Performance-Oriented: Drive consistent business performance, meeting and exceeding targets while delivering high-quality solutions.

Mandatory Skills:
4-6 years of experience in cloud infrastructure deployment, migration, and managed services.
Hands-on experience in planning, designing, and implementing AWS IaaS, PaaS, and SaaS services.
Hands-on experience executing end-to-end cloud migrations to AWS, including migration discovery, assessment, and execution.
Hands-on experience designing and deploying a multi-account, well-architected landing zone on AWS.
Hands-on experience designing and deploying disaster recovery environments for applications and databases on AWS.
Excellent written and verbal communication skills and an ability to maintain a high degree of professionalism in all client communications.
Excellent organization, time management, problem-solving, and analytical skills.
Ability to work on timeline-bound assignments, handle pressure, and focus on results.
Intermediate-level hands-on experience with essential AWS services such as EC2, Lambda, RDS, DynamoDB, IAM, S3, VPC, Auto Scaling, CloudTrail, CloudWatch, SNS, SQS, SES, Direct Connect, Site-to-Site VPN, CloudFormation, Config, Systems Manager, Route 53, Cost Explorer, Savings Plans and Reserved Instances, Certificate Manager, Migration Hub, Application Migration Service, Database Migration Service, Organizations, and Control Tower.
Good working knowledge of basic infrastructure services such as Active Directory, DNS, networking, and security.

Desired Skills:
Intermediate-level hands-on experience with AWS services such as AppStream, WorkSpaces, Elastic Beanstalk, ECS, EKS, ElastiCache, Kinesis, and CloudFront.
Intermediate-level hands-on experience with IT orchestration and automation tools such as Ansible, Puppet, and Chef.
Intermediate-level hands-on experience with Terraform, Azure DevOps, and AWS developer services such as CodeCommit, CodeBuild, CodePipeline, and CodeDeploy.

Location: Noida - UI, Noida, Uttar Pradesh, India
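The landing-zone and CloudFormation experience this listing asks for ultimately comes down to authoring declarative templates. As a loose illustration (the logical ID, bucket name, and properties here are invented for the example, not taken from any real landing-zone design), a minimal template can be assembled and serialized from plain Python:

```python
import json

def make_template(bucket_name: str) -> dict:
    """Assemble a minimal CloudFormation template as a Python dict.

    Illustrative only: one encrypted, versioned S3 bucket of the kind a
    landing zone might use for log archiving.
    """
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Description": "Minimal example: one encrypted, versioned S3 bucket.",
        "Resources": {
            "LogArchiveBucket": {  # hypothetical logical ID
                "Type": "AWS::S3::Bucket",
                "Properties": {
                    "BucketName": bucket_name,
                    "VersioningConfiguration": {"Status": "Enabled"},
                    "BucketEncryption": {
                        "ServerSideEncryptionConfiguration": [
                            {"ServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}
                        ]
                    },
                },
            }
        },
    }

# Serialize to the JSON CloudFormation actually consumes
template_json = json.dumps(make_template("example-log-archive"), indent=2)
print(template_json)
```

In practice a tool like Terraform or the CDK generates this layer, but the deployable artifact is still a declarative document of this shape.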

Posted 4 days ago

Apply

2.0 - 3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Title: AWS Cloud Engineer
Location: Noida

The ideal candidate must be self-motivated, with a proven track record as an AWS Cloud Engineer who can help in the implementation, adoption, and day-to-day support of an AWS cloud infrastructure environment distributed among multiple regions and business units. The individual in this role must be a technical expert on AWS who understands and practices the AWS Well-Architected Framework and is familiar with multi-account strategy deployment using a Control Tower / Landing Zone setup. The ideal candidate can manage day-to-day operations, troubleshoot problems, perform routine maintenance, and enhance system health monitoring on the cloud stack.

Technical Skills:
Strong experience with AWS IaaS architectures.
Hands-on experience deploying and supporting AWS services such as EC2, Auto Scaling, AMI management, snapshots, ELB, S3, Route 53, VPC, RDS, SES, SNS, CloudFormation, CloudWatch, IAM, Security Groups, CloudTrail, Lambda, etc.
Experience building and supporting Amazon WorkSpaces.
Experience deploying and troubleshooting either Windows or Linux operating systems.
Experience with AWS SSO and RBAC.
Understanding of DevOps tools such as Terraform, GitHub, and Jenkins.
Experience working with ITSM processes and tools such as Remedy and ServiceNow.
Ability to operate at all levels within the organization and cross-functionally within multiple client organizations.

Responsibilities:
Plan, automate, implement, and maintain the AWS platform and its associated services.
Provide SME / L2-and-above technical support.
Carry out deployment and migration activities.
Mentor and provide technical guidance to L1 engineers.
Monitor AWS infrastructure and perform routine maintenance and operational tasks.
Work on ITSM tickets and ensure adherence to support SLAs.
Work on change management processes.
Excellent analytical and problem-solving skills.
Exhibits excellent service to others.

Qualifications:
At least 2 to 3 years of relevant experience on AWS.
Overall 3-5 years of IT experience working for a global organization.
Bachelor's degree or higher in Information Systems, Computer Science, or equivalent experience.
AWS Certified Cloud Practitioner preferred.

Location: Noida - UI, Noida, Uttar Pradesh, India
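Several listings on this page, including this one, ask for hands-on Auto Scaling experience. The core arithmetic of a target-tracking policy can be sketched as a pure function: scale the fleet by the ratio of the observed metric to its target, then clamp to the group's bounds. This mirrors the proportional rule commonly described for target tracking; the default bounds below are illustrative, not any specific group's configuration.

```python
import math

def desired_capacity(current: int, metric_value: float, target_value: float,
                     min_size: int = 1, max_size: int = 10) -> int:
    """New desired capacity under a target-tracking-style policy.

    Scale out when the observed metric exceeds the target, scale in when
    it falls below, always rounding up and clamping to group bounds.
    """
    if target_value <= 0:
        raise ValueError("target_value must be positive")
    proposed = math.ceil(current * metric_value / target_value)
    return max(min_size, min(max_size, proposed))

# 4 instances running at 90% CPU against a 60% target -> scale out to 6
print(desired_capacity(4, 90.0, 60.0))  # 6
```

Rounding up biases the policy toward scaling out, which is the safer failure mode for availability.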

Posted 4 days ago

Apply

4.0 years

0 Lacs

India

Remote

Job Title: Senior Backend & DevOps Engineer (AI-Integrated Products)
Location: Remote
Employment Type: Full Time / Freelance / Part Time
Work Hours: Flexible work timing

About Us: We're building AI-powered products that seamlessly integrate technology into everyday routines. We're now looking for a Senior Backend & DevOps Engineer who can help us scale mobile apps globally and architect their backend, while owning infrastructure, performance, and AI integrations.

Responsibilities:

Backend Engineering:
Own and scale the backend architecture (Node.js/Express or similar) for mobile apps.
Build robust, well-documented, and performant APIs.
Implement user management, session handling, usage tracking, and analytics.
Integrate third-party services including OpenAI, Whisper, and other LLMs.
Optimize app-server communication and caching for global scale.

DevOps & Infrastructure:
Maintain and scale AWS/GCP infrastructure (EC2, RDS, S3, CloudFront/CDN, etc.).
Set up CI/CD pipelines (GitHub Actions preferred) for smooth deployment.
Monitor performance, set up alerts, and handle autoscaling across regions.
Manage cost-effective global infrastructure scaling and ensure low-latency access.
Handle security (IAM, secrets management, HTTPS, CORS policies, etc.).

AI & Model Integration:
Integrate LLMs like GPT-4, Mistral, Mixtral, and open-source models.
Support fine-tuning, inference pipelines, and embeddings.
Build offline inference support and manage transcription workflows (Whisper, etc.).
Set up and optimize vector databases like Qdrant, Weaviate, and Pinecone.

Requirements:
4+ years of backend experience with Node.js, Python, or Go.
2+ years of DevOps experience with AWS/GCP/Azure, Docker, and CI/CD.
Experience deploying and managing AI/ML pipelines, especially LLMs and Whisper.
Familiarity with vector databases, embeddings, and offline inference.
Strong understanding of performance optimization, scalability, and observability.
Clear communication skills and a proactive mindset.

Bonus Skills:
Experience working on mobile-first apps (React Native backend knowledge is a plus).
Familiarity with Firebase, Vercel, Railway, or similar platforms.
Knowledge of data privacy, GDPR, and offline sync strategies.
Past work on productivity, journaling, or health/fitness apps.
Experience self-hosting LLMs or optimizing AI pipelines on edge/cloud.

Please share (optional):
A brief intro about you and your experience.
Links to your GitHub/portfolio or relevant projects.
Resume or LinkedIn profile.
Any AI/infra-heavy work you're particularly proud of.

Contact: subham@binaryvlue.com
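The vector-database work this role mentions (Qdrant, Weaviate, Pinecone) rests on nearest-neighbor search over embeddings. A toy sketch of the underlying ranking step, with made-up document IDs and three-dimensional vectors standing in for real embeddings:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def top_k(query, corpus, k=2):
    """Rank stored vectors by similarity to the query: the core operation a
    vector database performs, just without the approximate index it adds
    to make this fast at scale."""
    scored = sorted(corpus.items(),
                    key=lambda kv: cosine_similarity(query, kv[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

# Hypothetical corpus: IDs and vectors are invented for the example
corpus = {
    "doc-a": [1.0, 0.0, 0.0],
    "doc-b": [0.9, 0.1, 0.0],
    "doc-c": [0.0, 1.0, 0.0],
}
print(top_k([1.0, 0.05, 0.0], corpus))  # doc-a and doc-b rank highest
```

Production systems replace the exhaustive sort with an approximate index (e.g. HNSW), but the similarity metric is the same.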

Posted 4 days ago

Apply

0 years

0 Lacs

Chandigarh, India

On-site

Role Overview: We are looking for a skilled System Architect with proven experience in designing and scaling SaaS platforms. You will play a critical role in translating product vision into scalable, secure, and maintainable architecture for our products.

Key Responsibilities:
Architectural Planning: Design scalable, robust, and cloud-native architecture for SaaS platforms. Define the system blueprint, including microservices, API layers, databases, and third-party integrations.
Technology Decision-Making: Select appropriate tech stacks and tools for backend, frontend, deployment, CI/CD, and monitoring. Evaluate trade-offs and maintain a balance between business needs and technical feasibility.
Documentation & Standards: Create architecture documents, flow diagrams, and data models. Define coding, security, and testing standards for the team.
Collaboration: Work closely with Product Owners, Developers, DevOps, and QA to ensure smooth implementation of the architecture. Review and guide code and design decisions across teams.
Security & Compliance: Implement architecture that adheres to security best practices (OWASP, GDPR, ISO). Ensure data protection and role-based access design from day one.
Performance & Scalability: Plan for load balancing, caching, latency optimization, and database optimization. Conduct performance testing and guide mitigation strategies.

Required Skills:
- Strong experience in SaaS application architecture (multi-tenant, modular).
- Expertise in cloud platforms (AWS, GCP, or Azure).
- Solid understanding of microservices, RESTful APIs, GraphQL, and event-driven systems.
- Familiarity with database design (SQL and NoSQL) and data modeling.
- Experience with DevOps practices (CI/CD pipelines, containers, monitoring).
- Knowledge of security standards and compliance frameworks for SaaS.
- Ability to create UML diagrams, sequence flows, and ERDs.
- Excellent communication and documentation skills.

Preferred Qualifications:
- Hands-on experience with SaaS platforms in AdTech, SEO, or MarTech domains.
- Experience with multi-region deployments, autoscaling, and serverless architecture.
- Worked on legacy migrations or version upgrades of SaaS platforms.
- Familiarity with AI/ML integration in SaaS (a plus).
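The multi-tenant architecture this role centers on usually hinges on scoping every read to a single tenant. A deliberately simplified sketch of that idea (the tenant_id column and the SQL strings are illustrative; production systems typically enforce this in the ORM layer or with database row-level security policies):

```python
def tenant_scoped(query: str, params: tuple, tenant_id: str) -> tuple:
    """Append a tenant_id predicate so a query only sees one tenant's rows.

    Naive sketch: assumes a flat query whose only (optional) WHERE clause
    appears literally in the string.
    """
    if "WHERE" in query.upper():
        scoped = f"{query} AND tenant_id = %s"
    else:
        scoped = f"{query} WHERE tenant_id = %s"
    return scoped, params + (tenant_id,)

# Hypothetical table and tenant names, for illustration only
q, p = tenant_scoped("SELECT id, name FROM projects", (), "acme")
print(q)  # SELECT id, name FROM projects WHERE tenant_id = %s
```

Centralizing the predicate in one helper (or one policy) is what keeps "forgot the tenant filter" bugs out of a multi-tenant codebase.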

Posted 4 days ago

Apply

15.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Designation: AVP - Engineering (Head of Engineering, My11Circle)

About Us
Games24x7 is a technology-powered company delivering cutting-edge online gaming experiences across multiple formats. Our brand My11Circle is one of India's largest fantasy sports platforms, and PokerCircle is one of the leading real-money poker platforms in the country. Together, they represent our commitment to building world-class skill-based games with immersive, high-performance experiences.
At My11Circle, scale is in our DNA. Our systems handle extreme workloads: 400,000 database writes per second with 99th-percentile latencies under 15 ms, and 18,000+ contest join requests per second during events like the toss in a cricket match. Our poker platform, PokerCircle, delivers secure, real-time multiplayer gameplay with low-latency networking, complex matchmaking, and game logic, all backed by scalable microservices and robust backend design.
We operate a modern tech stack: ReactJS and React Native on the front end; Java and Node.js on the back end; and MySQL, MongoDB, DynamoDB, ScyllaDB, Cassandra, etc. on the data layer. Our cloud-native infrastructure is built on AWS, with ML-powered autoscaling to anticipate traffic spikes and maintain top-tier performance.
If solving real-time, high-performance engineering challenges at massive scale while building best-in-class consumer products excites you, then you'll thrive in this role.

Role Overview
We're looking for an AVP of Engineering to lead product engineering for both My11Circle and PokerCircle. This includes end-to-end ownership of technical execution, architecture, quality, and scalability for two of India's largest and fastest-growing gaming platforms. You'll lead a talented team of ~50 engineers across multiple pods, working closely with product, design, analytics, and business stakeholders. The role demands deep technical expertise, strong people leadership, and a sharp product mindset.
You'll ensure our platforms remain fast, reliable, and user-centric, supporting tens of millions of users during major sporting and gaming events. This is a hands-on technical leadership role with high autonomy, where you'll influence strategy, drive innovation, and lead engineering execution across two highly dynamic and exciting products.

What You'll Do
Lead Product Engineering Across Fantasy & Poker: Own engineering delivery for My11Circle (fantasy sports) and PokerCircle (online poker). Shape the technical roadmap in alignment with product goals, and ensure consistent innovation, speed, and scale across both platforms.
Manage & Mentor Engineering Teams: Lead and grow a team of ~50 engineers across full-stack, backend, infrastructure, and data engineering. Build and nurture a high-performance culture with strong ownership, technical depth, and business alignment.
Product-Centric Execution: Collaborate closely with Product Managers and Designers to translate business goals and product requirements into robust, scalable technical solutions. Ensure fast delivery of high-impact features with strong user value.
Architect Scalable Systems: Design and oversee large-scale distributed architectures capable of supporting 50M+ users, ensuring low-latency performance and reliability. Tackle concurrency, fault tolerance, and scaling challenges for real-time gameplay and contests.
Real-Time Game Systems: Build and optimize real-time gaming infrastructure, including multiplayer systems, contest engines, leaderboards, and matchmaking pipelines. Ensure smooth gameplay even during extreme concurrency (e.g., toss events or poker rush hours).
Data Infrastructure & Optimization: Oversee our data stack, from relational databases to distributed NoSQL systems. Implement sharding, caching, and replication strategies to handle 400K+ writes/sec while maintaining consistency, reliability, and performance.
ML-Driven Infrastructure Scaling: Own and evolve our ML-powered infrastructure autoscaling platform. Use predictive models to provision capacity ahead of major traffic surges (e.g., IPL finals) and optimize costs through smart resource management.
Cross-Functional Collaboration: Partner with business, design, data science, and customer teams to align engineering with user and business outcomes. Clearly communicate engineering trade-offs, timelines, and risks across the org.
Drive Technical Strategy & Innovation: Constantly evaluate emerging tools, frameworks, and architectural patterns. Make forward-looking decisions that modernize our stack, improve developer productivity, and drive competitive differentiation.
Champion Engineering Excellence: Uphold high standards in code quality, testing, monitoring, observability, security, and incident response. Foster a culture of continuous improvement and operational excellence across teams.

What We're Looking For
Experience: 15+ years in software development, with at least 5 years in senior engineering leadership roles. Experience leading large, cross-functional teams and owning product engineering end-to-end.
Product Engineering Leadership: Proven track record of building and scaling consumer-facing platforms. Strong experience working with product teams to define roadmaps, shape feature sets, and iterate based on customer feedback.
Expertise in Distributed Systems: Deep understanding of high-throughput architectures, real-time systems, messaging queues, database optimization, and caching strategies.
Cloud & DevOps Mastery: Hands-on experience with cloud-native platforms (preferably AWS), containers, CI/CD pipelines, monitoring tools, and infrastructure as code. Experience with auto-scaling, cost optimization, and ML-backed ops is a big plus.
People & Stakeholder Leadership: Strong team-building and mentoring skills. Ability to communicate clearly with both technical and non-technical stakeholders and manage complex cross-functional programs.
Execution-Focused Mindset: Bias toward action and results. Ability to make fast, pragmatic decisions in dynamic environments while building toward a long-term vision.

Preferred Qualifications
Gaming or Consumer-Tech Background: Experience building platforms in gaming, fantasy sports, poker, or real-money gaming environments is a big plus. Passion for gaming and an understanding of the user mindset in live games will set you apart.
Tech Stack Familiarity: Exposure to ReactJS, React Native, Java, Node.js, MySQL, Cassandra, ScyllaDB, Kafka, Redis, AWS, Kubernetes.
Startup & Scale Experience: Background in startups or fast-growth environments. You've scaled both systems and teams in high-pressure scenarios.
ML-Enabled Systems: Experience applying ML in engineering, e.g., for personalization, fraud prevention, or infra scaling, is a bonus.

Why Join Us
Impact at Scale: Your work will directly impact tens of millions of users and shape the gaming experiences of India's top fantasy and poker platforms.
Full Product Ownership: You'll lead product engineering across two distinct gaming domains, influencing everything from gameplay to performance to reliability.
Innovation-Friendly Culture: We encourage experimentation, support hackathons, and give you the autonomy to try bold ideas with clear user impact.
Modern Tech & High-Caliber Team: Work on some of the most demanding real-time systems in the country, with a smart, passionate team that loves tech and games.
Stable Yet Agile: Enjoy the dynamism of a startup, backed by the stability and resources of a profitable company that's scaling rapidly.

Come join us at Games24x7 to lead the engineering behind adrenaline at scale. Build the future of fantasy sports and poker for India and beyond.
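The 99th-percentile latency figures quoted above are computed from latency samples. A simplified nearest-rank sketch of that calculation, with invented sample values (real monitoring stacks like Prometheus estimate percentiles from histograms rather than raw samples):

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: the smallest sample such that at least p%
    of all samples are less than or equal to it."""
    if not samples or not (0 < p <= 100):
        raise ValueError("need samples and p in (0, 100]")
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

# 100 hypothetical write latencies in ms:
# 90 at 5 ms, 9 at 12 ms, and one 80 ms outlier
latencies = [5.0] * 90 + [12.0] * 9 + [80.0]
print(percentile(latencies, 99))  # 12.0 -- the lone outlier only surfaces at p100
```

This is also why p99 targets are more honest than averages: the mean of these samples hides the 80 ms request entirely.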

Posted 4 days ago

Apply

3.0 - 8.0 years

0 Lacs

Delhi

On-site

As a Snowflake Solution Architect, you will be responsible for owning and driving the development of Snowflake solutions and products as part of the COE. Your role will involve working with and guiding the team to build solutions using the latest innovations and features launched by Snowflake. Additionally, you will conduct sessions on the latest and upcoming launches of the Snowflake ecosystem and liaise with Snowflake Product and Engineering to stay ahead of new features, innovations, and updates. You will be expected to publish articles and architectures that can solve business problems. Furthermore, you will work on accelerators to demonstrate how Snowflake solutions and tools integrate with and compare to other platforms such as AWS, Azure Fabric, and Databricks.

In this role, you will lead the post-sales technical strategy and execution for high-priority Snowflake use cases across strategic customer accounts. You will also be responsible for triaging and resolving advanced, long-running customer issues while ensuring timely and clear communication. Developing and maintaining robust internal documentation, knowledge bases, and training materials to scale support efficiency will also be part of your responsibilities. Additionally, you will assist with enterprise-scale RFPs focused on Snowflake.

To be successful in this role, you should have at least 8 years of industry experience, including a minimum of 3 years in a Snowflake consulting environment. You should possess experience in implementing and operating Snowflake-centric solutions, along with proficiency in implementing data security measures, access controls, and design specifically within the Snowflake platform. An understanding of the complete data analytics stack and workflow, from ETL to data platform design to BI and analytics tools, is essential. Strong skills in databases, data warehouses, and data processing, as well as extensive hands-on expertise with SQL and SQL analytics, are required.

Familiarity with data science concepts and Python is a strong advantage. Knowledge of Snowflake components such as Snowpipe, query parsing and optimization, Snowpark, Snowflake ML, authorization and access control management, metadata management, infrastructure management and auto-scaling, and the Snowflake Marketplace for datasets and applications, as well as DevOps and orchestration tools like Airflow, dbt, and Jenkins, is necessary. Snowflake certifications would be good to have.

Strong communication and presentation skills are essential in this role, as you will be required to engage with both technical and executive audiences. You should also be skilled in working collaboratively across engineering, product, and customer success teams.

This position is open in all Xebia office locations, including Pune, Bangalore, Gurugram, Hyderabad, Chennai, Bhopal, and Jaipur. If you meet the above requirements and are excited about this opportunity, please share your details here: [Apply Now](https://forms.office.com/e/LNuc2P3RAf)

Posted 5 days ago

Apply

7.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Description
66degrees is seeking a Senior Consultant with specialized expertise in AWS. This consultant will lead and scale cloud infrastructure, ensuring high availability, automation, and security across AWS, GCP, and Kubernetes environments. You will be responsible for designing and maintaining highly scalable, resilient, and cost-optimized infrastructure while implementing best-in-class DevOps practices, CI/CD pipelines, and observability solutions. As a key part of our client's platform engineering team, you will collaborate closely with developers, SREs, and security teams to automate workflows, optimize cloud performance, and build the backbone of their microservices platform.
Candidates should have the ability to overlap with US working hours, be open to occasional weekend work, and be local to offices in either Noida or Gurgaon, India, as this is an in-office opportunity.

Qualifications
7+ years of hands-on DevOps experience with proven expertise in AWS; involvement in SRE or Platform Engineering roles is desirable.
Experience handling high-throughput workloads with occasional spikes.
Prior industry experience with live sports and media streaming.
Deep knowledge of Kubernetes architecture, managing workloads, networking, RBAC, and autoscaling is required.
Expertise in the AWS platform with hands-on VPC, IAM, EC2, Lambda, RDS, EKS, and S3 experience is required; the ability to learn GCP with GKE is desired.
Experience with Terraform for automated cloud provisioning; Helm is desired.
Experience with FinOps principles for cost optimization in cloud environments is required.
Hands-on experience building highly automated CI/CD pipelines using Jenkins, ArgoCD, and GitHub Actions.
Hands-on experience with service mesh technologies (Istio, Linkerd, Consul) is required.
Knowledge of monitoring tools such as CloudWatch and Google Cloud Logging, and distributed tracing tools like Jaeger; experience with Prometheus and Grafana is desirable.
Proficiency in Python and/or Go for automation, infrastructure tooling, and performance tuning is highly desirable.
Strong knowledge of DNS, routing, load balancing, VPN, firewalls, WAF, TLS, and IAM.
Experience managing MongoDB, Kafka, or Pulsar for large-scale data processing is desirable.
Proven ability to troubleshoot production issues, optimize system performance, and prevent downtime.
Knowledge of multi-region disaster recovery and high-availability architectures.

Desired
Contributions to open-source DevOps projects or a strong technical blogging presence.
Experience with KEDA-based autoscaling in Kubernetes.

(ref:hirist.tech)
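The Python/Go "automation and infrastructure tooling" this listing mentions routinely includes retry logic for throttled cloud APIs. A sketch of exponential backoff with full jitter; the attempt count and delays are illustrative defaults, and the sleep function is injectable so the logic can be tested without waiting:

```python
import random
import time

def with_retries(fn, max_attempts=5, base_delay=0.1, sleep=time.sleep):
    """Call fn, retrying with exponential backoff and full jitter.

    The delay ceiling doubles each attempt (base, 2*base, 4*base, ...),
    and the actual sleep is drawn uniformly from [0, ceiling) so that
    many retrying clients do not synchronize their retries.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts:
                raise  # out of attempts: surface the last error
            sleep(random.uniform(0, base_delay * 2 ** (attempt - 1)))

# Demo: a function that fails twice, then succeeds
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("throttled")
    return "ok"

print(with_retries(flaky, sleep=lambda _: None))  # succeeds on the third attempt
```

SDKs such as boto3 build in comparable retry behavior; hand-rolled tooling that shells out to APIs directly usually needs its own.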

Posted 6 days ago

Apply

3.0 - 7.0 years

0 Lacs

Punjab

On-site

You have an exciting opportunity to join as a DevSecOps engineer in Sydney. As a DevSecOps engineer, you should have 3+ years of extensive Python proficiency and 3+ years of Java experience. Your role will also require extensive exposure to technologies such as JavaScript, Jenkins, CodePipeline, CodeBuild, and the AWS ecosystem, including the AWS Well-Architected Framework, Trusted Advisor, GuardDuty, SCP, SSM, IAM, and WAF. It is essential for you to have a deep understanding of automation, quality engineering, architectural methodologies, principles, and solution design.

Hands-on experience with infrastructure-as-code tools like CloudFormation and CDK for automating deployments in AWS is preferred. Moreover, familiarity with operational observability, including log aggregation, application performance monitoring, deploying auto-scaling and load-balanced / highly available applications, and managing certificates (client-server, mutual TLS, etc.), is crucial for this role.

Your responsibilities will include improving the automation of security controls, working closely with the consumer showback team on defining processes and system requirements, and designing and implementing updates to the showback platform. You will collaborate with STO/account owners to uplift the security posture of consumer accounts, work with the onboarding team to ensure security standards and policies are correctly set up, and implement enterprise minimum security requirements from the Cloud Security LRP, including data masking, encryption monitoring, perimeter protections, ingress/egress uplift, and integration of SailPoint for SSO management.

Posted 6 days ago

Apply

2.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Responsibilities
Indicative years of experience: 4-6 years (at least 2 years of strong AWS hands-on experience)

Role Description
We are looking for a Senior Software Engineer who can work closely with the team to develop on-prem/cloud solutions using TypeScript, Java, and other scripting languages. The person should have good exposure to AWS managed services and be able to pair with leads in developing cloud-based solutions for customers. The role also requires a good understanding of extreme engineering practices such as TDD, unit test coverage, pair programming, and clean code practices.

Reporting Relationship
This role will report to a Delivery Manager / Senior Delivery Manager.

Key Responsibilities
Work independently in developing solutions in AWS and on-prem environments.
Work closely with tech leads to build strong design and engineering practices in the team.
Pair effectively with team members and tech leads to build and maintain a strong code quality framework.
Work closely with the Scrum Master to implement Agile best practices in the team.
Work closely with Product Owners to define user stories.
Work independently on production incidents reported by business partners to provide resolution within defined SLAs, coordinating with other teams as needed.
Act as an interface between the business and technical teams and communicate effectively.
Document problem resolutions and new learnings for future use; update SOPs.
Monitor system availability and communicate system outages to business and technical teams.
Provide support to resolve complex system problems; triage system issues beyond resolution to the appropriate technical teams.
Assist in analyzing, maintaining, implementing, testing, and documenting system changes and fixes.
Provide training to new team members and other teams on business processes and applications.
Manage the overall software development workflow.
Provide permanent resolutions for repeating issues.
Build automation for repetitive tasks.

Qualifications
Skills required:
Good exposure to TypeScript and AWS cloud development.
Core Java, Java 8 frameworks, and Java scripting; expertise in Spring Boot and Spring MVC.
Experience with the AWS database ecosystem, RDBMS or NoSQL databases; good exposure to SQL.
Good exposure to extreme engineering practices such as TDD, unit test coverage, clean code practices, pair programming, mobbing, and incremental value delivery.
Understanding of and exposure to microservice architecture; Domain-Driven Design and federation exposure would be an addition.
Good hands-on experience with the core AWS services (EC2, IAM, ECS, CloudFormation, VPC, Security Groups, NAT instances, Auto Scaling, Lambda, SNS/SQS, S3, event-driven services, etc.).
Strong notions of security best practices (e.g., using IAM roles, KMS, etc.).
Experience with monitoring solutions such as CloudWatch, Prometheus, and the ELK stack.
Experience building or maintaining cloud-native applications.
Past experience with serverless approaches using AWS Lambda is a plus.
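The listing pairs TDD with serverless AWS Lambda experience; one reason the two combine well is that a Lambda handler is just a function that can be exercised locally before any deployment. A minimal sketch (the API-Gateway-proxy-style "body" field and the greeting payload are assumptions for illustration):

```python
import json

def handler(event, context=None):
    """Minimal AWS Lambda-style handler: read a name from the request body
    and return an HTTP-shaped response dict."""
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# TDD-style check, runnable locally with no AWS infrastructure at all
resp = handler({"body": json.dumps({"name": "dev"})})
assert resp["statusCode"] == 200
assert json.loads(resp["body"])["message"] == "hello, dev"
print("handler test passed")
```

Keeping the handler free of SDK calls at the edges (pure input to pure output) is what makes this style of fast local testing possible.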

Posted 6 days ago

Apply

2.0 - 4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Responsibilities
Indicative years of experience: 2-4 years (at least 1 year of strong AWS hands-on experience)

Role Description
We are looking for a Software Engineer who can work closely with the team to develop on-prem/cloud solutions using TypeScript, Java, and other scripting languages. The person should have good exposure to AWS managed services and be able to pair with leads in developing cloud-based solutions for customers. The role also requires exposure to extreme engineering practices such as TDD, unit test coverage, pair programming, and clean code practices.

Reporting Relationship
This role will report to a Delivery Manager / Senior Delivery Manager.

Key Responsibilities
Work independently on medium and complex development stories in AWS and on-prem environments.
Work closely with senior software engineers during development, effectively using engineering practices as part of the development.
Pair effectively with team members and senior engineers to build and maintain a strong code quality framework.
Work closely with the Scrum Master to implement Agile best practices in the team.
Work closely with Product Owners while participating in grooming and developing user stories.
Work independently on production incidents reported by business partners to provide resolution within defined SLAs, coordinating with other teams as needed.
Document problem resolutions and new learnings for future use; update SOPs.
Monitor system availability and communicate system outages to business and technical teams.
Provide support to resolve complex system problems; triage system issues beyond resolution to the appropriate technical teams.
Assist in analyzing, maintaining, implementing, testing, and documenting system changes and fixes.
Provide training to new team members and other teams on business processes and applications.
Manage the overall software development workflow.
Provide permanent resolutions for repeating issues.
Build automation for repetitive tasks.

Qualifications
Skills required:
Good exposure to TypeScript and AWS cloud development.
Core Java, Java 8 frameworks, and Java scripting; expertise in Spring Boot and Spring MVC.
Experience with the AWS database ecosystem, RDBMS or NoSQL databases; good exposure to SQL.
Good exposure to extreme engineering practices such as TDD, unit test coverage, clean code practices, pair programming, mobbing, and incremental value delivery.
Understanding of and exposure to microservice architecture; Domain-Driven Design and federation exposure would be an addition.
Good hands-on experience with the core AWS services (EC2, IAM, ECS, CloudFormation, VPC, Security Groups, NAT instances, Auto Scaling, Lambda, SNS/SQS, S3, event-driven services, etc.).
Understand and follow security best practices (e.g., using IAM roles, KMS, etc.).
Experience with monitoring solutions such as CloudWatch, Prometheus, and the ELK stack.
Past experience with serverless approaches using AWS Lambda is a plus.

Posted 6 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.
