
177 CloudWatch Jobs - Page 2

JobPe aggregates job listings for easy access, but applications are submitted directly on the original job portal.

5.0 - 10.0 years

7 - 10 Lacs

Mumbai

Work from Office

Job Summary: We are looking for a highly skilled Senior Software Engineer to lead the development of scalable, high-performance web applications. You will work across the full stack, focusing on the frontend with Next.js, backend APIs with Django REST Framework, and database design in PostgreSQL. The ideal candidate is also experienced in cloud deployment on AWS and has strong proficiency in HTML, CSS, and JavaScript for crafting seamless UI/UX.

Key Responsibilities:
- Design, develop, and maintain full-stack applications using Next.js and Django REST Framework.
- Architect and optimize PostgreSQL schemas and queries for performance and reliability.
- Deploy, monitor, and scale applications on AWS (EKS, EC2, S3, RDS, CloudWatch, etc.).
- Build responsive, accessible, and performant frontend interfaces using HTML, CSS, JavaScript, and modern web standards.
- Collaborate with product managers, UI/UX designers, and other developers to define and deliver product features.
- Perform code reviews, write unit and integration tests, and enforce best practices in security and performance.
- Lead technical discussions and mentor junior developers.

Must-Have Skills:
- 5+ years of hands-on experience in full-stack web development.
- Frontend: expert in Next.js, React, HTML5, CSS3, JavaScript (ES6+).
- Backend: deep experience with Django and Django REST Framework.
- Database: strong in PostgreSQL, including query optimization and migrations.
- DevOps/Cloud: proficient in AWS services (EKS, EC2, S3, RDS, IAM, CloudFront, etc.), Docker, and CI/CD pipelines.
- Experience with RESTful APIs and modern authentication methods (JWT, OAuth).
- Familiarity with Git workflows, Jira, and Agile development.

Nice-to-Have Skills:
- Experience with serverless architecture (AWS Lambda, API Gateway).
- Familiarity with GraphQL, Redis, or Elasticsearch.
- Knowledge of accessibility (a11y) and web standards.
- Prior experience in a SaaS or D2C platform.

Soft Skills:
- Strong problem-solving and debugging skills.
- Excellent written and verbal communication.
- Ability to work independently with minimal supervision.
- Leadership qualities and a mentorship mindset.

Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field (preferred, but not mandatory with strong experience).
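For illustration only, a minimal sketch of the kind of Django REST Framework endpoint this role centers on, assuming it lives inside an already configured Django project and app; the model and field names are hypothetical, not from the posting:

    # A minimal DRF API over a PostgreSQL-backed model (hypothetical names).
    from django.db import models
    from rest_framework import serializers, viewsets


    class Product(models.Model):
        name = models.CharField(max_length=200)
        price = models.DecimalField(max_digits=10, decimal_places=2)


    class ProductSerializer(serializers.ModelSerializer):
        class Meta:
            model = Product
            fields = ["id", "name", "price"]


    class ProductViewSet(viewsets.ModelViewSet):
        # ModelViewSet provides list/create/retrieve/update/delete; indexed
        # PostgreSQL columns keep the underlying queries efficient at scale.
        queryset = Product.objects.all()
        serializer_class = ProductSerializer

A router registration (e.g., DefaultRouter) would then expose this as a REST endpoint consumed by the Next.js frontend.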

Posted 4 days ago

Apply

10.0 - 14.0 years

0 Lacs

karnataka

On-site

As a Principal Database Reliability Engineer (DBRE) at our high-growth e-commerce client based in Bangalore, you will be an integral part of revolutionizing how millions of households shop for groceries and daily essentials through a tech-first approach to supply chain and fulfillment operations. With over 10 years of experience in a fast-paced consumer tech environment, you will be responsible for architecting, automating, and securing the company's MySQL/PostgreSQL infrastructure on AWS.

Your role will involve designing and operating scalable, secure, and automated MySQL/PostgreSQL infrastructure on AWS, including RDS and EC2. You will lead Infrastructure-as-Code efforts using Terraform and scripting languages such as Python, Bash, and Go to automate repetitive database operations. Optimizing query, schema, and system performance for high-throughput workloads will be a key focus area, along with executing major version upgrades with rollback planning and zero downtime.

In this position, you will architect scale-out strategies such as replication, sharding, ProxySQL, Vitess, and Percona to support the company's growth across 30+ cities and 15M+ monthly orders. Implementing partitioning and data archival strategies to ensure long-term growth and compliance with security standards such as SOC 2, PCI DSS, and ISO 27001 will also be part of your responsibilities.

Furthermore, you will drive security hardening by implementing role-based access, encryption, and audit controls. Monitoring database health using tools like CloudWatch, Prometheus, Grafana, or PMM, supporting incident management, RCA, and postmortems, and contributing to systemic reliability improvements will also be essential tasks. Additionally, you will mentor other engineers and guide best practices on database architecture and performance across teams.
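As an illustration of the "automate repetitive database operations" theme, here is a minimal boto3 sketch that takes a date-stamped manual RDS snapshot and waits for it to become available; the instance identifier and region are hypothetical:

    # Hypothetical example: scripting a routine RDS operation with boto3.
    import datetime

    import boto3

    rds = boto3.client("rds", region_name="ap-south-1")

    def snapshot_instance(instance_id: str) -> str:
        """Create a manual snapshot and return its identifier."""
        stamp = datetime.datetime.utcnow().strftime("%Y%m%d-%H%M%S")
        snapshot_id = f"{instance_id}-manual-{stamp}"
        rds.create_db_snapshot(
            DBSnapshotIdentifier=snapshot_id,
            DBInstanceIdentifier=instance_id,
        )
        # Block until the snapshot is usable before reporting success.
        waiter = rds.get_waiter("db_snapshot_available")
        waiter.wait(DBSnapshotIdentifier=snapshot_id)
        return snapshot_id

In practice a script like this would be wrapped in Terraform-provisioned scheduling (e.g., an EventBridge rule) rather than run by hand.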

Posted 6 days ago

Apply

5.0 - 9.0 years

0 Lacs

haryana

On-site

As a Lead Cloud App Developer at Wipro Limited, a leading technology services and consulting company specializing in innovative solutions for complex digital transformation needs, you will join a global organization spanning over 65 countries with more than 230,000 employees and partners, committed to helping clients, colleagues, and communities thrive in a dynamic world.

You will need expertise in Terraform, AWS, and DevOps, and should hold certifications such as AWS Certified Solutions Architect Associate and AWS Certified DevOps Engineer Professional. Drawing on more than 6 years of IT experience, you will set up and maintain ECS solutions and design AWS solutions with services such as VPC, EC2, WAF, ECS, ALB, IAM, and KMS.

You are also expected to have experience with AWS services such as SNS, SQS, EventBridge, RDS, Aurora DB, Postgres DB, DynamoDB, Redis, AWS Glue jobs, and AWS Lambda; CI/CD using Azure DevOps; GitHub for source code management; and building cloud-native applications. Your responsibilities will include working with container technologies like Docker, configuring logging and monitoring solutions such as CloudWatch and OpenSearch, and managing system configurations using Terraform and Terragrunt.

In addition to technical skills, you should bring strong communication and collaboration abilities, be a team player, have excellent analytical and problem-solving skills, and understand Agile methodologies. The role also involves training others in procedural and technical topics, recommending process and architecture improvements, and troubleshooting distributed systems.

Join us at Wipro to be part of our journey to reinvent our business and industry. We are looking for individuals inspired by reinvention and committed to evolving themselves, their careers, and their skills. Be part of a purpose-driven business that empowers you to shape your reinvention. Realize your ambitions at Wipro, where applications from individuals with disabilities are warmly welcomed.

Experience Required: 5-8 years

To learn more about Wipro Limited, visit www.wipro.com.
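To illustrate the CloudWatch monitoring configuration this role describes, a minimal boto3 sketch creating a CPU alarm for an ECS service; every name, ARN, and threshold here is hypothetical:

    # Illustrative only: defining a CloudWatch alarm in code.
    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

    cloudwatch.put_metric_alarm(
        AlarmName="ecs-service-high-cpu",
        Namespace="AWS/ECS",
        MetricName="CPUUtilization",
        Dimensions=[
            {"Name": "ClusterName", "Value": "prod-cluster"},
            {"Name": "ServiceName", "Value": "api-service"},
        ],
        Statistic="Average",
        Period=300,           # evaluate in 5-minute windows
        EvaluationPeriods=2,  # require two consecutive breaches
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],
    )

In a Terraform/Terragrunt-managed estate the same alarm would normally be declared as an aws_cloudwatch_metric_alarm resource instead of created imperatively.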

Posted 6 days ago

Apply

2.0 - 6.0 years

0 Lacs

kolkata, west bengal

On-site

You are a passionate and customer-obsessed AWS Solutions Architect looking to join Workmates, the fastest-growing partner to the world's major cloud provider, AWS. Your role will involve driving innovation, building differentiated solutions, and defining new customer experiences to help customers maximize their AWS potential in their cloud journey. Working alongside industry specialist organizations and technology groups, you will play a key role in leading our customers towards native cloud transformation.

Choosing Workmates and the AWS Practice will let you elevate your AWS experience and skills in an innovative and collaborative environment, working at the forefront of cloud advancements and delivering innovative work as part of an extraordinary career. People are our biggest asset at Workmates, and together we aim to achieve best-in-class cloud-native operations. Be part of our mission to drive innovation across Cloud Management, Media, DevOps, Automation, IoT, Security, and more, where independence and ownership are valued, allowing you to thrive and contribute your best.

Responsibilities:
- Building and maintaining cloud infrastructure environments
- Ensuring availability, performance, security, and scalability of production systems
- Collaborating with application teams to implement DevOps practices
- Creating solution prototypes and conducting proofs of concept for new tools
- Designing repeatable, automated, and scalable processes to enhance efficiency
- Automating and streamlining operations and processes
- Troubleshooting and diagnosing issues/outages and providing operational support
- Engaging in incident handling and supporting a culture of post-mortems and knowledge sharing

Requirements:
- 2+ years of hands-on experience in building and supporting large-scale environments
- Strong architecting and implementation experience with AWS Cloud
- Proficiency in AWS CloudFormation and Terraform
- Experience with Docker containers and container environment deployment
- Good understanding of and work experience with Kubernetes and EKS
- Sysadmin and infrastructure background (Linux internals, filesystems, networking)
- Proficiency in scripting, particularly Bash
- Familiarity with CI/CD pipeline build and release
- Experience with CI/CD tools like Jenkins, GitLab, or TravisCI
- Hands-on experience with AWS developer tools such as CodePipeline, CodeBuild, CodeDeploy, AWS Lambda, and AWS Step Functions
- Experience with log management solutions (ELK/EFK or similar)
- Experience with configuration management tools like Ansible or similar
- Proficiency in modern monitoring and alerting tools like CloudWatch, Prometheus, Grafana, and Opsgenie
- Strong passion for automating routine tasks and solving production issues
- Experience in automation testing, script generation, and integration with CI/CD
- Familiarity with AWS security features (IAM, Security Groups, KMS, etc.)
- Good to have: experience with database technologies (MongoDB, MySQL, etc.)

Desired Skills:
- AWS Professional certifications
- CKA/CKAD certifications
- Knowledge of Python or Go
- Experience with service mesh and distributed tracing
- Familiarity with Scrum/Agile methodology

Join Workmates and be part of a team that values innovation, collaboration, and continuous improvement in the cloud technology landscape. Your expertise and skills will play a crucial role in driving customer success and shaping the future of cloud solutions.
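For a flavor of the AWS Lambda work listed above, a minimal event-driven handler sketch that logs newly uploaded S3 objects to CloudWatch Logs; the event shape is the standard S3 notification format, and the function itself is hypothetical:

    # Sketch of a small Lambda handler reacting to S3 object-created events.
    import json
    import logging

    logger = logging.getLogger()
    logger.setLevel(logging.INFO)

    def handler(event, context):
        # Each record describes one S3 object-created notification.
        for record in event.get("Records", []):
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            logger.info("New object: s3://%s/%s", bucket, key)
        return {"statusCode": 200, "body": json.dumps("processed")}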

Posted 6 days ago

Apply

5.0 - 9.0 years

0 Lacs

maharashtra

On-site

As a Lead Software Engineer at JPMorgan Chase within the Consumer and Community Banking, Consumer Card Technology team, you play a crucial role in an agile team dedicated to enhancing, building, and delivering trusted, market-leading technology products in a secure, stable, and scalable manner. You are a core technical contributor responsible for implementing innovative software solutions across various technical domains to support the business objectives of the firm.

You will execute creative software solutions by designing, developing, testing, and troubleshooting technically challenging issues with a focus on unconventional approaches. Additionally, you will lead evaluation sessions with external vendors, startups, and internal teams to assess architectural designs and technical applicability for integration into existing systems.

Your role involves developing, debugging, and maintaining high-quality code in a corporate environment using modern programming languages and database querying languages. You will also be responsible for software quality assurance best practices, testing methodologies, and cloud testing strategies, and you will gather, analyze, and visualize data sets to drive continuous improvement in software applications. Identifying opportunities to automate remediation of recurring issues and improve operational stability will be a key aspect of your responsibilities. You will lead communities of practice within Software Engineering to promote the adoption of new technologies and contribute to a culture of diversity, equity, inclusion, and respect.

For this role, you are required to have formal training or certification in software engineering concepts along with at least 5 years of practical experience. Hands-on experience in system design, application development, testing, and operational stability in a cloud environment, specifically AWS, is essential. Proficiency in Java, Spring Boot, and Kubernetes (EKS), plus expertise in automated testing, continuous delivery, Agile methodologies, and application resiliency and security, are necessary qualifications. You should also possess in-depth knowledge of distributed cloud deployments, cloud-native architectures, and the financial services industry.

Preferred qualifications include experience with Terraform, AWS Aurora Postgres, DynamoDB, EC2, CloudWatch, and other relevant technologies. Effective communication skills to convey technical direction across all organizational levels are highly valued for this role.

Posted 1 week ago

Apply

5.0 - 12.0 years

0 Lacs

hyderabad, telangana

On-site

As a Managed Services Provider (MSP), we are looking for an experienced TechOps Lead to take charge of our cloud infrastructure operations team. Your primary responsibility will be ensuring the seamless delivery of high-quality, secure, and scalable managed services across multiple customer environments, predominantly on AWS and Azure.

In this pivotal role, you will serve as the main point of contact for customers, offering strategic technical direction, overseeing day-to-day operations, and empowering a team of cloud engineers to address complex technical challenges. Conducting regular governance meetings with customers, you will provide insights and maintain strong, trust-based relationships. As our clients explore AI workloads and modern platforms, you will lead the team in rapidly adopting and integrating new technologies to stay ahead of evolving industry trends.

Your key responsibilities will include:
- Acting as the primary technical and operational contact for customer accounts
- Leading governance meetings with customers to review SLAs, KPIs, incident metrics, and improvement initiatives
- Guiding the team in diagnosing and resolving complex technical problems in AWS, Azure, and hybrid environments
- Ensuring adherence to best practices in cloud operations, infrastructure-as-code, security, cost optimization, monitoring, and compliance
- Staying updated on emerging cloud, AI, and automation technologies to enhance our service offerings
- Overseeing incident, change, and problem management activities to ensure SLA compliance
- Identifying trends from incidents and metrics and driving proactive improvements
- Establishing runbooks, standard operating procedures, and automation to reduce toil and improve consistency

To be successful in this role, you should possess:
- 12+ years of overall experience, with at least 5 years managing or delivering cloud infrastructure services on Azure and/or AWS
- Strong hands-on skills in Terraform, DevOps tools, monitoring, logging, and alerting, plus exposure to AI workloads
- A solid understanding of networking, security, IAM, and cost optimization in cloud environments
- Experience leading technical teams in a managed services or consulting environment
- The ability to quickly learn new technologies and guide the team in adopting them to solve customer problems

Nice-to-have skills include exposure to container platforms, multi-cloud cost management tools, AI/ML Ops services, and security frameworks, as well as relevant certifications such as AWS Solutions Architect, Azure Administrator, or Terraform Associate.

Posted 1 week ago

Apply

5.0 - 7.0 years

15 - 20 Lacs

Bengaluru

Work from Office

Role & responsibilities We are looking for an experienced Cloud Engineer with a strong foundation in cloud infrastructure, DevOps, monitoring, and cost optimization. The ideal candidate will be responsible for designing scalable architectures, implementing CI/CD pipelines, and managing secure and efficient cloud environments using AWS, GCP, or Azure. Key Responsibilities : - Design and deploy scalable, secure, and cost-optimized infrastructure across cloud platforms (AWS, GCP, or Azure) - Implement and maintain CI/CD pipelines using tools like Jenkins, GitLab CI/CD, or GitHub Actions - Set up infrastructure monitoring, alerting, and logging systems (e.g., CloudWatch, Prometheus, Grafana) - Collaborate with development and architecture teams to implement cloud-native solutions - Manage infrastructure security, IAM policies, backups, and disaster recovery strategies - Drive cloud cost control initiatives and resource optimization - Troubleshoot production and staging issues related to infrastructure and deployments Requirements Must-Have Skills: - 5-7 years of experience working with cloud platforms (AWS, GCP, or Azure) - Strong hands-on experience in infrastructure provisioning and automation - Expertise in DevOps tools and practices, especially CI/CD pipelines - Good understanding of network configurations, VPCs, firewalls, IAM, and security best practices - Experience with monitoring and log aggregation tools - Solid knowledge of Linux system administration - Familiarity with Git and version control workflows Good to Have: - Experience with Infrastructure as Code tools (Terraform, CloudFormation, Pulumi) - Working knowledge of Kubernetes or other container orchestration platforms (EKS, GKE, AKS) - Exposure to scripting languages like Python, Bash, or PowerShell - Familiarity with serverless architecture and event-driven designs - Awareness of cloud compliance and governance frameworks Preferred candidate profile

Posted 1 week ago

Apply

5.0 - 10.0 years

20 - 25 Lacs

Pune

Hybrid

Greetings from Intelliswift, an LTTS Company.

Role: Fullstack Developer
Work Location: Pune
Experience: 5 to 8 years

Job Summary: As a Fullstack Developer specializing in generative AI and cloud technologies, you will design, build, and maintain end-to-end applications on AWS. You'll leverage AWS services such as Bedrock, SageMaker, and Amplify, together with frameworks like LangChain, to integrate AI/ML capabilities, architect scalable infrastructure, and deliver seamless front-end experiences using React. You'll partner with UX/UI designers, ML engineers, DevOps teams, and product stakeholders to take features from concept through production deployment.

Job Description:
- 5+ years of professional experience as a Fullstack Developer building scalable web applications.
- Proficiency in Python and/or JavaScript/TypeScript; strong command of modern frameworks (React, Node.js).
- Hands-on AWS expertise: Bedrock, SageMaker, Amplify, Lambda, API Gateway, DynamoDB/RDS, CloudWatch, IAM, VPC.
- Architect and develop full-stack solutions using React for the front end, Python/Node.js for the back end, and AWS Lambda/API Gateway or containers for serverless services.
- Integrate generative AI capabilities leveraging AWS Bedrock, LangChain retrieval-augmented pipelines, and custom prompt engineering to power intelligent assistants and data-driven insights.
- Design and manage AWS infrastructure using CDK/CloudFormation for VPCs, IAM policies, S3, DynamoDB/RDS, and ECS/EKS.
- Implement DevOps/MLOps workflows: establish CI/CD pipelines (CodePipeline, CodeBuild, Jenkins), containerization (Docker), automated testing, and rollout strategies.
- Develop interactive UIs in React: translate Figma/Sketch designs into responsive components, integrate with backend APIs, and harness AWS Amplify for accelerated feature delivery.
- Solid understanding of AI/ML concepts, including prompt engineering, generative AI frameworks (LangChain), and model deployment patterns.
- Experience designing and consuming RESTful and GraphQL APIs.
- DevOps/MLOps skills: CI/CD pipeline creation, containerization (Docker), orchestration (ECS/EKS), infrastructure as code.
- Cloud architecture know-how: security groups, network segmentation, high-availability patterns, cost optimization.
- Excellent problem-solving ability and strong communication skills to collaborate effectively across distributed teams.

Share your updated profile at shakambnari.nayak@intelliswift.com with details.
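For illustration, a hedged sketch of invoking a foundation model through Amazon Bedrock with boto3. The model ID, region, and prompt format are assumptions; each Bedrock model family defines its own request/response schema and must be enabled in your account:

    # Hedged sketch: calling a Bedrock-hosted model (assumed model ID/region).
    import json

    import boto3

    bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

    body = json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 256,
        "messages": [{"role": "user", "content": "Summarize our Q1 sales data."}],
    })

    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # assumed model ID
        body=body,
    )
    # The response body is a stream; read and decode the JSON payload.
    print(json.loads(response["body"].read()))

In a retrieval-augmented setup, LangChain would wrap calls like this behind a retriever-plus-LLM chain rather than invoking the client directly.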

Posted 1 week ago

Apply

2.0 - 7.0 years

5 - 15 Lacs

Chennai

Work from Office

About the Role

We are seeking a proactive and experienced DevOps Engineer to manage and scale our new AWS-based cloud architecture. This role is central to building a secure, fault-tolerant, and highly available environment that supports our Sun, Drive, and Comm platforms. You'll play a critical role in automation, deployment pipelines, monitoring, and cloud cost optimization.

Key Responsibilities:
- Design, implement, and manage infrastructure using AWS services across multiple Availability Zones.
- Maintain and scale EC2 Auto Scaling Groups, ALBs, and secure networking layers (VPC, public/private subnets).
- Manage API Gateways, bastion hosts, and secure SSH/VPN access for developers and administrators.
- Set up and optimize Aurora SQL clusters with multi-AZ active-active failover and backup strategies.
- Implement and maintain observability using CloudWatch for centralized logging, metrics, and alarms.
- Enforce infrastructure-as-code practices using Terraform/CloudFormation.
- Configure and maintain CI/CD pipelines (e.g., GitHub Actions, Jenkins, or CodePipeline).
- Ensure backup lifecycle management using S3 tiering and retention policies.
- Collaborate with engineering teams to enable DevSecOps best practices and drive automation.
- Continuously optimize infrastructure for performance, resilience, and cost (e.g., Savings Plans, S3 lifecycle policies).

Must-Have Skills:
- Strong hands-on experience with AWS core services: EC2 (Linux and Windows), ALB, VPC, S3, Aurora (MySQL/PostgreSQL), CloudWatch, API Gateway, IAM, VPN.
- Deep understanding of multi-AZ, high-availability, and auto-healing architectures.
- Experience with CI/CD tools and scripting (Bash, Python, or Shell).
- Working knowledge of networking and cloud security best practices (Security Groups, NACLs, IAM roles).
- Experience with bastion architecture, client VPNs, Route 53, and VPC peering.
- Familiarity with backup/restore strategies and monitoring/logging pipelines.

Good-to-Have:
- Exposure to containerization (Docker/ECS/EKS) or readiness for future CloudFront/ElastiCache integration.
- Knowledge of cost management strategies on AWS (e.g., billing reports, Trusted Advisor).

Why Join Us?
- Work on a mission-critical mobility platform with a growing user base.
- Be a key part of transforming our legacy systems into a modern, scalable infrastructure.
- Collaborative and fast-paced environment with real ownership.
- Opportunity to drive automation and shape future DevSecOps practices.
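To illustrate the S3 tiering and retention work mentioned above, a minimal boto3 sketch applying a lifecycle policy; the bucket name, prefix, and day counts are hypothetical:

    # Illustrative backup lifecycle: tier down, then expire old backups.
    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_lifecycle_configuration(
        Bucket="example-backup-bucket",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "tier-and-expire-backups",
                    "Status": "Enabled",
                    "Filter": {"Prefix": "backups/"},
                    "Transitions": [
                        {"Days": 30, "StorageClass": "STANDARD_IA"},
                        {"Days": 90, "StorageClass": "GLACIER"},
                    ],
                    "Expiration": {"Days": 365},
                }
            ]
        },
    )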

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

karnataka

On-site

Thoucentric, the consulting arm of Xoriant, a renowned digital engineering services company with 5,000 employees, is looking for a skilled Integration Consultant with 5 to 6 years of experience to join their team. As part of the consulting business of Xoriant, you will be involved in business consulting, program and project management, digital transformation, product management, and process and technology solutioning and execution, across functional areas such as supply chain, finance and HR, and sales and distribution, in the US, UK, Singapore, and Australia.

Your role will involve designing, building, and maintaining data pipelines and ETL workflows using tools like AWS Glue, CloudWatch, PySpark, APIs, SQL, and Python. You will be responsible for creating and optimizing scalable data pipelines, developing ETL workflows, analyzing and processing data, monitoring pipeline health, integrating APIs, and collaborating with cross-functional teams to provide effective solutions.

Key Responsibilities:
- Pipeline creation and maintenance: design, develop, and deploy scalable data pipelines, ensuring data accuracy and integrity.
- ETL development: create ETL workflows using AWS Glue and PySpark, adhering to data governance and security standards.
- Data analysis and processing: write efficient SQL queries and develop Python scripts to automate data tasks.
- Monitoring and troubleshooting: use AWS CloudWatch to monitor pipeline performance and resolve issues promptly.
- API integration: integrate and manage APIs to connect external data sources and services.
- Collaboration: work closely with cross-functional teams to understand data requirements and communicate effectively with stakeholders.

Required Skills and Qualifications:
- Experience: 5-6 years
- Experience with the o9 Solutions platform is mandatory.
- Strong expertise in AWS Glue, CloudWatch, PySpark, Python, and SQL.
- Hands-on experience in API integration, ETL processes, and pipeline creation.
- Strong analytical and problem-solving skills.
- Familiarity with data security and governance best practices.

Preferred Skills:
- Knowledge of other AWS services such as S3, EC2, Lambda, or Redshift.
- Experience with PySpark, APIs, SQL optimization, and Python.
- Exposure to data visualization tools or frameworks.

Education: Bachelor's degree in Computer Science, Information Technology, or a related field.

In this role at Thoucentric, you will have the opportunity to define your career path, work in a dynamic consulting environment, collaborate with Fortune 500 companies and startups, and be part of a supportive working environment that encourages personal development. Join us in the exciting growth story of Thoucentric in Bangalore, India. (Posting date: 05/22/2025)
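By way of illustration, a skeleton of an AWS Glue PySpark job of the kind described above. It only runs inside the Glue job environment (the awsglue library is provided there), and the catalog database, table, and S3 path are hypothetical:

    # Skeleton Glue ETL job: read from the Data Catalog, filter, write Parquet.
    import sys

    from awsglue.context import GlueContext
    from awsglue.job import Job
    from awsglue.utils import getResolvedOptions
    from pyspark.context import SparkContext

    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    glue_context = GlueContext(SparkContext.getOrCreate())
    job = Job(glue_context)
    job.init(args["JOB_NAME"], args)

    # Read raw data registered in the Glue Data Catalog.
    frame = glue_context.create_dynamic_frame.from_catalog(
        database="sales_db", table_name="raw_orders"
    )

    # Basic quality gate, then write partitioned Parquet back to S3.
    clean = frame.toDF().filter("order_id IS NOT NULL")
    clean.write.mode("overwrite").partitionBy("order_date").parquet(
        "s3://example-bucket/curated/orders/"
    )
    job.commit()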

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

pune, maharashtra

On-site

Join us as a Cloud Data Engineer at Barclays, where you'll spearhead the evolution of the digital landscape, driving innovation and excellence. You'll harness cutting-edge technology to revolutionize digital offerings, ensuring unparalleled customer experiences. You may be assessed on key critical skills relevant for success in the role, such as risk and control, change and transformation, business acumen, strategic thinking, and digital technology, as well as job-specific skill sets.

To be successful as a Cloud Data Engineer, you should have:
- Experience with AWS Cloud technology for data processing and a good understanding of AWS architecture.
- Experience with compute services such as EC2, Lambda, Auto Scaling, and VPC.
- Experience with storage and container services such as ECS, S3, DynamoDB, and RDS.
- Experience with management and governance services: KMS, IAM, CloudFormation, CloudWatch, and CloudTrail.
- Experience with analytics services such as Glue, Athena, Crawler, Lake Formation, and Redshift.
- Experience with solution delivery for data processing components in larger end-to-end projects.

Desirable skill sets/good to have:
- AWS Certified professional.
- Experience in data processing on Databricks and Unity Catalog.
- Ability to drive projects technically with right-first-time deliveries within schedule and budget.
- Ability to collaborate across teams to deliver complex systems and components and manage stakeholders' expectations well.
- Understanding of different project methodologies, project lifecycles, major phases, dependencies, and milestones within a project, and the required documentation needs.
- Experience with planning, estimating, organizing, and working on multiple projects.

This role will be based out of Pune.

Purpose of the role: To build and maintain systems that collect, store, process, and analyze data, such as data pipelines, data warehouses, and data lakes, ensuring that all data is accurate, accessible, and secure.

Accountabilities:
- Build and maintain data architecture pipelines that enable the transfer and processing of durable, complete, and consistent data.
- Design and implement data warehouses and data lakes that manage appropriate data volumes and velocity and adhere to required security measures.
- Develop processing and analysis algorithms fit for the intended data complexity and volumes.
- Collaborate with data scientists to build and deploy machine learning models.

Analyst Expectations:
- Will have an impact on the work of related teams within the area.
- Partner with other functions and business areas.
- Take responsibility for the end results of a team's operational processing and activities.
- Escalate breaches of policies/procedures appropriately.
- Take responsibility for embedding new policies/procedures adopted due to risk mitigation.
- Advise and influence decision making within own area of expertise.
- Take ownership for managing risk and strengthening controls in relation to the work you own or contribute to.
- Deliver your work and areas of responsibility in line with relevant rules, regulations, and codes of conduct.
- Maintain and continually build an understanding of how your own sub-function integrates with the function, alongside knowledge of the organization's products, services, and processes within the function.
- Demonstrate understanding of how areas coordinate and contribute to the achievement of the objectives of the organization's sub-function.
- Resolve problems by identifying and selecting solutions through the application of acquired technical experience, guided by precedents.
- Guide and persuade team members and communicate complex/sensitive information.
- Act as a contact point for stakeholders outside of the immediate function, while building a network of contacts outside the team and external to the organization.

All colleagues are expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence, and Stewardship, our moral compass, helping us do what we believe is right. They are also expected to demonstrate the Barclays Mindset, to Empower, Challenge, and Drive, the operating manual for how we behave.

Posted 1 week ago

Apply

10.0 - 14.0 years

0 Lacs

pune, maharashtra

On-site

As a DataOps Engineer, you will play a crucial role within our data engineering team, operating at the intersection of software engineering, DevOps, and data analytics. Your primary responsibility will be creating and managing secure, scalable, and production-ready data pipelines and infrastructure that support advanced analytics, machine learning, and real-time decision-making capabilities for our clients.

Your key duties will include designing, developing, and overseeing the implementation of robust, scalable, and efficient ETL/ELT pipelines using Python and contemporary DataOps methodologies. You will also incorporate data quality checks, pipeline monitoring, and error handling mechanisms, and construct data solutions using cloud-native services on AWS such as S3, ECS, Lambda, and CloudWatch.

Furthermore, you will containerize applications using Docker and orchestrate them via Kubernetes for scalable deployments, and work with infrastructure-as-code tools and CI/CD pipelines to automate deployments effectively. You will also design and optimize data models using PostgreSQL, Redis, and PGVector, ensuring high-performance storage and retrieval while supporting feature stores and vector-based storage for AI/ML applications.

In addition to your technical responsibilities, you will actively drive Agile ceremonies such as daily stand-ups, sprint planning, and retrospectives to ensure successful sprint delivery. You will review pull requests (PRs), conduct code reviews, and uphold security and performance standards. Your collaboration with product owners, analysts, and architects will be essential in refining user stories and technical requirements.

To excel in this role, you need at least 10 years of experience in Data Engineering, DevOps, or Software Engineering roles with a focus on data products. Proficiency in Python, Docker, Kubernetes, and AWS (specifically S3 and ECS) is essential. Strong knowledge of relational and NoSQL databases such as PostgreSQL and Redis, and experience with PGVector, will be advantageous. A deep understanding of CI/CD pipelines, GitHub workflows, and modern source control practices is crucial, as is experience working in Agile/Scrum environments with excellent collaboration and communication skills.

Moreover, a passion for developing clean, well-documented, and scalable code in a collaborative setting, along with familiarity with DataOps principles encompassing automation, testing, monitoring, and deployment of data pipelines, will serve you well in this role.
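A compact sketch of the pipeline pattern described above: extract, apply a data-quality gate, then load, with explicit error handling. Table names and connection URLs are hypothetical:

    # Minimal ETL with a quality gate, assuming pandas and SQLAlchemy.
    import logging

    import pandas as pd
    from sqlalchemy import create_engine

    logger = logging.getLogger("etl")

    def run_pipeline(source_url: str, target_url: str) -> None:
        # Extract: pull the raw events table from the source database.
        df = pd.read_sql("SELECT * FROM events", create_engine(source_url))

        # Quality gate: fail fast rather than load bad data downstream.
        if df["event_id"].isna().any():
            raise ValueError("null event_id found; aborting load")

        # Load: append validated rows to the target table.
        try:
            df.to_sql("events_clean", create_engine(target_url),
                      if_exists="append", index=False)
        except Exception:
            logger.exception("load failed; source data left intact for replay")
            raise

In a production DataOps setup, the same gate would emit metrics to CloudWatch and run inside a containerized, CI/CD-deployed job rather than as a bare script.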

Posted 1 week ago

Apply

3.0 - 5.0 years

5 - 8 Lacs

Ahmedabad

Work from Office

We are seeking a certified and experienced AWS & Linux Administrator to support the infrastructure of SAP ECC systems running on Oracle databases in AWS. The role demands expertise in AWS services and enterprise Linux (RHEL/SLES), plus experience supporting SAP ECC and Oracle in mission-critical environments. This position follows a work shift aligned with US Pacific Time, which is 12:30 hours behind India time.

Key Responsibilities:
- Deploy, configure, and maintain Linux servers (RHEL/SLES) on AWS EC2 for SAP ECC and Oracle.
- Administer and monitor SAP ECC infrastructure and the Oracle DB back end, ensuring high availability and performance.
- Design and manage AWS infrastructure using EC2, EBS, VPC, IAM, S3, CloudWatch, and Backup services.
- Collaborate with SAP Basis and Oracle DBA teams to manage system patching, tuning, and upgrades.
- Implement backup and disaster recovery strategies for SAP and Oracle in AWS.
- Automate routine tasks using shell scripts, Ansible, or AWS Systems Manager.
- Ensure security, compliance, and system hardening of the SAP ECC and Oracle landscape.
- Support system refreshes, migrations, and environment cloning.
- Troubleshoot infrastructure-related incidents affecting SAP or Oracle availability.

Minimum Qualifications:
- AWS Certified SysOps Administrator Associate (or a higher AWS certification).
- Linux certification: Red Hat RHCSA/RHCE or SUSE Certified Administrator.
- 5+ years of experience managing Linux systems in enterprise or cloud environments.
- 3+ years of hands-on AWS infrastructure administration.
- Solid understanding of Oracle DB administration basics in SAP contexts (e.g., listener setup, tablespaces, logs).

Preferred Skills:
- Knowledge of SAP ECC on Oracle deployment architecture.
- Experience managing Oracle on AWS using EC2.
- Familiarity with SAP Notes, SAP EarlyWatch reports, and SAP/Oracle performance tuning.
- Understanding of hybrid connectivity, such as VPN/Direct Connect between on-premises environments and AWS.
- Hands-on experience with AWS CloudFormation, Terraform, or automation pipelines for infrastructure deployment.

Soft Skills:
- Analytical thinking with attention to root-cause analysis.
- Strong communication and documentation skills.
- Ability to coordinate across SAP, DBA, and DevOps teams.
- Flexibility to provide off-hours support as required.
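To illustrate the AWS Systems Manager automation mentioned above, a hedged boto3 sketch that runs a routine health check across tagged hosts without SSH; the tag, commands, and fleet are hypothetical:

    # Hypothetical routine task via SSM Run Command (no SSH needed).
    import boto3

    ssm = boto3.client("ssm")

    response = ssm.send_command(
        Targets=[{"Key": "tag:Role", "Values": ["sap-ecc"]}],
        DocumentName="AWS-RunShellScript",
        Parameters={"commands": ["df -h /oracle", "uptime"]},
        Comment="Routine disk/uptime check on SAP ECC hosts",
    )
    # The command ID can be polled later for per-instance output.
    print(response["Command"]["CommandId"])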

Posted 1 week ago

Apply

2.0 - 5.0 years

0 - 0 Lacs

Nagpur

Remote

Key Responsibilities:
- Provision and manage GPU-based EC2 instances for training and inference workloads.
- Configure and maintain EBS volumes and Amazon S3 buckets (versioning, lifecycle policies, multipart uploads) to handle large video and image datasets.
- Build, containerize, and deploy ML workloads using Docker and push images to ECR.
- Manage container deployment using Lambda, ECS, or AWS Batch for video inference jobs.
- Monitor and optimize cloud infrastructure using CloudWatch, Auto Scaling Groups, and Spot Instances to ensure cost efficiency.
- Set up and enforce IAM roles and permissions for secure access control across services.
- Collaborate with the AI/ML, annotation, and backend teams to streamline cloud-to-model pipelines.
- Automate cloud workflows and deployment pipelines using GitHub Actions, Jenkins, or similar CI/CD tools.
- Maintain logs, alerts, and system metrics for performance tuning and auditing.

Required Skills:

Cloud & Infrastructure:
- AWS services: EC2 (GPU), S3, EBS, ECR, Lambda, Batch, CloudWatch, IAM
- Data management: large file transfers, S3 multipart uploads, storage lifecycle configuration, archive policies (Glacier/IA)
- Security & access: IAM policies, roles, access keys, VPC (preferred)

DevOps & Automation:
- Tools: Docker, GitHub Actions, Jenkins, Terraform (bonus)
- Scripting: Python and shell scripting for automation and monitoring
- CI/CD: experience building and managing pipelines for model and API deployments

ML/AI Environment Understanding:
- Familiarity with GPU-based ML workloads
- Knowledge of model training and inference architecture (batch and real-time)
- Experience with containerized ML model execution is a plus

Preferred Qualifications:
- 2-5 years of experience in DevOps or cloud infrastructure roles
- AWS Associate/Professional certification (DevOps/Architect) is a plus
- Experience managing data-heavy pipelines, such as drone, surveillance, or video AI systems
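For the large video uploads mentioned above, a minimal sketch using boto3's transfer manager, which switches to multipart upload automatically past a size threshold; bucket, key, and file path are hypothetical:

    # Large-file upload sketch: multipart handled by boto3's transfer manager.
    import boto3
    from boto3.s3.transfer import TransferConfig

    s3 = boto3.client("s3")

    config = TransferConfig(
        multipart_threshold=64 * 1024 * 1024,  # use multipart above 64 MB
        multipart_chunksize=64 * 1024 * 1024,
        max_concurrency=8,                     # parallel part uploads
    )

    s3.upload_file(
        Filename="/data/videos/drone_footage.mp4",
        Bucket="example-training-data",
        Key="raw/drone_footage.mp4",
        Config=config,
    )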

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

chennai, tamil nadu

On-site

The ideal candidate for this role should possess the following technical skills:
- Proficiency in Java/J2EE, Spring/Spring Boot/Quarkus frameworks, microservices, Angular, Oracle, PostgreSQL, and MongoDB
- Experience with AWS services such as S3, Lambda, EC2, EKS, and CloudWatch
- Familiarity with event streaming using Kafka, plus Docker and Kubernetes
- Knowledge of GitHub and experience with CI/CD pipelines

In addition, it would be beneficial for the candidate to have:
- Hands-on experience with cloud platforms like AWS, Azure, or GCP
- Understanding of CI/CD pipelines and tools like Jenkins and GitLab CI/CD
- Familiarity with monitoring and logging tools such as Prometheus and Grafana

Overall, the successful candidate will have a strong technical background across these technologies and platforms, along with the ability to adapt to new tools and frameworks as needed.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

karnataka

On-site

As a Manager at Autodesk, you will lead the BI and Data Engineering team to develop and implement business intelligence solutions. Your role is crucial in empowering decision-makers through trusted data assets and scalable self-serve analytics. You will oversee the design, development, and maintenance of data pipelines, databases, and BI tools to support data-driven decision-making across the CTS organization.

Reporting to the leader of the CTS Business Effectiveness department, you will collaborate with stakeholders to define data requirements and objectives. Your responsibilities will include leading and managing a team of data engineers and BI developers, fostering a collaborative team culture, managing data warehouse plans, ensuring data quality, and delivering impactful dashboards and data visualizations. You will also collaborate with stakeholders to translate technical designs into business-appropriate representations, analyze business needs, and create data tools for analytics and BI teams. Staying up to date with data engineering best practices and technologies is essential to keep the company ahead of the industry.

To qualify for this role, you should have 3 to 5 years of experience managing data teams and a BA/BS in Data Science, Computer Science, Statistics, Mathematics, or a related field. Proficiency in Snowflake, Python, SQL, Airflow, Git, and big data environments such as Hive, Spark, and Presto is required. Experience with workflow management, data transformation tools, and version control systems is preferred. Additionally, familiarity with Power BI, the AWS environment, Salesforce, and remote team collaboration is advantageous.

The ideal candidate is a data ninja and leader who can derive insights from disparate datasets, understands Customer Success, tells compelling stories using data, and engages business leaders effectively.

At Autodesk, we are committed to creating a culture where everyone can thrive and realize their potential. Our values and ways of working help our people succeed, leading to better outcomes for our customers. If you are passionate about shaping the future and making a meaningful impact, join us in our mission to turn innovative ideas into reality.

Autodesk offers a competitive compensation package based on experience and location. In addition to base salaries, we provide discretionary annual cash bonuses, commissions, stock grants, and a comprehensive benefits package. If you are interested in a sales career at Autodesk or want to learn more about our commitment to diversity and belonging, please visit our website for more information.

Posted 1 week ago

Apply

6.0 - 11.0 years

18 - 30 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Role & responsibilities JD for Java + React + AWS Experience: 6 - 10 years Required Skills: Java, Spring, Spring Boot, React, microservices, JMS, ActiveMQ, Tomcat, Maven, GitHub, Jenkins, Linux/Unix, Oracle and PL/SQL, AWS EC2, S3, API Gateway, Lambda, Route53, Secrets Manager, CloudWatch Nice to have skills: Experience with rewriting legacy Java applications using Spring Boot & React Building serverless applications Ocean Shipping domain knowledge AWS CodePipeline Responsibilities: Develop and implement front-end and back-end solutions using Java, Spring, Spring Boot, React, microservices, Oracle and PL/SQL and AWS services. Experience working with business users in defining processes and translating those to technical specifications. Design and develop user-friendly interfaces and ensure seamless integration between front-end and back-end components. Write efficient code following best practices and coding standards. Perform thorough testing and debugging of applications to ensure high-quality deliverables. Optimize application performance and scalability through performance tuning and code optimization techniques. Integrate third-party APIs and services to enhance application functionality. Build serverless applications Deploy applications in AWS environment Perform Code Reviews. Pick up production support engineer role when needed Excellent grasp of application security concerns and remediation techniques. Well-rounded technical background in current web and micro-service technologies. Responsible for being an expert resource for architects in the development of target architectures to ensure that they can be properly designed and implemented through best practices. Should be able to work in a fast paced environment. Stay updated with the latest industry trends and emerging technologies to continuously improve skills and knowledge.

Posted 1 week ago

Apply

7.0 - 12.0 years

10 - 20 Lacs

Bengaluru

Work from Office

Requirements:
- 8+ years of experience in database technologies: AWS Aurora PostgreSQL, NoSQL (DynamoDB, MongoDB), and Erwin data modeling
- Experience with pg_stat_statements and query execution plans
- Experience with Apache Kafka, AWS Kinesis, Airflow, and Talend
- Experience with AWS CloudWatch, Prometheus, and Grafana

Required candidate profile:
- Experience with GDPR, SOC 2, Role-Based Access Control (RBAC), and encryption standards
- Experience with AWS Multi-AZ, read replicas, failover strategies, and backup automation
- Experience with Erwin, Lucidchart, Confluence, and JIRA
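As an illustration of the query-plan analysis this role calls for, a small sketch pulling an execution plan from PostgreSQL via psycopg2; the DSN, table, and query are hypothetical:

    # Illustrative only: inspect a query execution plan from Python.
    import psycopg2

    conn = psycopg2.connect("dbname=orders host=localhost user=dba")
    with conn.cursor() as cur:
        # psycopg2 interpolates parameters client-side, so EXPLAIN works here.
        cur.execute(
            "EXPLAIN (ANALYZE, BUFFERS) "
            "SELECT * FROM orders WHERE customer_id = %s",
            (42,),
        )
        for (line,) in cur.fetchall():
            print(line)  # each row is one line of the plan
    conn.close()

The same query's aggregate statistics would typically be cross-checked against pg_stat_statements before and after an index change.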

Posted 1 week ago

Apply

8.0 - 10.0 years

15 - 30 Lacs

Hyderabad

Work from Office

8+ years of hands-on experience with Java, including support and maintenance of legacy codebases. Strong familiarity with AWS services, especially EC2, RDS, S3, CloudWatch, and IAM.

Posted 1 week ago

Apply

6.0 - 14.0 years

0 Lacs

hyderabad, telangana

On-site

This is an opportunity for a Back End Engineer with 6-14 years of experience in technologies such as Java, Spring Boot, microservices, Python, AWS or cloud-native deployment, EventBridge, API Gateway, DynamoDB, and CloudWatch. The ideal candidate has at least 7 years of experience with these technologies and is comfortable working with complex code and requirements.

The essential functions of this position involve working with a tech stack that includes Java, Spring Boot, microservices, Python, AWS, EventBridge, API Gateway, DynamoDB, and CloudWatch. Required qualifications include expertise in Spring Boot (annotations, autowiring with reflection, Spring starters, auto-configuration vs. configuration), CI/CD tools, Gradle or Maven knowledge, Docker, containers, scale-up and scale-down, health checks, distributed tracing, exception handling in microservices, lambda expressions, threads, and streams.

Candidates with knowledge of GraphQL, prior experience on projects handling a lot of PII data, or experience in the financial services industry are preferred.

The job offers the opportunity to work on bleeding-edge projects and collaborate with a highly motivated team, along with a competitive salary, a flexible schedule, a benefits package including medical insurance, sports and corporate social events, professional development opportunities, and a well-equipped office.

Grid Dynamics (NASDAQ: GDYN) is the company offering this opportunity. They are a leading provider of technology consulting, platform and product engineering, AI, and advanced analytics services. With a focus on solving technical challenges and enabling positive business outcomes for enterprise companies undergoing business transformation, Grid Dynamics has expertise in enterprise AI, data, analytics, cloud & DevOps, application modernization, and customer experience. Founded in 2006, Grid Dynamics is headquartered in Silicon Valley with offices across the Americas, Europe, and India.

Posted 1 week ago

Apply

8.0 - 13.0 years

15 - 30 Lacs

Hyderabad

Hybrid

- Implement, maintain, refactor, and optimize the code and design of Java/JEE applications
- Strong familiarity with AWS services, especially EC2, RDS, S3, CloudWatch, and IAM
- Version control with Git; Java and AWS experience
- Comfortable analyzing logs and metrics
- CI/CD with Jenkins and GitHub Actions

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

kochi, kerala

On-site

As an AWS Cloud Engineer at our company based in Kerala, you will play a crucial role in designing, implementing, and maintaining scalable, secure, and highly available infrastructure solutions on AWS. You will collaborate closely with developers, DevOps engineers, and security teams to support cloud-native applications and business services.

Your key responsibilities will include designing, deploying, and maintaining cloud infrastructure using AWS services such as EC2, S3, RDS, Lambda, and VPC. Additionally, you will build and manage CI/CD pipelines, automate infrastructure provisioning using tools like Terraform or AWS CloudFormation, and monitor and optimize cloud resources through CloudWatch, CloudTrail, and third-party tools.

Furthermore, you will manage user permissions and security policies using IAM, ensure compliance, implement backup and disaster recovery plans, troubleshoot infrastructure issues, and respond to incidents promptly. It is essential that you stay updated with AWS best practices and new service releases to enhance our overall cloud infrastructure.

To be successful in this role, you should possess a minimum of 3 years of hands-on experience with AWS cloud services, a solid understanding of networking, security, and Linux system administration, and experience with DevOps practices and Infrastructure as Code (IaC). Proficiency in scripting languages such as Python and Bash, familiarity with containerization tools like Docker and Kubernetes (EKS preferred), and an AWS certification (e.g., AWS Solutions Architect Associate or higher) would be advantageous.

It would be considered a plus if you have experience with multi-account AWS environments, exposure to serverless architecture (Lambda, API Gateway, Step Functions), familiarity with cost optimization and the Well-Architected Framework, or previous experience in a fast-paced startup or SaaS environment. Your expertise across these services, from CloudFormation, Terraform, and Kubernetes (EKS) to EC2, S3, RDS, Lambda, VPC, CloudWatch, CloudTrail, IAM, and CI/CD pipelines, will be invaluable in fulfilling the responsibilities of this role effectively.

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

karnataka

On-site

As a Site Reliability Engineering (SRE) Technical Leader on the Network Assurance Data Platform (NADP) team at Cisco ThousandEyes, you will be responsible for ensuring the reliability, scalability, and security of the cloud and big data platforms. Your role will involve representing the NADP SRE team, contributing to the technical roadmap, and collaborating with cross-functional teams to design, build, and maintain SaaS systems operating at multi-region scale. Your efforts will be crucial in supporting machine learning (ML) and AI initiatives by ensuring the platform infrastructure is robust, efficient, and aligned with operational excellence.

You will design, build, and optimize cloud and data infrastructure to guarantee high availability, reliability, and scalability of big-data and ML/AI systems, implementing SRE principles such as monitoring, alerting, error budgets, and fault analysis. Additionally, you will collaborate with various teams to create secure and scalable solutions, troubleshoot technical problems, lead the architectural vision, and shape the technical strategy and roadmap.

Your role will also encompass mentoring and guiding teams, fostering a culture of engineering and operational excellence, engaging with customers and stakeholders to understand use cases and feedback, and using your strong programming skills to integrate software and systems engineering. Furthermore, you will develop strategic roadmaps, processes, plans, and infrastructure to efficiently deploy new software components at enterprise scale while enforcing engineering best practices.

To be successful in this role, you should have relevant experience (8-12 years) and a bachelor's degree in computer science or its equivalent. You should be able to design and implement scalable solutions, with hands-on experience in cloud (preferably AWS), Infrastructure-as-Code skills, experience with observability tools, proficiency in programming languages such as Python or Go, and a good understanding of Unix/Linux systems and client-server protocols. Experience building Cloud, Big Data, and/or ML/AI infrastructure is essential, along with a sense of ownership and accountability in architecting software and infrastructure at scale.

Additional qualifications that would be advantageous include experience with the Hadoop ecosystem, certifications in the cloud and security domains, and experience building or managing a cloud-based data platform.

Cisco encourages individuals from diverse backgrounds to apply, as the company values the perspectives and skills that emerge from employees with varied experiences. Cisco believes in unlocking potential and creating diverse teams that are better equipped to solve problems, innovate, and make a positive impact.

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

karnataka

On-site

Are you at an early stage of your career and looking to work in a practical domain? Do you have a desire to support scientists, researchers, students, and more by making relevant data accessible to them? If so, Elsevier is looking for individuals who bring accountability, innovation, and a strong sense of ownership to their work. We are committed to solving the world's most pressing information challenges and hiring individuals who truly care about delivering impactful solutions that benefit global research, healthcare, and education.

We are currently seeking a Senior Quality Engineer (SDET) who embodies our values of empowerment, collaboration, and continuous improvement. In this role, you will take end-to-end responsibility for product quality and serve as a Quality Engineering subject matter expert. Your primary focus will be ensuring that our products meet the highest standards through strong technical practices, innovation, and proactive ownership.

As a Senior Quality Engineer, you will develop and execute performance and automation tests. You will work closely with management to enhance quality and process standards, plan and execute effective test approaches, and ensure the on-time, efficient delivery of high-quality software products and/or data. The role requires an intermediate understanding of QA testing, including different testing methodologies, and covers both legacy and new innovation/acquisition products.

Your responsibilities will include putting customers first by delivering solutions with reliability, performance, and usability at their core. You will own the quality lifecycle of product features, collaborate with cross-functional teams, drive test strategy, automation, and continuous validation across the UI, API, and data layers, establish actionable metrics and feedback loops, champion the use of smart automation tools, mentor and uplift the quality engineering team, and drive continuous improvement through retrospectives and team ceremonies.

To qualify for this role, you should have a Bachelor's degree in Computer Science, any Engineering discipline, or a related field, along with 6+ years of experience in software testing and automation in agile environments. You should have a deep understanding of testing strategies, QA patterns, and cloud-native application architectures, as well as hands-on knowledge of programming languages such as JavaScript, Python, or Java. Experience with UI automation frameworks, observability tools, API testing tools, CI/CD tools, and performance/load testing tools is also required.

At Elsevier, we promote a healthy work/life balance and provide various well-being initiatives to help you meet both your immediate responsibilities and long-term goals. We offer comprehensive benefits to support your health and well-being, including health insurance, life insurance, flexible working arrangements, employee assistance programs, medical screenings, family benefits, paid time-off options, and more.

Join us at Elsevier, a global leader in information and analytics, where your work contributes to addressing the world's grand challenges and fostering a more sustainable future. We harness innovative technologies to support science and healthcare, partnering for a better world.
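To illustrate the API-layer test automation this role covers, a minimal pytest sketch against a REST endpoint; the base URL, route, and payload are hypothetical:

    # Minimal API contract test, assuming pytest and requests are installed.
    import requests

    BASE_URL = "https://api.example.com"

    def test_create_article_returns_201():
        resp = requests.post(
            f"{BASE_URL}/articles",
            json={"title": "Sample", "doi": "10.1000/xyz123"},
            timeout=10,
        )
        assert resp.status_code == 201
        body = resp.json()
        # Contract check: the service must echo back a server-assigned ID.
        assert "id" in body

Tests like this typically run in CI on every pull request, alongside UI and performance suites.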

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

chennai, tamil nadu

On-site

You should have 2 to 4 years of experience in Python scripting, along with experience using SQL in Athena on AWS to query and analyze large datasets. You should be proficient in SQL programming using Microsoft SQL Server, with the ability to create complex stored procedures, triggers, functions, and views. Experience with Crystal Reports and Jasper Reports would be an added advantage, and knowledge of AWS Lambda and CloudWatch Events is a plus.

The ideal candidate can work both independently and collaboratively with teams. Good communication skills are essential for this role. Knowledge of ASP.NET and GitHub would also be considered a plus.

Qualifications:
- UG: B.Tech/B.E. in any specialization
- PG: MCA in any specialization

Experience: 2 to 4 years
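For a sense of the Athena querying described above, a hedged boto3 sketch that submits a query, polls for completion, and reads the first rows; the database, table, and results bucket are hypothetical:

    # Sketch: run an Athena query from Python and fetch results.
    import time

    import boto3

    athena = boto3.client("athena")

    qid = athena.start_query_execution(
        QueryString="SELECT status, COUNT(*) FROM app_logs GROUP BY status",
        QueryExecutionContext={"Database": "analytics"},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )["QueryExecutionId"]

    # Poll until the query reaches a terminal state.
    while True:
        state = athena.get_query_execution(QueryExecutionId=qid)[
            "QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)

    if state == "SUCCEEDED":
        rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
        print(rows[:5])  # header row plus the first few data rows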

Posted 1 week ago

Apply