
16 AWS EC2 Jobs

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 - 10.0 years

14 - 24 Lacs

Hyderabad

Remote

Naukri logo

Role & responsibilities:
1. Prepare Helm charts and package applications for deployment.
2. Create manifests and database tunnels for seamless development and testing.
3. Create and maintain development support tools and CI/CD pipelines for multiple projects.
4. Understand the product and create dependency maps to ensure smooth project workflows.
5. Maintain and optimize DevOps tools, including on-premises GitLab and Gitpod.
6. Support and configure container registries, code scanners, and code-reporting tools.
7. Integrate and execute testing processes within CI/CD pipelines.
8. Use Terraform for infrastructure provisioning and management.

Operational:
9. Gain expertise in databases, including backups, restores, high availability, and failover strategies.
10. Implement least-privileged access and set up database tunneling for developer access.
11. Ensure comprehensive backups of repositories and branching structures.
12. Demonstrate proficiency in Kubernetes and Docker, with hands-on experience in CRDs, StatefulSets, PVs, PVCs, Docker volumes, and security contexts.
13. Use Helm for Kubernetes package management.
14. Use Ansible for configuration management.
15. Apply practical knowledge of Infrastructure as Code (IaC), VMware vSphere, Linux, and configuration management.
16. Implement, provision, and monitor a fleet of servers.

People:
17. Monitor infrastructure using tools such as Prometheus, Grafana, and Alertmanager.
18. Work with log-aggregation systems, write queries, and set up log shipping.
19. Automate routine tasks with hands-on Python and Bash scripting.
20. Use practical knowledge of CNCF-incubated tools such as Longhorn, Velero, Kasten, Harbor, and Rancher to build and maintain private clouds.
21. Implement DevSecOps practices and security tools to harden the infrastructure, network, and storage layers.

Preferred candidate profile:
1. Bachelor's degree in a related field or equivalent work experience.
2. Proficiency with scripting languages (Python, Bash) for automation.
3. Excellent understanding of GCP, AWS EC2, Linux, Kubernetes, Docker, Helm, Terraform, Ansible, Jenkins, GitLab CI, GitLab Runner, Longhorn, k3s, Velero backup, MinIO, and other related technologies.
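Several of the responsibilities above (manifests, database tunnels for developer access) boil down to wrapping `kubectl port-forward`. A minimal sketch of that kind of tooling — the service name, ports, and namespace below are hypothetical examples, not anything from the posting:

```python
import subprocess

def db_tunnel_cmd(service: str, local_port: int, remote_port: int,
                  namespace: str = "dev") -> list[str]:
    """Build a kubectl port-forward command that tunnels a cluster
    database service to a local port for developer access."""
    return [
        "kubectl", "port-forward",
        f"svc/{service}",
        f"{local_port}:{remote_port}",
        "-n", namespace,
    ]

def open_tunnel(service: str, local_port: int, remote_port: int,
                namespace: str = "dev") -> subprocess.Popen:
    # Runs in the background; call terminate() on the result to close it.
    return subprocess.Popen(db_tunnel_cmd(service, local_port,
                                          remote_port, namespace))

if __name__ == "__main__":
    print(" ".join(db_tunnel_cmd("postgres", 5433, 5432)))
```

In practice such helpers are wrapped in a CLI so developers never type the raw kubectl invocation.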

Posted 1 week ago

Apply

12.0 - 17.0 years

30 - 35 Lacs

Bengaluru

Work from Office


The Role: Sr. Engineer, Database Engineering

The Team: We are looking for a highly self-motivated, hands-on Sr. Engineer, Database Engineering, to focus on our database infrastructure estate, automation, and DevOps engineering within our Enterprise Solutions division.

The Impact: This is an excellent opportunity to join Enterprise Solutions as we transform and harmonize our infrastructure into a unified platform, while also developing your skills and furthering your career as we plan to power the markets of the future.

What's in it for you: This is the place to apply your existing database, infrastructure, DevOps, and leadership skills while gaining exposure to fresh and divergent technologies (e.g. AWS, Snowflake, Terraform, Python, CI/CD).

Responsibilities:
Team Leadership:
- Lead and mentor a team of DBAs, fostering a collaborative and high-performance work environment.
- Assign tasks, manage workloads, and ensure team members meet project deadlines.
- Conduct performance reviews and identify training needs to enhance technical capabilities.
Database Management:
- Oversee the installation, configuration, and maintenance of SQL Server, Oracle, and other database systems.
- Manage and optimize databases hosted on AWS RDS and AWS EC2 for performance, scalability, and security.
- Implement automated backup, restore, and recovery strategies for cloud-based databases.
- Manage database security policies, ensuring protection against unauthorized access.
Performance & Optimization:
- Monitor database performance and proactively implement tuning strategies.
- Optimize AWS RDS instances and EC2-hosted databases for cost efficiency and performance.
- Analyze system logs, resolve issues, and ensure minimal downtime.
Project & Change Management:
- Collaborate with development teams to support database design, deployment, and schema changes.
- Manage database migrations, upgrades, and patching processes, including AWS services.
Incident & Problem Management:
- Act as an escalation point for critical database issues.
- Drive root cause analysis for incidents and ensure preventive measures are implemented.
Documentation & Compliance:
- Maintain accurate documentation of database configurations, processes, and recovery procedures.
- Ensure compliance with data governance, security standards, and AWS best practices.

What We're Looking For:
- Technical Expertise: Proficient in SQL Server, Oracle, AWS RDS, and EC2 database environments.
- Cloud Knowledge: Strong understanding of AWS database services, including security, scaling, and cost optimization.
- Leadership Skills: Proven experience managing a DBA team or leading technical projects.
- Problem-Solving: Strong analytical skills with a proactive approach to troubleshooting.
- Communication: Excellent verbal and written communication skills for effective collaboration.
- Certifications: Preferred certifications include AWS Certified Database - Specialty, Microsoft Certified: Azure Database Administrator Associate, Oracle DBA certifications, or equivalent.

Experience Requirements:
- Minimum 12+ years of hands-on DBA experience.
- At least 2 years of experience in a leadership or team lead role.
- Experience working with AWS RDS, AWS EC2, and on-premises database environments.

Preferred Skills:
- Experience in PowerShell, T-SQL, and Python for automation.
- Knowledge of CI/CD pipelines and DevOps practices for database deployments.
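An automated backup strategy like the one described above needs a retention rule deciding which snapshots to keep. A minimal sketch of such a rule — the 7-daily / 4-weekly policy here is an assumed example, not the employer's actual policy:

```python
from datetime import date, timedelta

def snapshots_to_keep(snapshot_dates: list[date],
                      keep_daily: int = 7, keep_weekly: int = 4) -> set[date]:
    """Keep the most recent `keep_daily` daily snapshots, plus the newest
    snapshot from each of the most recent `keep_weekly` ISO weeks."""
    ordered = sorted(snapshot_dates, reverse=True)
    keep = set(ordered[:keep_daily])          # last N dailies
    weeks_seen: list[tuple[int, int]] = []
    for d in ordered:
        week = d.isocalendar()[:2]            # (ISO year, ISO week number)
        if week not in weeks_seen:
            weeks_seen.append(week)
            keep.add(d)                       # newest snapshot of that week
        if len(weeks_seen) >= keep_weekly:
            break
    return keep
```

Everything outside the returned set is a candidate for deletion; a real job would then call the RDS snapshot-deletion API on those dates.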

Posted 2 weeks ago

Apply

3.0 - 5.0 years

13 - 15 Lacs

Gurugram

Work from Office


A skilled DevOps Engineer to manage and optimize both on-premises and AWS cloud infrastructure. The ideal candidate will have expertise in DevOps tools, automation, system administration, and CI/CD pipeline management while ensuring security, scalability, and reliability.

Key Responsibilities:
1. AWS & On-Premises Solution Architecture:
- Design, deploy, and manage scalable, fault-tolerant infrastructure across both on-premises and AWS cloud environments.
- Work with AWS services like EC2, IAM, VPC, CloudWatch, GuardDuty, AWS Security Hub, Amazon Inspector, AWS WAF, and Amazon RDS with Multi-AZ.
- Configure Auto Scaling groups (ASG) and implement load balancing techniques such as ALB and NLB.
- Optimize cost and performance leveraging Elastic Load Balancing and EFS.
- Implement logging and monitoring with CloudWatch, CloudTrail, and on-premises monitoring solutions.
2. DevOps Automation & CI/CD:
- Develop and maintain CI/CD pipelines using Jenkins and GitLab for seamless code deployment across cloud and on-premises environments.
- Automate infrastructure provisioning using Ansible and CloudFormation.
- Implement CI/CD pipeline setups using GitLab, Maven, and Gradle, and deploy on Nginx and Tomcat.
- Ensure code quality and coverage using SonarQube.
- Monitor and troubleshoot pipelines and infrastructure using Prometheus, Grafana, Nagios, and New Relic.
3. System Administration & Infrastructure Management:
- Manage and maintain Linux and Windows systems across cloud and on-premises environments, ensuring timely updates and security patches.
- Configure and maintain application servers like Apache Tomcat, web servers like Nginx, and Node.js services.
- Implement robust security measures, SSL/TLS configurations, and secure communications.
- Configure DNS and SSL certificates.
- Maintain and optimize on-premises storage, networking, and compute resources.
4. Collaboration & Documentation:
- Collaborate with development, security, and operations teams to optimize deployment and infrastructure processes.
- Provide best practices and recommendations for hybrid cloud and on-premises architecture, DevOps, and security.
- Document infrastructure designs, security configurations, and disaster recovery plans for both environments.

Required Skills & Qualifications:
- Cloud & On-Premises Expertise: Extensive knowledge of AWS services (EC2, IAM, VPC, RDS, etc.) and experience managing on-premises infrastructure.
- DevOps Tools: Proficiency in SCM tools (Git, GitLab), CI/CD (Jenkins, GitLab CI/CD), and containerization.
- Code Quality & Monitoring: Experience with SonarQube, Prometheus, Grafana, Nagios, and New Relic.
- Operating Systems: Experience managing Linux/Windows servers and working with CentOS, Fedora, Debian, and Windows platforms.
- Application & Web Servers: Hands-on experience with Apache Tomcat, Nginx, and Node.js.
- Security & Networking: Expertise in DNS configuration, SSL/TLS implementation, and AWS security services.
- Soft Skills: Strong problem-solving abilities, effective communication, and proactive learning.

Preferred Qualifications:
- AWS certifications (Solutions Architect, DevOps Engineer) and a bachelor's degree in Computer Science or a related field.
- Experience with hybrid cloud environments and on-premises infrastructure automation.
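Much of the CloudWatch-style monitoring above reduces to one rule: fire when a metric breaches a threshold for N consecutive evaluation periods. A simplified sketch of that evaluation logic (the threshold and period count are illustrative, not AWS defaults):

```python
def alarm_state(datapoints: list[float], threshold: float,
                periods: int = 3) -> str:
    """Return "ALARM" if the last `periods` datapoints all exceed
    `threshold`, mimicking a CloudWatch-style consecutive-breach rule."""
    if len(datapoints) < periods:
        return "INSUFFICIENT_DATA"   # not enough history to evaluate
    recent = datapoints[-periods:]
    return "ALARM" if all(p > threshold for p in recent) else "OK"
```

Requiring consecutive breaches (rather than a single spike) is what keeps a transient CPU blip from paging anyone.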

Posted 3 weeks ago

Apply

4 - 6 years

6 - 8 Lacs

Bengaluru

Work from Office


We are looking for a Site Reliability Engineer!

You'll make a difference by: The SRE L1 Commander is responsible for ensuring the stability, availability, and performance of critical systems and services. As the first line of defense in incident management and monitoring, the role requires real-time response, proactive problem solving, and strong coordination skills to address production issues efficiently.

Monitoring and Alerting:
- Proactively monitor system health, performance, and uptime using monitoring tools such as Datadog and Prometheus.
- Serve as the primary responder for incidents, troubleshooting and resolving issues quickly to ensure minimal impact on end users.
- Accurately categorize incidents, prioritize them based on severity, and escalate to L2/L3 teams when necessary.
- Ensure systems meet Service Level Objectives (SLOs) and maintain uptime as per SLAs.
- Collaborate with DevOps and L2 teams to automate manual processes for incident response and operational tasks.
- Perform root cause analysis (RCA) of incidents using log aggregators and observability tools to identify patterns and recurring issues.
- Follow predefined runbooks/playbooks to resolve known issues and document fixes for new problems.

You'd describe yourself as having:
- 4 to 6 years of relevant experience in SRE, DevOps, or production support with monitoring tools (e.g., Prometheus, Datadog).
- Working knowledge of Linux/Unix operating systems, basic scripting skills (Python, GitLab CI), and cloud platforms (AWS, Azure, or GCP).
- Familiarity with container orchestration (Kubernetes, Docker, Helm charts) and CI/CD pipelines.
- Exposure to ArgoCD for implementing GitOps workflows and automated deployments for containerized applications.
- Experience with monitoring (Datadog), infrastructure (AWS EC2, Lambda, ECS/EKS, RDS), networking (VPC, Route 53, ELB), and storage (S3, EFS, Glacier).
- Strong troubleshooting and analytical skills to resolve production incidents effectively.
- A basic understanding of networking concepts (DNS, load balancers, firewalls).
- Good communication and interpersonal skills for incident communication and escalation.
- Preferred certifications: AWS Certified SysOps Administrator - Associate, AWS Certified Solutions Architect - Associate, or AWS Certified DevOps Engineer - Professional.
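SLO work like the above is usually framed as an error budget: a 99.9% availability target over 30 days allows roughly 43 minutes of downtime. A small sketch of that arithmetic:

```python
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Allowed downtime (in minutes) for an availability SLO over a window.
    e.g. slo=0.999 over 30 days -> ~43.2 minutes."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1 - slo)

def budget_remaining(slo: float, downtime_minutes: float,
                     window_days: int = 30) -> float:
    """Fraction of the error budget still unspent (negative = SLO breached)."""
    budget = error_budget_minutes(slo, window_days)
    return (budget - downtime_minutes) / budget
```

L1 responders often prioritize incidents by how fast they burn this budget, escalating when the burn rate threatens the SLO.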

Posted 1 month ago

Apply

7 - 9 years

25 - 32 Lacs

Chennai, Bengaluru

Work from Office


Hiring Cloud Engineers for an 8-month contract role based in Chennai or Bangalore with hybrid/remote flexibility. The ideal candidate will have 8+ years of IT experience, including 4+ years in AWS cloud migrations, with strong hands-on expertise in AWS MGN, EC2, EKS, Terraform, and scripting using Python or Shell. Responsibilities include leading lift-and-shift migrations, automating infrastructure, migrating storage to EBS, S3, and EFS, and modernizing legacy applications. AWS/Terraform certifications and experience with monolithic and microservices architectures are preferred.

Posted 1 month ago

Apply

6 - 8 years

18 - 20 Lacs

Hyderabad, Gurugram, Bengaluru

Work from Office


6+ years of hands-on experience with AWS services (Lambda, DynamoDB, SQS, SNS, S3, ECS, EC2); hands-on experience with each service is mandatory. Must have created Lambda functions and done scripting; deployment-only experience will not suffice. Hands-on Java, Spring Boot, microservices, and Kafka.
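A Lambda function consuming SQS events, as called for above, has a well-known handler shape: iterate over `event["Records"]` and process each body. A minimal sketch — the order-message format is a hypothetical example, though `batchItemFailures` is the real key AWS expects for partial-batch responses:

```python
import json

def handler(event, context):
    """AWS Lambda entry point for an SQS-triggered function.
    Each record's body is assumed to be a JSON order message."""
    processed = []
    for record in event.get("Records", []):
        message = json.loads(record["body"])
        processed.append(message["order_id"])   # real business logic goes here
    # Empty batchItemFailures tells SQS every record succeeded.
    return {"batchItemFailures": [], "processed": processed}

if __name__ == "__main__":
    # Local smoke test with a fake SQS event:
    fake_event = {"Records": [{"body": json.dumps({"order_id": "A-1"})}]}
    print(handler(fake_event, None))
```

Testing the handler locally with a hand-built event dict, as above, is a common way to verify logic before deploying.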

Posted 1 month ago

Apply

5 - 9 years

20 - 22 Lacs

Kolkata

Work from Office


- Design and architect scalable, high-performance, and secure systems.
- Collaborate with cross-functional teams to understand requirements and provide technical solutions.
- Work extensively with cloud platforms, primarily AWS, to design and implement cloud-native solutions.
- Architect and manage both RDBMS (e.g., MySQL, PostgreSQL) and NoSQL (e.g., MongoDB, DynamoDB) databases.
- Optimize application performance through CDN and caching services (e.g., CloudFront, Redis, Memcached).
- Stay hands-on with the latest technologies and architectural patterns to drive innovation and efficiency by creating POCs.
- Create system-level designs, ensuring robust integration and scalability.
- Assist and mentor development teams, providing guidance and troubleshooting complex issues.
- Ensure best practices in code quality, security, and system performance.
- Collaborate with stakeholders to define project scope, timelines, and deliverables.
- Track day-to-day progress of work and provide solutions to developers when they are stuck.
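The caching services mentioned above (Redis, Memcached) share a simple contract: store a value with a time-to-live, return it until it expires. A toy in-process sketch of that contract, for illustration only:

```python
import time

class TTLCache:
    """Minimal in-process cache with per-key expiry, mimicking the
    get/set-with-TTL contract of services like Redis or Memcached."""

    def __init__(self):
        self._store = {}                     # key -> (value, expires_at)

    def set(self, key, value, ttl_seconds):
        self._store[key] = (value, time.monotonic() + ttl_seconds)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]             # lazy eviction on read
            return default
        return value
```

The architectural win is the same in both the toy and the real service: expensive reads are served from memory, and stale data ages out automatically instead of requiring explicit invalidation.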

Posted 2 months ago

Apply

4 - 6 years

6 - 8 Lacs

Bengaluru

Work from Office


We are looking for a Site Reliability Engineer!

You'll make a difference by: The SRE L1 Commander is responsible for ensuring the stability, availability, and performance of critical systems and services. As the first line of defense in incident management and monitoring, the role requires real-time response, proactive problem solving, and strong coordination skills to address production issues efficiently.

Monitoring and Alerting:
- Proactively monitor system health, performance, and uptime using monitoring tools such as Datadog and Prometheus.
- Serve as the primary responder for incidents, troubleshooting and resolving issues quickly to ensure minimal impact on end users.
- Accurately categorize incidents, prioritize them based on severity, and escalate to L2/L3 teams when necessary.
- Ensure systems meet Service Level Objectives (SLOs) and maintain uptime as per SLAs.
- Collaborate with DevOps and L2 teams to automate manual processes for incident response and operational tasks.
- Perform root cause analysis (RCA) of incidents using log aggregators and observability tools to identify patterns and recurring issues.
- Follow predefined runbooks/playbooks to resolve known issues and document fixes for new problems.

You'd describe yourself as having:
- 4 to 6 years of relevant experience in SRE, DevOps, or production support with monitoring tools (e.g., Prometheus, Datadog).
- Working knowledge of Linux/Unix operating systems, basic scripting skills (Python, GitLab CI), and cloud platforms (AWS, Azure, or GCP).
- Familiarity with container orchestration (Kubernetes, Docker, Helm charts) and CI/CD pipelines.
- Exposure to ArgoCD for implementing GitOps workflows and automated deployments for containerized applications.
- Experience with monitoring (Datadog), infrastructure (AWS EC2, Lambda, ECS/EKS, RDS), networking (VPC, Route 53, ELB), and storage (S3, EFS, Glacier).
- Strong troubleshooting and analytical skills to resolve production incidents effectively.
- A basic understanding of networking concepts (DNS, load balancers, firewalls).
- Good communication and interpersonal skills for incident communication and escalation.
- Preferred certifications: AWS Certified SysOps Administrator - Associate, AWS Certified Solutions Architect - Associate, or AWS Certified DevOps Engineer - Professional.

Posted 2 months ago

Apply

2 - 4 years

9 - 19 Lacs

Ahmedabad

Work from Office


ThreatModeler Software Inc. is the industry's #1 automated threat modeling platform. Our patented technology enables intuitive, automated, collaborative threat modeling and integrates directly into every component of your DevSecOps toolchain, automating the "Sec" in DevSecOps from design to code to cloud at scale. ThreatModeler's SaaS platform ensures secure and compliant applications, infrastructure, and cloud assets at design time, saving millions in incident response costs, remediation costs, and regulatory fines. It is trusted by software, security, and cloud architects, engineers, and developers at companies across the world. Founded in 2010, ThreatModeler is headquartered in Jersey City, NJ.

The ideal candidate will be responsible for configuring and troubleshooting our product to resolve our customers' technical issues. You will support the customer by acting as the liaison between the customer and other internal teams. Your ability to work in a complex networking environment will also make you an ideal candidate.

As a Technical Product Support Engineer, you will be responsible for the deployment and support of the ThreatModeler web application, which helps companies build threat models of their internal as well as external applications. Our technology stack is built on AWS, AngularJS, Microsoft IIS, Microsoft .NET Core, and SQL Server. We develop and deploy on AWS EC2 and AWS RDS instances. You will be responsible for service delivery, reliability, and monitoring of all current internal and client-facing ThreatModeler application instances.

Responsibilities
- Deploy and maintain software on AWS EC2.
- Deploy and maintain databases on SQL Server 2014, 2016, 2017, and 2019 using the AWS RDS service.
- Deploy software on clients' on-premises infrastructure.
- Test and deploy upgrades and patches, and maintain artifacts across Development, Staging, Test, and Production environments.
- Provide application support and work collaboratively with the development team to deliver the right solutions to customers.
- Sustain and improve knowledge sharing throughout the company and with clients by designing and developing ThreatModeler support documents for each new software release.
- Maintain software licensing for customer and internal applications using CryptoLicensing software.
- Maintain and log tickets in Freshdesk, a cloud-based customer service platform for managing customer queries.
- Iterate on best practices to increase the quality and velocity of deployments.
- Provide support for integration of single sign-on services such as Okta, Azure Active Directory, Active Directory Federation Services, and Active Directory.
- Provide support for third-party integration services such as AWS, Azure Portal, Jenkins, JIRA, Azure Pipelines, and Azure Boards.
- Provide threat-framework information using SQL to ThreatModeler's Threat Research Center as required.
- Move quickly and intelligently, treating technical debt as your nemesis.
- Participate in 24/7 support for high availability of the software to clients.
- Reproduce customer environments and run tests.
- Manage and address electronic tickets efficiently.
- Liaise between the sales team, the customer success team, and customers to properly address customer problems.
- Troubleshoot and configure software and hardware.

Qualifications
- Bachelor's or master's degree in Computer Science or a related field.
- Demonstrated experience with the Microsoft IIS web server.
- Understanding of system administration in Microsoft environments.
- Ability to drive to goals and milestones while valuing and maintaining strong attention to detail.
- Understanding of REST APIs.
- Experience with delivery of a SaaS product.
- Understanding of AWS infrastructure and its services, such as AWS EC2 and AWS RDS.
- Strong communication and documentation skills.
- Excellent judgment, analytical thinking, and problem-solving skills.
- Full understanding of software development lifecycle best practices.
- Self-motivated individual with excellent time management and organizational skills.

Posted 2 months ago

Apply

7 - 9 years

27 - 30 Lacs

Bengaluru

Work from Office


We are seeking experienced Data Engineers with over 7 years of experience to join our team at Intuit. The selected candidates will be responsible for developing and maintaining scalable data pipelines, managing data warehousing solutions, and working with advanced cloud environments. The role requires strong technical proficiency and the ability to work onsite in Bangalore.

Key Responsibilities:
- Design, build, and maintain data pipelines to ingest, process, and analyze large datasets using PySpark.
- Work on data warehouse and data lake solutions to manage structured and unstructured data.
- Develop and optimize complex SQL queries for data extraction and reporting.
- Leverage AWS cloud services such as S3, EC2, EMR, Athena, and Redshift for data storage, processing, and analytics.
- Collaborate with cross-functional teams to ensure the successful delivery of data solutions that meet business needs.
- Monitor data pipelines and troubleshoot any issues related to data integrity or system performance.

Required Skills:
- 7+ years of experience in data engineering or related fields.
- In-depth knowledge of data warehouses and data lakes.
- Proven experience building data pipelines using PySpark.
- Strong expertise in SQL for data manipulation and extraction.
- Familiarity with AWS cloud services, including S3, EC2, EMR, Athena, Redshift, and other cloud computing platforms.

Preferred Skills:
- Python programming experience is a plus.
- Experience working in Agile environments with tools like JIRA and GitHub.
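Pipelines like those above mostly chain map/filter/aggregate steps. Since PySpark may not be installed everywhere, here is the same shape in pure Python: a group-and-aggregate step roughly equivalent to `df.groupBy(key).agg(F.sum(value))` in Spark (the field names are hypothetical):

```python
from collections import defaultdict

def total_by_key(rows: list[dict], key: str, value: str) -> dict:
    """Aggregate `value` per `key` — a pure-Python analogue of a
    Spark groupBy/sum aggregation over a list of records."""
    totals = defaultdict(float)
    for row in rows:
        totals[row[key]] += row[value]
    return dict(totals)

rows = [
    {"region": "south", "revenue": 10.0},
    {"region": "north", "revenue": 5.0},
    {"region": "south", "revenue": 2.5},
]
```

The Spark version distributes exactly this computation across partitions; the transformation logic is what the interview questions tend to probe.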

Posted 2 months ago

Apply

6 - 11 years

8 - 12 Lacs

Hyderabad

Work from Office


About the Role: Grade Level (for internal use): 10

This Senior Software Developer position will focus on developing software for our enterprise feed platforms. The candidate must have demonstrable experience with Java and database technologies, and experience in software development. The role requires design, development, testing, and support of platforms.

Responsibilities
- Be part of an agile team that designs, develops, and maintains the enterprise feed systems and other related software applications.
- Participate in design sessions for new product features and capabilities.
- Produce technical design documents and participate in technical walkthroughs.
- Engineer components and common services based on standard corporate development models, languages, and tools.
- Collaborate effectively with technical and non-technical stakeholders.
- Support and maintain production environments.

Requirements
- Bachelor's/PG degree in Computer Science, Information Systems, or equivalent.
- A minimum of 6+ years of strong experience in application development using Oracle or Microsoft technologies.
- Proficient with software development lifecycle (SDLC) methodologies such as Agile, Scrum, and test-driven development.
- Strong command of essential technologies: SQL, AWS EC2, S3, RDS, Redshift, AWS Lambda and Step Functions, Airflow, Terraform, Python/Java, T-SQL, PL/SQL.
- Good experience developing solutions involving relational database technologies on SQL Server and/or Oracle platforms.
- Excellent verbal and written communication skills.

Posted 3 months ago

Apply

5 - 10 years

22 - 35 Lacs

Chennai, Bengaluru, Hyderabad

Work from Office


We are looking for 10 Software Engineer IIs (Data) to play a key role in delivering software products within a high-impact engineering team. You will collaborate with Product Managers, User Experience Designers, Architects, and other team members to modernize and build products aligned with the product team's vision and strategy. These products will leverage multi-cloud platforms, human-centered design, Agile, and DevOps principles to deliver industry-leading solutions at high velocity and exceptional quality. You'll be working alongside talented software engineering professionals, leading by example, mentoring others, and thriving in a fast-paced environment by embracing inclusive behaviors, demonstrating attention to detail, and navigating ambiguity.

Team Overview
The Data and AI enablement teams focus on accelerating end-to-end data and AI adoption across advanced analytics, product teams, and sales and operations functions. The broader team is responsible for data strategy, availability, and adoption via self-service tools, including the AI model registry, activation of models for business, multichannel testing, and customer relationship measurement, all in alignment with Responsible AI principles. The Sales Hub team supports seamless execution and distribution of sales transactions. It plays a pivotal role in delivering accurate data promptly, enabling better inventory management, financial planning, sales performance improvement, customer engagement, risk mitigation, and competitive advantage.
Key Responsibilities
- Contribute to the delivery of complex solutions by breaking down big problems into smaller pieces.
- Actively participate in team planning activities.
- Ensure quality and integrity of the SDLC, and identify opportunities to improve team practices through recommended tools and methods.
- Triage complex issues independently.
- Stay informed of the technology landscape and help plan delivery of broad business needs across multiple applications.
- Set a consistent example of agile development practices and coach peers on cross-functional collaboration.
- Mentor junior engineers and help new hires ramp up.
- Contribute to and enhance internal libraries and tools.
- Understand the business domain supported by your applications.
- Proactively communicate status and issues to leadership.
- Identify risks and challenges in your work and team deliverables.
- Collaborate across teams to solve customer-centric problems creatively.
- Show commitment to critical delivery deadlines.

Basic Qualifications
- 3+ years of relevant professional experience with a bachelor's degree, or equivalent experience.
- 1+ years in cloud engineering and architecture with AWS services (e.g., EC2, Lambda, S3, RDS, API Gateway).
- 2+ years of experience in microservices architecture using Java, Kafka, and NoSQL databases.
- 2+ years of Java and Spring Boot development with a strong foundation in software engineering principles.
- Experience building and deploying microservices.
- Familiarity with CI/CD pipelines and tools such as Jenkins, GitLab CI, or AWS CodePipeline.
- Ability to understand business problems and apply engineering design principles to solve for scalability, performance, and security.
- Proven experience working directly with engineers, product managers, and stakeholders.
- Strong communication and collaboration skills.

Preferred Qualifications
- Experience in omni-channel retail or sales environments.
- Familiarity with Docker containerization and automated testing practices.
- Passion for keeping up with new technologies and industry trends.
- Proactive learning mindset and knowledge-sharing attitude.
- Experience with monitoring/logging tools like Prometheus, Grafana, ELK stack, or AWS CloudWatch.
- Understanding of security best practices in cloud environments.

Technical Skills (Tools, Technologies, Frameworks, Platforms)
- Programming languages & frameworks: Java, Spring Boot.
- Cloud platforms & services (AWS-focused): EC2, Lambda, S3, RDS, API Gateway, CodePipeline, CloudWatch.
- Microservices architecture: microservices design and development, service-to-service communication, API design and implementation.
- Databases: NoSQL databases.
- Data streaming / messaging systems: Kafka.
- DevOps & CI/CD tools: Jenkins, GitLab CI, CI/CD pipelines (general understanding and implementation).
- Containerization & virtualization: Docker.
- Monitoring and logging tools: Prometheus, Grafana, ELK stack (Elasticsearch, Logstash, Kibana), AWS CloudWatch.

Applied Technical Skills (Practices, Design Principles, Methodologies)
- Software engineering principles: modular and scalable architecture; object-oriented programming (OOP); code quality, maintainability, and reusability best practices.
- Agile development methodologies: Scrum/Kanban practices; agile ceremonies (planning, standups, retrospectives).
- DevOps practices: continuous integration / continuous deployment; Infrastructure as Code mindset.
- System design and architecture: breaking down complex systems into smaller manageable components; designing solutions that scale and perform under load.
- Cloud engineering & architecture principles: multi-cloud awareness; designing for resilience, fault tolerance, and cost efficiency.
- AI/ML enablement (supporting systems, not core model development): AI model registry integration, model activation pipelines, Responsible AI principles (adoption, governance).
- Data engineering & analytics support: data availability and strategy design, self-service data tooling enablement, multichannel testing frameworks, customer relationship measurement systems.
- Collaboration & communication: working with cross-functional teams (Product, UX, Architecture); problem-solving across team dependencies; proactive status reporting.
- Quality engineering: automated testing practices; ensuring high-quality delivery through unit, integration, and regression testing.
- Security best practices (cloud-focused): secure deployment patterns; identity and access management in cloud setups.
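Service-to-service messaging over Kafka, as listed above, generally requires idempotent consumers, because Kafka delivers at-least-once and redelivery happens on rebalance or retry. A minimal sketch of dedupe-by-message-ID (the message shape and in-memory `set` are illustrative; production systems track processed IDs in a durable store):

```python
class IdempotentConsumer:
    """Processes each message ID at most once, tolerating the
    at-least-once redelivery typical of Kafka consumers."""

    def __init__(self, handler):
        self.handler = handler
        self.seen = set()          # in production: a durable store, not a set

    def consume(self, message: dict) -> bool:
        """Return True if the message was processed, False if skipped."""
        msg_id = message["id"]
        if msg_id in self.seen:
            return False           # duplicate delivery: skip side effects
        self.handler(message)
        self.seen.add(msg_id)
        return True
```

This is the design principle behind "exactly-once effects on top of at-least-once delivery": uniqueness is enforced at the consumer, not promised by the broker.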

Posted 3 months ago

Apply

6 - 8 years

8 - 15 Lacs

Hyderabad

Work from Office


- Collaborate with MuleSoft architects and SMEs to establish the technical vision.
- Implement and support MuleSoft high-level integration solutions to satisfy validated business needs.
- Design and implement reusable assets, API specifications, and components.
- Create and continuously refine development best practices.
- Actively participate in requirement grooming sessions.
- Develop and maintain the platform CI/CD pipeline.
- Conduct peer reviews among the team and ensure code quality.

Technical Skills
- 6+ years of MuleSoft development experience.
- Certifications: MuleSoft Certified Developer Level 1 (mandatory); MuleSoft Certified Developer Level 2 (highly desirable).
- Proficient in API-led design (RAML and Swagger) and REST API implementation.
- Proficient in SSO techniques: OAuth 2.0, OpenID Connect, JWT, and SAML.
- Development expertise in Anypoint Studio, Flow Designer, Maven, ActiveMQ, Kafka, JMS, batch processing, and audit logging.
- Proficient in debugging and troubleshooting, particularly performance tuning for capacity issues.
- Experienced in microservices architecture and CI/CD techniques.
- Good understanding of the Salesforce platform and integration patterns, especially connectivity between MuleSoft and Salesforce.
- Good understanding of AWS EC2, Lambda, AWS MSK, and Splunk.
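The SSO techniques above (OAuth 2.0, OpenID Connect, JWT) all pass around tokens whose claims segment is just base64url-encoded JSON. A sketch of inspecting one — note this deliberately does NOT verify the signature, which any real integration must do before trusting the claims:

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the claims segment of a JWT *without* verifying the
    signature. For inspection/debugging only, never for auth decisions."""
    payload_b64 = token.split(".")[1]
    padded = payload_b64 + "=" * (-len(payload_b64) % 4)  # restore padding
    return json.loads(base64.urlsafe_b64decode(padded))

def make_unsigned_demo_token(claims: dict) -> str:
    """Build a demo header.payload.signature token for illustration."""
    enc = lambda d: base64.urlsafe_b64encode(
        json.dumps(d).encode()).rstrip(b"=").decode()
    return f'{enc({"alg": "none"})}.{enc(claims)}.sig'
```

Seeing that the payload is plain (unencrypted) JSON is exactly why JWTs must be sent over TLS and validated cryptographically server-side.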

Posted 3 months ago

Apply

3 - 5 years

12 - 14 Lacs

Delhi NCR, Mumbai, Bengaluru

Work from Office


We are looking for a highly skilled and motivated Data Engineer to join our dynamic team. In this role, you will collaborate with cross-functional teams to design, build, and maintain scalable data platforms on the AWS Cloud. You'll play a key role in developing next-generation data solutions and optimizing current implementations.

Key Responsibilities:
- Build and maintain high-performance data pipelines using AWS Glue, EMR, Databricks, and Spark.
- Design and implement robust ETL processes to integrate and analyze large datasets.
- Develop and optimize data models for reporting, analytics, and machine learning workflows.
- Use Python, PySpark, and SQL for data transformation and optimization.
- Ensure data governance, security, and performance on AWS Cloud platforms.
- Collaborate with stakeholders to translate business needs into technical solutions.

Required Skills & Experience:
- 3-5 years of hands-on experience in data engineering.
- Proficiency in Python, SQL, and PySpark.
- Strong knowledge of Big Data ecosystems (Hadoop, Hive, Sqoop, HDFS).
- Expertise in Spark (Spark Core, Spark Streaming, Spark SQL) and Databricks.
- Experience with AWS services like EMR, Glue, S3, EC2/EKS, and Lambda.
- Solid understanding of data modeling, warehousing, and ETL processes.
- Familiarity with data governance, quality, and security principles.

Location: Anywhere in India (Hyderabad, Ahmedabad, Pune, Chennai, Kolkata).
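ETL work like the above typically begins with a transform step that normalizes raw records before loading. A minimal stand-alone sketch — the field names and cleaning rules are invented for illustration:

```python
import csv
import io

RAW = """user_id,country,amount
 101 ,IN,250.5
102,in,
103,US,99.0
"""

def transform(raw_csv: str) -> list[dict]:
    """Clean raw rows: strip whitespace, uppercase country codes, and
    default missing amounts to 0.0 — a typical pre-load ETL step."""
    cleaned = []
    for row in csv.DictReader(io.StringIO(raw_csv)):
        cleaned.append({
            "user_id": int(row["user_id"].strip()),
            "country": row["country"].strip().upper(),
            "amount": float(row["amount"] or 0.0),
        })
    return cleaned
```

In a Glue or Spark job the same rules would run per-partition over S3 objects instead of an in-memory string, but the transform logic is identical.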

Posted 3 months ago

Apply

3 - 4 years

6 - 8 Lacs

Delhi NCR, Mumbai, Bengaluru

Work from Office


- Strong knowledge of PHP web frameworks.
- Understanding of the fully synchronous behavior of PHP.
- Understanding of MVC design patterns.
- Basic understanding of front-end technologies such as JavaScript, HTML5, and CSS3.
- Knowledge of object-oriented PHP programming.
- Understanding of accessibility and security compliance (depending on the specific project).
- Strong knowledge of common PHP and web server exploits and their solutions.
- Understanding of fundamental design principles behind a scalable application.
- User authentication and authorization between multiple systems, servers, and environments.
- Integration of multiple data sources and databases into one system.
- Familiarity with limitations of PHP as a platform and its workarounds.
- Creating database schemas that represent and support business processes.
- Familiarity with SQL/NoSQL databases and their declarative query languages.

Location: Bengaluru, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad, Remote, Delhi NCR.

Posted 3 months ago

Apply

5 - 9 years

7 - 17 Lacs

Hyderabad

Work from Office


Oracle DBA / AWS RDS DBA with experience in at least one of PostgreSQL or MySQL, and in automation using shell scripts, Jenkins, and Python. Experience in Oracle standalone and RAC database upgrades, both manual and DBUA-based. Certifications: AWS Certified Database - Specialty, Oracle Certified DBA, ITIL.

Posted 3 months ago

Apply