
541 Amazon EC2 Jobs

JobPe aggregates these listings for easy access, but you apply directly on the original job portal.

6.0 - 8.0 years

11 - 12 Lacs

Hyderabad

Work from Office

Source: Naukri

We are seeking a highly skilled Java Full Stack Engineer to join our dynamic development team. In this role, you will design, develop, and maintain both the frontend and backend components of our applications using Java and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate has a strong background in Java programming, experience with modern frontend frameworks, and a passion for full-stack development.

Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 6+ years of experience in full-stack development, with a strong focus on Java.

Roles & Responsibilities:
- Develop scalable web applications using Java (Spring Boot) on the backend and React/Angular on the frontend.
- Implement RESTful APIs to facilitate communication between frontend and backend.
- Design and manage databases using MySQL, PostgreSQL, Oracle, or MongoDB.
- Write complex SQL queries and stored procedures, and perform database optimization.
- Build responsive, user-friendly interfaces using HTML, CSS, and JavaScript, with frameworks such as Bootstrap, React, or Angular, plus Node.js and Python integrations.
- Integrate APIs with frontend components.
- Participate in designing microservices and modular architecture.
- Apply design patterns and object-oriented programming (OOP) concepts.
- Write unit and integration tests using JUnit, Mockito, Selenium, or Cypress (a sketch follows this listing).
- Debug and fix bugs across full-stack components.
- Use Git, Jenkins, Docker, and Kubernetes for version control, continuous integration, and deployment.
- Participate in code reviews, automation, and monitoring.
- Deploy applications on AWS, Azure, or Google Cloud; use Elastic Beanstalk, EC2, S3, or Cloud Run for backend hosting.
- Work in Agile/Scrum teams: attend daily stand-ups, sprints, and retrospectives, and deliver iterative enhancements.
- Document code, APIs, and configurations.
- Collaborate with QA, DevOps, Product Owners, and other stakeholders.

Must-Have Skills:
- Java programming: deep knowledge of the Java language, its ecosystem, and best practices.
- Frontend technologies: proficiency in HTML, CSS, JavaScript, and modern frontend frameworks like React or Angular.
- Backend development: expertise in developing and maintaining backend services using Java, Spring, and related technologies.
- Full-stack development: experience in both frontend and backend development, with the ability to work across the entire application stack.

Soft Skills:
- Problem-solving: ability to analyze complex problems and develop effective solutions.
- Communication: strong verbal and written skills to collaborate effectively with cross-functional teams.
- Analytical thinking: ability to think critically and analytically to solve technical challenges.
- Time management: capable of managing multiple tasks and deadlines in a fast-paced environment.
- Adaptability: ability to quickly learn and adapt to new technologies and methodologies.

Interview mode: face-to-face (F2F) for candidates residing in Hyderabad; Zoom for other states.
Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034.
Time: 2-4 pm.
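As a rough illustration of the integration-testing work this posting mentions on the Java side, here is a minimal sketch in Python (the language used for examples throughout this page); the base URL, the /orders endpoint, and the payload are hypothetical, not taken from the posting:

```python
import requests

BASE_URL = "http://localhost:8080/api"  # hypothetical local Spring Boot backend

def test_create_and_fetch_order():
    # Create a resource through the REST API the backend exposes.
    created = requests.post(f"{BASE_URL}/orders", json={"item": "book", "qty": 2})
    assert created.status_code == 201
    order_id = created.json()["id"]

    # Read it back and verify the round trip.
    fetched = requests.get(f"{BASE_URL}/orders/{order_id}")
    assert fetched.status_code == 200
    assert fetched.json()["qty"] == 2
```

Run with pytest against a locally running service; the same round-trip pattern applies whichever test framework the team standardizes on.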

Posted 22 hours ago

Apply

5.0 - 10.0 years

12 - 18 Lacs

Noida

Work from Office

Source: Naukri

AWS Certified Solutions Architect (with DevOps Expertise), Immediate Start

Role Summary:
iFactFind is seeking a highly skilled, AWS-certified Solutions Architect with strong DevOps expertise to modernise and scale our cloud infrastructure. You will play a pivotal role in architecting a secure, scalable, multi-region SaaS platform while actively supporting deployment, automation, and compliance initiatives. This is a senior-level role ideal for someone who thrives in a high-impact, collaborative, and fast-paced environment.

Key Responsibilities:
Architecture & Infrastructure Design:
- Audit and optimise our existing AWS infrastructure (EC2, RDS, S3, CloudFront, IAM, etc.).
- Architect and implement a secure, scalable, multi-region SaaS environment.
- Maintain clear separation between Dev, Staging, UAT, and Production environments.
- Design for high availability, disaster recovery, and fault tolerance.
Security & Compliance:
- Align infrastructure design with SOC 2, GDPR, and ISO 27001 standards.
- Implement best practices for identity, access, encryption, and monitoring.
- Act as Information Security Officer supporting compliance initiatives.
DevOps & Automation:
- Build and optimise CI/CD pipelines using AWS CodePipeline, CodeBuild, and Git.
- Implement Infrastructure as Code (Terraform or CloudFormation).
- Set up proactive monitoring and alerts using AWS CloudWatch, GuardDuty, and Cost Explorer (a sketch follows this listing).
- Automate deployments, rollback mechanisms, and disaster recovery.
Documentation & Collaboration:
- Maintain clear architectural diagrams and onboarding playbooks.
- Collaborate with development, QA, and product teams.
- Participate in sprint planning and Agile ceremonies.

Required Skills & Experience:
- AWS Certified Solutions Architect (Professional preferred).
- 5+ years of hands-on AWS experience in SaaS/cloud-native environments.
- Deep knowledge of VPC design, IAM, EC2, RDS/Aurora, S3, and CloudFront.
- Experience with ECS/EKS, Lambda, and API Gateway.
- Proficiency with CI/CD and Infrastructure as Code (Terraform, CloudFormation).
- Understanding of SOC 2, GDPR, and related compliance frameworks.
- Strong troubleshooting, optimisation, and documentation skills.

Nice to Have:
- Experience with AWS Config, Security Hub, and Inspector.
- Background in startup or high-growth SaaS environments.
- Exposure to cost optimisation tools and FinOps.

Why Join iFactFind?
- Shape the infrastructure of a mission-driven, fast-growing SaaS platform.
- Work with a high-trust, collaborative leadership team.
- Enjoy flexibility, ownership, and long-term engagement opportunities.
- Drive meaningful architectural decisions with real-world impact.

Important: only apply if you are AWS certified and available to start immediately or within the next two weeks.
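To give a concrete flavor of the proactive-monitoring item above, here is a minimal boto3 sketch that creates a CloudWatch CPU alarm; the region, alarm name, instance ID, and SNS topic ARN are placeholders, and a real setup would more likely define this in Terraform or CloudFormation:

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")  # placeholder region

# Alert when average EC2 CPU stays above 80% for two consecutive 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-example",  # hypothetical name
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],  # placeholder SNS topic
)
```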

Posted 1 day ago

Apply

2.0 - 7.0 years

5 - 8 Lacs

Chennai

Work from Office

Source: Naukri

SUMMARY
Job Role: AWS Cloud Security Architect
Location: Chennai
Experience: 2+ years

Job Description:
We are looking for an AWS Cloud Security Architect with at least 2 years of relevant experience. As a Cloud Platform Engineer, you will design, construct, test, and deploy cloud application solutions that integrate both cloud and non-cloud infrastructures. You will collaborate with cross-functional teams to ensure solutions meet performance and security standards, validate architecture through practical testing, and manage infrastructure environments to meet organizational demands.

Roles & Responsibilities:
- Work independently and become a subject matter expert.
- Actively participate in team discussions and provide solutions to work-related problems.
- Assist in developing best practices for cloud application deployment and management.
- Collaborate with stakeholders to gather requirements and translate them into technical specifications.

Professional & Technical Skills:
- Must-have: proficiency in Cloud Security Architecture.
- Good-to-have: experience with AWS CloudFormation, understanding of cloud service models (IaaS, PaaS, SaaS), experience with security frameworks and compliance standards in cloud environments, and familiarity with automation tools for infrastructure deployment and management.

Additional Information:
- Minimum 2 years of experience in Cloud Security Architecture required.
- This position is based at our Chennai office.
- 15 years of full-time education is required.
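Day-to-day cloud-security work of this kind often includes small audit scripts. Here is a minimal boto3 sketch that flags S3 buckets lacking a full public-access block (illustrative only; no pagination, and any API error is treated as "needs review"):

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        fully_blocked = all(cfg.values())
    except ClientError:
        # No public-access-block configuration set (or the call failed): flag it.
        fully_blocked = False
    if not fully_blocked:
        print(f"Review bucket: {name} (public access not fully blocked)")
```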

Posted 1 day ago

Apply

1.0 - 3.0 years

8 - 13 Lacs

Pune

Work from Office

Source: Naukri

Overview
We are seeking a DevOps Engineer to join the Critical Start Technologies Private Ltd. team, operating under the Critical Start umbrella, for our India operations. The ideal candidate brings 1-3 years of experience, a strong background in AWS and Terraform, and a passion for infrastructure as code. Candidates should be skilled at writing well-structured Terraform modules, proficient in AWS service provisioning, and familiar with best practices for managing IaaS and PaaS environments. Additional experience with Linux administration, GitHub Actions, container orchestration, and monitoring solutions such as CloudWatch or Prometheus is a plus. Your experience includes writing production code and structuring large projects using Terraform modules. You possess a deep understanding of provisioners and are well-versed in remote state management (a sketch of the usual state backend follows this listing). We value individuals who are proactive, detail-oriented, and passionate about infrastructure as code. Critical Start is committed to building an inclusive, equitable, and respectful workplace, and we welcome candidates from all backgrounds to apply.

Responsibilities
As a DevOps Engineer, you will play a key role in maintaining, evolving, and enhancing our existing Terraform-based infrastructure. You'll work across a diverse infrastructure stack to support the delivery of new projects and services to our customers. A core part of your responsibilities will be using Terraform to build modular, maintainable, and scalable infrastructure solutions. You will also take initiative in identifying opportunities to improve performance, focusing on responsiveness, availability, and scalability. Establishing effective monitoring and alerting systems will be essential, as will troubleshooting issues within distributed systems, including throughput, resource utilization, and configuration.

Our infrastructure stack includes the following components:
- Terraform: comprehensive infrastructure management.
- AWS Fargate: primary platform for hosting most of our applications and services, along with select EC2 instances for specific use cases.
- Monitoring and alerts: AWS CloudWatch, SNS, New Relic, and Sentry.io support effective monitoring and timely alerting.
- Storage and databases: S3, Postgres (RDS), Memcached, RabbitMQ, and AWS Elasticsearch Service handle our storage and data processing needs.
- Networking and security: VPC, Route 53, IAM, ALB/NLB, Security Groups, and Secrets Manager support a secure and resilient networking environment.
- CI/CD pipeline: built using EC2 Image Builder, CodeBuild, and GitHub to streamline software delivery and deployment.

Qualifications
Required:
- 1-3 years of professional experience in a DevOps, Site Reliability Engineering, or Systems Engineering role.
- Ability to work through ambiguity and uncertainty.
- Solid understanding of CI/CD pipelines, including their purpose and implementation, and hands-on experience setting them up in real-world environments.
- Experience provisioning with Terraform using modular approaches.
- Strong troubleshooting and problem-solving skills and a collaborative mindset.
- Bachelor's degree from a recognized institution, or equivalent practical experience that demonstrates your technical capabilities.
Preferred:
- Shell scripting experience is a strong plus.
- Strong knowledge of Linux/Unix systems.
- Familiarity with source control tools such as Git.
- Experience with observability tools such as CloudWatch, New Relic, or Sentry.io.
- Proficiency with Docker and practical experience running containers in AWS environments such as EC2 and Fargate.
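Since the posting stresses Terraform remote state management, here is a minimal boto3 sketch of provisioning the conventional S3-plus-DynamoDB state backend; the bucket name, table name, and region are placeholders:

```python
import boto3

region = "ap-south-1"  # placeholder region
s3 = boto3.client("s3", region_name=region)
ddb = boto3.client("dynamodb", region_name=region)

# Versioned S3 bucket to hold Terraform state files.
s3.create_bucket(
    Bucket="example-terraform-state",  # placeholder; bucket names are globally unique
    CreateBucketConfiguration={"LocationConstraint": region},
)
s3.put_bucket_versioning(
    Bucket="example-terraform-state",
    VersioningConfiguration={"Status": "Enabled"},
)

# DynamoDB table Terraform uses for state locking (hash key must be "LockID").
ddb.create_table(
    TableName="terraform-locks",
    AttributeDefinitions=[{"AttributeName": "LockID", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "LockID", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)
```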

Posted 1 day ago

Apply

3.0 - 5.0 years

13 - 21 Lacs

Pune

Work from Office

Source: Naukri

Overview
We are seeking a DevOps Engineer II to join the Critical Start Technologies Private Ltd. team, operating under the Critical Start umbrella, for our India operations. The ideal candidate brings 3-5 years of hands-on experience in cloud-native infrastructure, CI/CD automation, and Infrastructure as Code, with advanced skills in AWS and Terraform, a strong understanding of scalable systems, and a mindset geared toward security, resilience, and automation-first practices. The ideal candidate has worked in complex environments with microservices, container orchestration, and multi-account AWS structures, takes pride in building robust DevOps pipelines, and actively contributes to architectural and operational decisions. Experience leading small initiatives or mentoring junior engineers is a plus.

Responsibilities
As a DevOps Engineer II, you will be a technical contributor and enabler for scalable infrastructure delivery and automation practices. Your role involves:
- Owning and improving the infrastructure codebase: maintaining reusable and modular Terraform configurations, setting standards for code structure, and contributing to design documentation.
- Building and evolving CI/CD pipelines: designing resilient and secure build/deploy pipelines using GitHub Actions, AWS CodePipeline, or equivalent.
- Monitoring and observability: developing dashboards and proactive alerting with CloudWatch, Prometheus, or New Relic to ensure high availability and quick recovery.
- Infrastructure security and compliance: implementing IAM best practices, Secrets Manager, least-privilege policies, and periodic audits.
- Optimizing cloud spend and performance through rightsizing, auto-scaling, and cost monitoring strategies (a sketch follows this listing).
- Collaborating closely with development, QA, and security teams to support the full software delivery lifecycle from development through production.
- Participating in incident response and postmortem analysis.

Qualifications
Required:
- 3-5 years of professional experience in DevOps, SRE, or Cloud Engineering roles.
- Advanced Terraform experience, including custom module design, remote state management, and backend locking.
- Deep knowledge of AWS services (VPC, IAM, ECS/Fargate, EC2, RDS, ALB/NLB, S3, CloudWatch, Secrets Manager, etc.).
- Strong background in Linux systems administration, including networking and performance tuning.
- Proven expertise in Docker, ECS/EKS, and the secure image lifecycle.
- Strong scripting and automation skills using Bash, Python, or Go.
- Experience with GitOps, infrastructure promotion strategies, and artifact management.
- Familiarity with log aggregation and tracing (e.g., Fluentd, OpenTelemetry, Sentry).
- Exposure to infrastructure testing frameworks (e.g., Terratest, InSpec).
- Excellent communication and cross-functional collaboration skills.
- Bachelor's or Master's degree in Computer Science or a related field.
Preferred:
- Additional scripting experience is a strong plus.
- Knowledge of security and compliance frameworks like SOC 2, CIS Benchmarks, or ISO 27001.
- Experience working in regulated environments or with customer-facing infrastructure.
- Contributions to open-source infrastructure tools or Terraform modules.
- Exposure to Kubernetes or hybrid cloud platforms.
- Experience with IaC scanning tools like Checkov, tfsec, or Bridgecrew.
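Cost-optimization work like the item above usually starts from a spend breakdown. A minimal boto3 Cost Explorer sketch, with a placeholder billing month:

```python
import boto3

# Cost Explorer is served from the us-east-1 endpoint.
ce = boto3.client("ce", region_name="us-east-1")

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},  # placeholder month
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

# Print spend per AWS service, highest first.
groups = resp["ResultsByTime"][0]["Groups"]
for g in sorted(groups,
                key=lambda g: float(g["Metrics"]["UnblendedCost"]["Amount"]),
                reverse=True):
    print(g["Keys"][0], g["Metrics"]["UnblendedCost"]["Amount"])
```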

Posted 1 day ago

Apply

7.0 - 10.0 years

5 - 9 Lacs

Hyderabad

Work from Office

Source: Naukri

We are looking for a seasoned DevOps Engineer to join a 6-month full-time project based in Hyderabad. The ideal candidate will have 7-10 years of experience working with distributed systems and cloud infrastructure. In this role, you will contribute to new feature development, manage CI/CD pipelines, monitor systems, and implement scalable cloud-based solutions. You'll work closely with engineering, security, and compliance teams, leveraging tools like Docker, Kubernetes, Jenkins, and AWS services. Experience with automation tools such as Ansible or Terraform and scripting in Python is an added advantage. Strong communication, documentation, and problem-solving skills are essential. Education: Bachelor's degree in Computer Science, Information Systems, or a related field.
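As one small example of the monitoring side of this role, a minimal boto3 sketch that sweeps EC2 for instances failing their status checks (illustrative; a production version would paginate and alert rather than print):

```python
import boto3

ec2 = boto3.client("ec2")

# Only running instances are returned when IncludeAllInstances is False.
statuses = ec2.describe_instance_status(IncludeAllInstances=False)

for s in statuses["InstanceStatuses"]:
    healthy = (s["InstanceStatus"]["Status"] == "ok"
               and s["SystemStatus"]["Status"] == "ok")
    if not healthy:
        print(f"Unhealthy instance: {s['InstanceId']} "
              f"(instance={s['InstanceStatus']['Status']}, "
              f"system={s['SystemStatus']['Status']})")
```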

Posted 1 day ago

Apply

7.0 - 12.0 years

6 - 10 Lacs

Hyderabad

Work from Office

Source: Naukri

Armanino is proud to be among the top 20 largest firms in the United States of America and one of the Best Places to Work. Armanino (USA) has more than 2,500 employees across the USA and more than 20 offices in different states. We have a community of resources that are ready and willing to support your ideas, build your skills, and expand your professional network. We want you to integrate all aspects of your life with your career. At Armanino, we know you don't check out of life when you check in at work. That's why we've created a unique work environment where your passions, work, and family and friends can overlap. We want to help you achieve growth by giving you access to a network of smart and supportive people who are willing to listen to your ideas. This open position is for Armanino India LLP, which is located in India.

Responsibilities:
- Work with Program Managers, analysts, consultants, and the development team to identify and implement innovative solutions to business problems.
- Work on Microsoft Dynamics 365 F&O implementations in a development lead role, contributing both technical and functional experience.
- Customize Dynamics 365 for Finance and Supply Chain.
- Build packages and deploy Dynamics 365 F&O objects.
- Contribute to the estimating of engineering designs and implementations.
- Perform thorough and efficient code reviews.
- Resolve technical issues.
- Design and implement code solutions to increase system optimization, maintainability, scalability, security, testability, and stability.
- Follow standards and practices established by the Software Solutions team to ensure solutions address both functional and non-functional requirements in a consistent manner across the organization.
- Test and release software using established standards and practices, in collaboration with the Quality Assurance team.

Qualifications:
- BS/BA degree in IT, Business, or a related major, or equivalent work experience.
- Minimum 7 years of experience developing and implementing D365 F&O / AX.
- Experience working in management consulting or professional services.
- Experience and programming skills using X++, C#, .NET, and SQL.
- Familiarity with SOAP, REST, and OData.
- Experience with Logic Apps and Azure/cloud infrastructure.
- Experience developing SSRS reports.
- Experience working with Power Platform and Power BI.
- Previous experience with operational technologies in Supply Chain, Warehousing, Fulfillment, etc.
- Experience working with Finance, Trade and Logistics, Manufacturing, or Projects.
- Experience with client industries such as Agriculture, Manufacturing, or Construction.
- Microsoft accreditation in D365 Finance and Operations and related Microsoft technologies.
- Experience with Unified Development Environments (UDE).
- Experience with Git source control and Azure DevOps (ADO).

Compensation and Benefits:
- Compensation: commensurate with industry standards.
- Other benefits: Provident Fund, Gratuity, Medical Insurance, Group Personal Accident Insurance, and other employment benefits depending on the position.

"Armanino" is the brand name under which Armanino LLP, Armanino CPA

Armanino provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, sex, national origin, age, disability, or genetics. In addition to federal law requirements, Armanino complies with applicable state and local laws governing nondiscrimination in employment in every location in which the company has facilities. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation, and training. Armanino expressly prohibits any form of workplace harassment based on race, color, religion, gender, sexual orientation, gender identity or expression, national origin, age, genetic information, disability, or veteran status. Improper interference with the ability of Armanino employees to perform their job duties may result in discipline up to and including discharge.

Posted 1 day ago

Apply

0.0 - 1.0 years

10 - 14 Lacs

Pune

Work from Office

Source: Naukri

Role: AWS Cloud Engineer

We are looking for an AWS Cloud Engineer with a strong DevOps and scripting background who can support and optimize cloud infrastructure, automate deployments, and collaborate cross-functionally in a fast-paced fintech environment.

Core Responsibilities:
- Design and implement secure, scalable network and infrastructure solutions using AWS.
- Deploy applications using EC2, Lambda, Fargate, ECS, and ECR (a Lambda sketch follows this listing).
- Automate infrastructure using CloudFormation, scripting (Python/Bash), and the AWS SDK.
- Manage and optimize relational (PostgreSQL, SQL) and NoSQL (MongoDB) databases.
- Set up and maintain monitoring using Grafana, Prometheus, and AWS CloudWatch.
- Perform cost optimization across the AWS infrastructure and execute savings strategies.
- Maintain high availability, security, and disaster recovery (DR) mechanisms.
- Implement Kubernetes (EKS), containers, and CI/CD pipelines.
- Proactively monitor system performance and troubleshoot production issues.
- Coordinate across product and development teams for deployments and DevOps planning.

Must-Have Skills:
- 4+ years of hands-on experience with the AWS cloud platform.
- Strong proficiency in Linux/Unix systems, Nginx, Apache, and Tomcat.
- Proficiency in Python and Shell/Bash scripting for automation.
- Strong knowledge of SQL, PostgreSQL, and MongoDB.
- Familiarity with CloudFormation, IAM, VPC, S3, and ECS/EKS/ECR.
- Monitoring experience with Prometheus, Grafana, and CloudWatch.
- Previous exposure to AWS cost optimization strategies.
- Excellent communication skills, self-driven, and a proactive attitude.

Nice-to-Have Skills:
- Experience with Google Cloud Platform (GCP).
- Experience in container orchestration with Kubernetes (EKS preferred).
- Background in working with startups or fast-growing product environments.
- Knowledge of disaster recovery strategies and high-availability setups.

(ref: hirist.tech)
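Given the Lambda-centric deployment stack above, a minimal Python handler of the sort this role might ship; the bucket name and event shape are hypothetical:

```python
import json
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    """Minimal AWS Lambda handler: persist the incoming payload to S3."""
    key = f"events/{context.aws_request_id}.json"
    s3.put_object(
        Bucket="example-fintech-events",  # placeholder bucket
        Key=key,
        Body=json.dumps(event).encode("utf-8"),
    )
    return {"statusCode": 200, "body": json.dumps({"stored": key})}
```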

Posted 1 day ago

Apply

5.0 - 10.0 years

9 - 13 Lacs

Bengaluru

Work from Office

Source: Naukri

As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. Target in India operates as a fully integrated part of Target's global team and has more than 4,000 team members supporting the company's global strategy and operations.

Tech Overview: Every time a guest enters a Target store or browses Target.com, they experience the impact of Target's investments in technology and innovation. We're the technologists behind one of the most loved retail brands, delivering joy to millions of our guests, team members, and communities. Join our global in-house technology team of more than 4,000 engineers, data scientists, architects, coaches, and product managers striving to make Target the most convenient, safe, and joyful place to shop. We use agile practices and leverage open-source software to adapt and build best-in-class technology for our team members and guests, and we do so with a focus on diversity and inclusion, experimentation, and continuous learning.

Pyramid Overview: Target.com and Mobile translate the in-store experience our guests love to the digital environment. Our mobile engineers develop native apps like Cartwheel and Target's flagship app, which are high-impact, high-visibility assets that are game-changers for literally millions of guests. Here, you'll get to explore emerging retail and mobile technologies, playing a key role in revolutionary product launches with tech giants like Apple and Google. You'll be a visionary for the future of Target's app ecosystem. You'll have the advantage of Target's unmatched brand recognition and special marketplace foothold, making us the partner of choice for innovative technologies like indoor mapping, iBeacons, and Apple Pay. You'll help Target evolve by using the latest open-source tools and technologies and staying true to strong agile practices. You'll lend your passion for engineering technologies that fix problems and meet needs guests didn't even know they had. You'll work on autonomous teams and incorporate the newest technical practices. You'll have the chance to perform by writing rock-solid code that stands up to our massive scale. Plus, and perhaps best of all, you'll have the right balance of self-rule and accountability for how technical products perform.

Team Overview: We are dedicated to ensuring a seamless and efficient checkout experience for guests shopping on our digital channels, including web and mobile apps. Our team plays a crucial role in the overall shopping journey, focusing on the final and most critical steps of the purchase process. We are responsible for managing the entire checkout lifecycle, from the moment a guest adds an item to their cart to the final purchase confirmation. Our goal is to provide a smooth, secure, and user-friendly checkout process that enhances customer satisfaction and drives conversions. Our team is cross-geo located, with members driving different features and collaborating from both India and the US. This diverse setup allows us to leverage a wide range of expertise and perspectives, fostering innovative solutions and effective problem-solving. As part of the Digital Checkout team, you will have the opportunity to work with cutting-edge technologies and innovative solutions to continuously improve the checkout experience. Our collaborative and dynamic environment encourages creative problem-solving and the sharing of ideas to meet the evolving needs of our guests.

Position Overview:
- 5+ years of experience in software design and development, with 3+ years building scalable backend applications using Java.
- Demonstrates broad and deep expertise in Java/Kotlin and their frameworks.
- Designs, develops, and approves end-to-end functionality of a product line, platform, or infrastructure.
- Communicates and coordinates with the project team, partners, and stakeholders.
- Demonstrates expertise in analysis and optimization of systems capacity, performance, and operational health.
- Maintains deep technical knowledge within areas of expertise.
- Stays current with new and evolving technologies via formal training and self-directed education.
- Experience integrating with third-party and open-source frameworks.

About You:
- 4-year degree or equivalent experience.
- Experience: 4-7 years.
- Programming experience with Java (Spring Boot) and Kotlin (Micronaut).
- Strong problem-solving skills with a good understanding of data structures and algorithms.
- Must have exposure to non-relational databases like MongoDB (a sketch follows this listing).
- Must have exposure to distributed systems and microservice architecture.
- Good to have: exposure to data pipelines, MLOps, Spark, and Python.
- Demonstrates a solid understanding of the impact of own work on the team and/or guests.
- Writes and organizes code using multiple computer languages, including distributed programming, and understands different frameworks and paradigms.
- Delivers high-performance, scalable, repeatable, and secure deliverables with broad impact (high throughput and low latency).
- Influences and applies data standards, policies, and procedures.
- Maintains technical knowledge within areas of expertise.
- Stays current with new and evolving technologies via formal training and self-directed education.

Know more here:
Life at Target: https://india.target.com/
Benefits: https://india.target.com/life-at-target/workplace/benefits
Culture: https://india.target.com/life-at-target/belonging
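The posting itself targets Java/Kotlin, but the MongoDB document-store pattern it asks for exposure to looks like this minimal sketch, shown in Python to keep this page's examples in one language; the connection string, database, and collection names are placeholders:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder connection string
orders = client["checkout"]["orders"]              # hypothetical database/collection

# Insert a cart document, then query it back by guest id.
orders.insert_one({"guest_id": "g-123",
                   "items": [{"sku": "tee", "qty": 1}],
                   "status": "open"})
open_carts = list(orders.find({"guest_id": "g-123", "status": "open"}))
print(open_carts)
```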

Posted 1 day ago

Apply

10.0 - 12.0 years

11 - 16 Lacs

Pune

Work from Office

Source: Naukri

Project Manager, Oracle Fusion Applications

Job Summary:
We are seeking an experienced Project Manager to lead Oracle Fusion Applications (Finance, SCM, HCM), Fusion Tech, and Oracle EBS IT services projects. The ideal candidate will have a strong background in successfully executing end-to-end Oracle Fusion implementation projects and managing cross-functional teams to deliver projects on time, within budget, and with high quality.

Key Responsibilities:
- Lead the planning, execution, and delivery of Oracle Fusion Applications (Finance, SCM, HCM) and EBS, technology, and support projects.
- Manage the end-to-end project lifecycle from initiation to go-live and post-production support.
- Coordinate with stakeholders, business users, and functional and technical teams to ensure project goals are achieved.
- Develop and maintain project plans, resource allocation plans, risk management logs, and issue logs.
- Ensure adherence to project management best practices, governance standards, and customer satisfaction targets.
- Provide regular project status updates to senior management and clients.
- Identify and mitigate project risks proactively.
- Manage change requests and ensure scope, timeline, and costs are controlled.
- Guide teams in adopting Oracle Cloud implementation methodologies.
- Maintain high levels of team motivation and performance.

Required Qualifications:
- 10-12 years of overall experience in IT project management.
- Minimum 2 end-to-end Oracle Fusion Application (Finance, SCM, and/or HCM) implementation projects successfully delivered.
- Strong knowledge of Oracle Fusion Cloud modules and Oracle EBS modules.
- Project management certification (PMP, PRINCE2, or equivalent) is mandatory.
- Strong leadership, communication, and stakeholder management skills.
- Ability to work with cross-functional global teams in a dynamic environment.
- Hands-on experience with project management tools (e.g., MS Project, Jira, Smartsheet).

Preferred Skills:
- Experience managing multi-pillar Oracle Cloud projects (FIN + SCM + HCM).
- Familiarity with Agile and hybrid project methodologies.
- Previous consulting background is a plus.

Education: Bachelor's degree in Information Technology, Business Administration, Engineering, or a related field. MBA is a plus (not mandatory).

Posted 1 day ago

Apply

11.0 - 17.0 years

25 - 30 Lacs

Pune

Work from Office

Source: Naukri

Oracle Cloud Finance Architect, Presales & Solutioning

Seeking a highly experienced Oracle Finance Architect with a strong background in presales, solution design, and delivery of Oracle ERP Finance solutions. The ideal candidate will play a key role in supporting sales teams by providing expert-level finance solutioning to prospects, ensuring the best fit between Oracle solutions and client business needs. This role will also involve architecting comprehensive Oracle Finance solutions that cover the General Ledger, Accounts Payable, Accounts Receivable, Fixed Assets, Cash Management, and Costing modules, with integration across enterprise systems.

Posted 1 day ago

Apply

8.0 - 13.0 years

16 - 20 Lacs

Pune

Work from Office

Source: Naukri

AWS Solution Architect/DevOps

We are seeking a highly skilled AWS DevOps Engineer / Solution Architect with a strong background in designing and implementing data-driven and API-based solutions. The ideal candidate will have deep expertise in AWS architecture, a passion for creating scalable, secure, and high-performance systems, and the ability to align technology solutions with business goals.

Key Responsibilities:
- Design and architect solutions: develop and architect scalable, secure, and efficient cloud-based solutions on AWS for data and API projects.
- Infrastructure as code: implement infrastructure automation using tools such as Terraform, CloudFormation, or AWS CDK.
- API development and integration: architect and implement RESTful APIs, ensuring high availability, scalability, and security using AWS API Gateway, Lambda, and related services.
- Data solutions: design and optimize data pipelines, data lakes, and storage solutions using AWS services like S3, Redshift, RDS, and DynamoDB (a sketch follows this listing).
- CI/CD pipelines: build, manage, and optimize CI/CD pipelines to automate deployments, testing, and infrastructure provisioning (Jenkins, CodePipeline, etc.).
- Monitoring and optimization: ensure robust monitoring, logging, and alerting mechanisms are in place using tools like CloudWatch, Prometheus, and Grafana.
- Collaboration and best practices: work closely with cross-functional teams (development, data engineering, security) to implement DevOps best practices and deliver innovative cloud solutions.
- Security and compliance: implement AWS security best practices, including IAM, encryption, VPC, and security monitoring, to ensure solutions meet security and compliance standards.
- Cost optimization: continuously optimize AWS environments for performance, scalability, and cost-effectiveness.

Qualifications:
- 8+ years of experience in AWS cloud architecture, with a focus on data and API solutions.
- Expertise in AWS core services such as EC2, S3, Lambda, API Gateway, RDS, DynamoDB, Redshift, and CloudFormation.
- Hands-on experience with infrastructure-as-code (IaC) tools like Terraform, AWS CDK, or CloudFormation.
- Proficiency in API design and development, particularly RESTful APIs and serverless architectures.
- Strong understanding of CI/CD pipelines, version control (Git), and automation tools.
- Knowledge of networking, security best practices, and the AWS Well-Architected Framework.
- Experience with containerization technologies such as Docker and orchestration tools like Kubernetes or AWS ECS/EKS.
- Excellent problem-solving skills and the ability to work independently and in a team environment.
- AWS certifications such as AWS Certified Solutions Architect (Associate/Professional) or AWS Certified DevOps Engineer are highly preferred.
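As one concrete slice of the data-and-API stack above, a minimal boto3 DynamoDB read/write sketch; the table name and key schema are hypothetical:

```python
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("example-api-items")  # placeholder table with partition key "pk"

# Write an item, then read it back with a strongly consistent get.
table.put_item(Item={"pk": "user#42", "name": "Asha", "plan": "pro"})
resp = table.get_item(Key={"pk": "user#42"}, ConsistentRead=True)
print(resp.get("Item"))
```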

Posted 1 day ago

Apply

5.0 - 10.0 years

9 - 13 Lacs

Hyderabad, Pune

Work from Office

Source: Naukri

Senior Consultant

Salesforce developer with a minimum of 5 years of experience.
- Candidate must have Salesforce Admin and Developer certifications.
- Candidate must have hands-on Apex coding experience.
- Candidate should have good analytical skills and debugging/bug-fix experience.
- Candidate must have hands-on development experience with Lightning.
- Candidate must have Sales and Service Cloud module experience.
- Candidate must have code deployment experience through VSS and Jenkins.
- Candidate must have good communication skills to interact with the customer.
- Candidate should have Agile development experience.

Posted 1 day ago

Apply

5.0 - 10.0 years

7 - 11 Lacs

Hyderabad, Pune

Work from Office

Source: Naukri

SQL Server DBA

Key Responsibilities:
Database Management:
- Manage, configure, and maintain SQL Server databases in high-availability (Active-Active) environments.
- Ensure optimal database performance, tuning queries and optimizing resources.
- Monitor and resolve database performance issues, minimizing downtime.
Cluster Setup and Management:
- Design, implement, and support SQL Server AlwaysOn availability groups and failover clustering.
- Ensure high availability and disaster recovery across multi-node SQL Server clusters.
- Monitor and manage cluster health, replication, and failover processes (a health-check sketch follows this listing).
On-Premises to AWS Migration:
- Lead and execute migration of on-premises SQL Server databases to the AWS cloud, including RDS and EC2 setups.
- Plan, architect, and execute database migration strategies with minimal downtime and risk.
- Configure databases in AWS environments, ensuring scalability, performance, and security.
Backup, Recovery, and Disaster Recovery:
- Implement and manage backup and recovery strategies to safeguard critical data.
- Test and maintain disaster recovery plans to ensure database availability during unforeseen events.
- Manage database replication between on-premises and cloud infrastructure.
Security and Compliance:
- Ensure databases adhere to security and compliance policies, implementing data encryption, access control, and auditing.
- Regularly conduct security audits and apply patches and updates to mitigate vulnerabilities.
- Collaborate with security teams to meet industry compliance standards.
Support and Troubleshooting:
- Provide 24/7 support for production databases, ensuring rapid resolution of any issues.
- Troubleshoot and resolve complex database issues related to performance, availability, and security.
- Document procedures, guidelines, and best practices for maintaining and optimizing SQL Server environments.
Automation and Scripting:
- Automate routine database tasks, including backups, monitoring, and performance tuning, using tools such as PowerShell and T-SQL.
- Continuously improve efficiency and reduce manual intervention in database operations.
Collaboration:
- Work closely with development, infrastructure, and operations teams to ensure smooth database operations across all environments.
- Provide guidance and best practices for database design, development, and optimization.

Required Skills and Qualifications:
- 5+ years of experience as a SQL Server DBA in enterprise environments.
- Strong knowledge of SQL Server clustering, AlwaysOn availability groups, and Active-Active setups.
- Experience with on-premises-to-AWS migration for SQL Server databases.
- Expertise in AWS RDS, EC2, and other AWS database services.
- Proven experience in performance tuning, query optimization, and database maintenance.
- Strong understanding of backup, recovery, and disaster recovery strategies.
- Proficiency in T-SQL, PowerShell, and other automation scripting languages.
- Excellent troubleshooting, problem-solving, and communication skills.
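For the AlwaysOn cluster-health monitoring mentioned above, a DBA might script checks against the standard availability-group DMVs. A minimal Python sketch using pyodbc; the connection string is a placeholder:

```python
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=listener.example.local;"
    "DATABASE=master;Trusted_Connection=yes"  # placeholder connection string
)

# Replica-level health from the AlwaysOn dynamic management views.
rows = conn.execute("""
    SELECT ar.replica_server_name, rs.role_desc, rs.synchronization_health_desc
    FROM sys.dm_hadr_availability_replica_states AS rs
    JOIN sys.availability_replicas AS ar
      ON rs.replica_id = ar.replica_id
""").fetchall()

for server, role, health in rows:
    print(server, role, health)
```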

Posted 1 day ago

Apply

3.0 - 6.0 years

8 - 12 Lacs

Hyderabad, Pune

Work from Office

Source: Naukri

Reporting Lead (Watlow)

Experience: 8 to 15 years.

Must have:
- Worked on all types of projects (implementation, support) across ERP: EBS and Fusion Cloud.
- Guided/mentored a team of at least 10.
- Team player with good communication skills.
- Able to gather requirements from the business.
- Complete understanding of the project life cycle.

Skills:
- Good hands-on experience with Oracle Fusion modules.
- BI: OTBI reports and different templates (RTF, Excel, PDF, etc.).
- SQL and Fusion tables.
- Good understanding of REST and SOAP APIs, personalization, and ESS jobs.
- Fusion techno-functional profile.

Posted 1 day ago

Apply

8.0 - 12.0 years

18 - 22 Lacs

Pune

Work from Office

Source: Naukri

Salesforce Health Cloud Architect

Roles and Responsibilities:
- Solution design: design and recommend best-practice solutions based on client business needs.
- Configure Salesforce Health Cloud, create and refine complex data models, and implement business process automation.
- Provide pre-sales support, including effort estimates and staffing decisions for proposed solutions.
- Requirements gathering: lead discovery and requirements refinement sessions to uncover business, functional, and technological requirements.
- Help innovate within the Salesforce platform, including the conception and design of innovative accelerators.
- Stay updated on new Salesforce product capabilities resulting from releases and acquisitions.

Minimum Qualifications:
- Active Salesforce Health Cloud accreditation and the ability to achieve additional relevant certifications upon hire.
- MBA or BE degree in Computer Science or a healthcare-related specialization.
- 8-12 years of professional experience.
- Understanding of Salesforce Health Cloud and industry processes, with experience in estimation and solution design.

Posted 1 day ago

Apply

2.0 - 7.0 years

13 - 17 Lacs

Chennai

Work from Office

Source: Naukri

Job Area: Engineering Group > Software Engineering

General Summary: As a leading technology innovator, Qualcomm pushes the boundaries of what's possible to enable next-generation experiences and drives digital transformation to help create a smarter, connected future for all. As a Qualcomm Software Engineer, you will design, develop, create, modify, and validate embedded and cloud edge software, applications, and/or specialized utility programs that launch cutting-edge, world-class products that meet and exceed customer needs. Qualcomm Software Engineers collaborate with systems, hardware, architecture, test engineers, and other teams to design system-level software solutions and obtain information on performance requirements and interfaces.

Minimum Qualifications:
- Bachelor's degree in Engineering, Information Systems, Computer Science, or a related field and 2+ years of software engineering or related work experience; OR Master's degree in a related field and 1+ year of such experience; OR PhD in a related field.
- 2+ years of academic or work experience with a programming language such as C, C++, Java, or Python.

Job Title: MLOps Engineer - ML Platform
Hiring Title: flexible based on candidate experience; Staff Engineer preferred.

We are seeking a highly skilled and experienced MLOps Engineer to join our team and contribute to the development and maintenance of our ML platform, both on premises and in the AWS cloud. As an MLOps Engineer, you will be responsible for architecting, deploying, and optimizing the ML and data platform that supports training of machine learning models using NVIDIA DGX clusters and the Kubernetes platform, including technologies like Helm, ArgoCD, Argo Workflows, Prometheus, and Grafana. Your expertise in AWS services such as EKS, EC2, VPC, IAM, S3, and EFS will be crucial in ensuring the smooth operation and scalability of our ML infrastructure. You will work closely with cross-functional teams, including data scientists, software engineers, and infrastructure specialists. Your expertise in MLOps and DevOps and your knowledge of GPU clusters will be vital in enabling efficient training and deployment of ML models.

Responsibilities will include:
- Architect, develop, and maintain the ML platform to support training and inference of ML models.
- Design and implement scalable and reliable infrastructure solutions for NVIDIA clusters, both on premises and in the AWS cloud.
- Collaborate with data scientists and software engineers to define requirements and ensure seamless integration of ML and data workflows into the platform.
- Optimize the platform's performance and scalability, considering factors such as GPU resource utilization, data ingestion, model training, and deployment.
- Monitor and troubleshoot system performance, identifying and resolving issues to ensure the availability and reliability of the ML platform (a monitoring sketch follows this listing).
- Implement and maintain CI/CD pipelines for automated model training, evaluation, and deployment using technologies like ArgoCD and Argo Workflows.
- Implement and maintain a monitoring stack using Prometheus and Grafana to ensure the health and performance of the platform.
- Manage AWS services including EKS, EC2, VPC, IAM, S3, and EFS to support the platform.
- Implement logging and monitoring solutions using AWS CloudWatch and other relevant tools.
- Stay updated on the latest advancements in MLOps, distributed computing, and GPU acceleration technologies, and proactively propose improvements to enhance the ML platform.

What we are looking for:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Proven experience as an MLOps Engineer or in a similar role, with a focus on large-scale ML and/or data infrastructure and GPU clusters.
- Strong expertise in configuring and optimizing NVIDIA DGX clusters for deep learning workloads.
- Proficiency with the Kubernetes platform, including technologies like Helm, ArgoCD, Argo Workflows, Prometheus, and Grafana.
- Solid programming skills in languages like Python and Go, and experience with relevant ML frameworks (e.g., TensorFlow, PyTorch).
- In-depth understanding of distributed computing, parallel computing, and GPU acceleration techniques.
- Familiarity with containerization technologies such as Docker and orchestration tools.
- Experience with CI/CD pipelines and automation tools for ML workflows (e.g., Jenkins, GitHub, ArgoCD).
- Experience with AWS services such as EKS, EC2, VPC, IAM, S3, and EFS, plus AWS logging and monitoring tools.
- Strong problem-solving skills and the ability to troubleshoot complex technical issues.
- Excellent communication and collaboration skills to work effectively within a cross-functional team.

We would love to see:
- Experience with training and deploying models.
- Knowledge of ML model optimization techniques and memory management on GPUs.
- Familiarity with ML-specific data storage and retrieval systems.
- Understanding of security and compliance requirements in ML infrastructure.
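Given the Prometheus/Grafana stack this posting names, platform health checks can be scripted against Prometheus' HTTP query API. A minimal sketch; the Prometheus URL is a placeholder, and the DCGM_FI_DEV_GPU_UTIL metric assumes a typical NVIDIA DCGM exporter setup, so treat both as assumptions:

```python
import requests

PROM_URL = "http://prometheus.example.internal:9090"  # placeholder endpoint

# DCGM_FI_DEV_GPU_UTIL is the per-GPU utilization gauge exposed by
# NVIDIA's DCGM exporter in common DGX/Kubernetes monitoring setups.
resp = requests.get(
    f"{PROM_URL}/api/v1/query",
    params={"query": "avg by (Hostname) (DCGM_FI_DEV_GPU_UTIL)"},
    timeout=10,
)
resp.raise_for_status()

for result in resp.json()["data"]["result"]:
    host = result["metric"].get("Hostname", "unknown")
    value = float(result["value"][1])
    print(f"{host}: {value:.1f}% average GPU utilization")
```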

Posted 1 day ago

Apply

1.0 - 5.0 years

12 - 16 Lacs

Chennai

Work from Office

Source: Naukri

Job Area: Engineering Group > Software Engineering

General Summary: As a leading technology innovator, Qualcomm pushes the boundaries of what's possible to enable next-generation experiences and drives digital transformation to help create a smarter, connected future for all. As a Qualcomm Software Engineer, you will design, develop, create, modify, and validate embedded and cloud edge software, applications, and/or specialized utility programs that launch cutting-edge, world-class products that meet and exceed customer needs. Qualcomm Software Engineers collaborate with systems, hardware, architecture, test engineers, and other teams to design system-level software solutions and obtain information on performance requirements and interfaces.

Minimum Qualifications:
- Bachelor's degree in Engineering, Information Systems, Computer Science, or a related field.

Job Title: MLOps Engineer - ML Platform
Hiring Title: flexible based on candidate experience; Staff Engineer preferred.

We are seeking a highly skilled and experienced MLOps Engineer to join our team and contribute to the development and maintenance of our ML platform, both on premises and in the AWS cloud. As an MLOps Engineer, you will be responsible for architecting, deploying, and optimizing the ML and data platform that supports training of machine learning models using NVIDIA DGX clusters and the Kubernetes platform, including technologies like Helm, ArgoCD, Argo Workflows, Prometheus, and Grafana. Your expertise in AWS services such as EKS, EC2, VPC, IAM, S3, and EFS will be crucial in ensuring the smooth operation and scalability of our ML infrastructure. You will work closely with cross-functional teams, including data scientists, software engineers, and infrastructure specialists. Your expertise in MLOps and DevOps and your knowledge of GPU clusters will be vital in enabling efficient training and deployment of ML models.

Responsibilities will include:
- Architect, develop, and maintain the ML platform to support training and inference of ML models.
- Design and implement scalable and reliable infrastructure solutions for NVIDIA clusters, both on premises and in the AWS cloud.
- Collaborate with data scientists and software engineers to define requirements and ensure seamless integration of ML and data workflows into the platform.
- Optimize the platform's performance and scalability, considering factors such as GPU resource utilization, data ingestion, model training, and deployment.
- Monitor and troubleshoot system performance, identifying and resolving issues to ensure the availability and reliability of the ML platform.
- Implement and maintain CI/CD pipelines for automated model training, evaluation, and deployment using technologies like ArgoCD and Argo Workflows.
- Implement and maintain a monitoring stack using Prometheus and Grafana to ensure the health and performance of the platform.
- Manage AWS services including EKS, EC2, VPC, IAM, S3, and EFS to support the platform.
- Implement logging and monitoring solutions using AWS CloudWatch and other relevant tools.
- Stay updated on the latest advancements in MLOps, distributed computing, and GPU acceleration technologies, and proactively propose improvements to enhance the ML platform.

What we are looking for:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Proven experience as an MLOps Engineer or in a similar role, with a focus on large-scale ML and/or data infrastructure and GPU clusters.
- Strong expertise in configuring and optimizing NVIDIA DGX clusters for deep learning workloads.
- Proficiency with the Kubernetes platform, including technologies like Helm, ArgoCD, Argo Workflows, Prometheus, and Grafana.
- Solid programming skills in languages like Python and Go, and experience with relevant ML frameworks (e.g., TensorFlow, PyTorch).
- In-depth understanding of distributed computing, parallel computing, and GPU acceleration techniques.
- Familiarity with containerization technologies such as Docker and orchestration tools.
- Experience with CI/CD pipelines and automation tools for ML workflows (e.g., Jenkins, GitHub, ArgoCD).
- Experience with AWS services such as EKS, EC2, VPC, IAM, S3, and EFS, plus AWS logging and monitoring tools.
- Strong problem-solving skills and the ability to troubleshoot complex technical issues.
- Excellent communication and collaboration skills to work effectively within a cross-functional team.

We would love to see:
- Experience with training and deploying models.
- Knowledge of ML model optimization techniques and memory management on GPUs.
- Familiarity with ML-specific data storage and retrieval systems.
- Understanding of security and compliance requirements in ML infrastructure.

Posted 1 day ago

Apply

4.0 - 9.0 years

12 - 17 Lacs

Chennai

Work from Office

Source: Naukri

Job Area: Engineering Group > Software Engineering

General Summary: As a leading technology innovator, Qualcomm pushes the boundaries of what's possible to enable next-generation experiences and drives digital transformation to help create a smarter, connected future for all. As a Qualcomm Software Engineer, you will design, develop, create, modify, and validate embedded and cloud edge software, applications, and/or specialized utility programs that launch cutting-edge, world-class products that meet and exceed customer needs. Qualcomm Software Engineers collaborate with systems, hardware, architecture, test engineers, and other teams to design system-level software solutions and obtain information on performance requirements and interfaces.

Minimum Qualifications:
- Bachelor's degree in Engineering, Information Systems, Computer Science, or a related field and 4+ years of software engineering or related work experience; OR Master's degree in a related field and 3+ years of such experience; OR PhD in a related field and 2+ years of such experience.
- 2+ years of work experience with a programming language such as C, C++, Java, or Python.

Job Title: MLOps Engineer - ML Platform
Hiring Title: flexible based on candidate experience; Staff Engineer preferred.

We are seeking a highly skilled and experienced MLOps Engineer to join our team and contribute to the development and maintenance of our ML platform, both on premises and in the AWS cloud. As an MLOps Engineer, you will be responsible for architecting, deploying, and optimizing the ML and data platform that supports training of machine learning models using NVIDIA DGX clusters and the Kubernetes platform, including technologies like Helm, ArgoCD, Argo Workflows, Prometheus, and Grafana. Your expertise in AWS services such as EKS, EC2, VPC, IAM, S3, and EFS will be crucial in ensuring the smooth operation and scalability of our ML infrastructure. You will work closely with cross-functional teams, including data scientists, software engineers, and infrastructure specialists. Your expertise in MLOps and DevOps and your knowledge of GPU clusters will be vital in enabling efficient training and deployment of ML models.

Responsibilities will include:
- Architect, develop, and maintain the ML platform to support training and inference of ML models.
- Design and implement scalable and reliable infrastructure solutions for NVIDIA clusters, both on premises and in the AWS cloud.
- Collaborate with data scientists and software engineers to define requirements and ensure seamless integration of ML and data workflows into the platform.
- Optimize the platform's performance and scalability, considering factors such as GPU resource utilization, data ingestion, model training, and deployment.
- Monitor and troubleshoot system performance, identifying and resolving issues to ensure the availability and reliability of the ML platform.
- Implement and maintain CI/CD pipelines for automated model training, evaluation, and deployment using technologies like ArgoCD and Argo Workflows.
- Implement and maintain a monitoring stack using Prometheus and Grafana to ensure the health and performance of the platform.
- Manage AWS services including EKS, EC2, VPC, IAM, S3, and EFS to support the platform.
- Implement logging and monitoring solutions using AWS CloudWatch and other relevant tools.
- Stay updated on the latest advancements in MLOps, distributed computing, and GPU acceleration technologies, and proactively propose improvements to enhance the ML platform.

What we are looking for:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Proven experience as an MLOps Engineer or in a similar role, with a focus on large-scale ML and/or data infrastructure and GPU clusters.
- Strong expertise in configuring and optimizing NVIDIA DGX clusters for deep learning workloads.
- Proficiency with the Kubernetes platform, including technologies like Helm, ArgoCD, Argo Workflows, Prometheus, and Grafana.
- Solid programming skills in languages like Python and Go, and experience with relevant ML frameworks (e.g., TensorFlow, PyTorch).
- In-depth understanding of distributed computing, parallel computing, and GPU acceleration techniques.
- Familiarity with containerization technologies such as Docker and orchestration tools.
- Experience with CI/CD pipelines and automation tools for ML workflows (e.g., Jenkins, GitHub, ArgoCD).
- Experience with AWS services such as EKS, EC2, VPC, IAM, S3, and EFS, plus AWS logging and monitoring tools.
- Strong problem-solving skills and the ability to troubleshoot complex technical issues.
- Excellent communication and collaboration skills to work effectively within a cross-functional team.

We would love to see:
- Experience with training and deploying models.
- Knowledge of ML model optimization techniques and memory management on GPUs.
- Familiarity with ML-specific data storage and retrieval systems.
- Understanding of security and compliance requirements in ML infrastructure.

Posted 1 day ago

Apply

3.0 - 8.0 years

14 - 20 Lacs

Hyderabad

Work from Office

Source: Naukri

Job Area: Information Technology Group > IT Software Developer

General Summary: The Qualcomm OneIT team is looking for a talented senior full-stack developer to join our dynamic team and contribute to our exciting projects. The ideal candidate will have a strong understanding of Java, Spring Boot, Angular/React, and AWS technologies, as well as experience in designing, managing, and deploying applications to the cloud.

Key Responsibilities:
- Design, develop, and maintain web applications using Java, Spring Boot, and Angular/React.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Write clean, maintainable, and efficient code.
- Ensure the performance, quality, and responsiveness of applications.
- Identify and correct bottlenecks and fix bugs.
- Help maintain code quality, organization, and automation.
- Stay up to date with the latest industry trends and technologies.

Minimum Qualifications:
- 3+ years of IT-relevant work experience with a Bachelor's degree in a technical field (e.g., Computer Engineering, Computer Science, Information Systems), OR 5+ years of IT-relevant work experience without a Bachelor's degree.
- 3+ years of any combination of academic or work experience with full-stack application development (e.g., Java, Python, JavaScript).
- 1+ year of any combination of academic or work experience with data structures, algorithms, and data stores.

The candidate should have:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 5-7 years of experience, with a minimum of 3 years as a full-stack developer using Java, Spring Boot, and Angular/React.
- Strong proficiency in Java and Spring Boot.
- Experience with front-end frameworks such as Angular or React.
- Familiarity with RESTful APIs and web services.
- Knowledge of database systems like Oracle, MySQL, PostgreSQL, or MongoDB.
- Experience with AWS services such as EC2, S3, RDS, Lambda, and API Gateway.
- Understanding of version control systems, preferably Git.
- Excellent problem-solving skills and attention to detail.
- Strong communication and teamwork abilities.
- Experience with any other programming language, such as C# or Python.
- Knowledge of containerization technologies like Docker and Kubernetes.
- Familiarity with CI/CD pipelines and DevOps practices.
- Experience with Agile/Scrum/SAFe methodologies.
- Bachelor's or Master's degree in information technology, computer science, or equivalent.

Posted 1 day ago

Apply

6.0 - 10.0 years

19 - 27 Lacs

Bengaluru

Work from Office

Naukri logo

Strong Python, Flask, REST API, and NoSQL skills. AWS Developer Associate certification is required. Architect, build, and maintain secure, scalable backend services on AWS platforms. Utilize core AWS services and serverless technologies. Provident fund.

Posted 1 day ago

Apply

1.0 - 4.0 years

11 - 15 Lacs

Mumbai

Work from Office

Naukri logo

Job Title: Entry Level – Data Engineer/SDE

About Kotak Mahindra Group
Established in 1985, the Kotak Mahindra Group is one of India's leading financial services conglomerates. In February 2003, Kotak Mahindra Finance Ltd (KMFL), the group's flagship company, received a banking license from the Reserve Bank of India (RBI). With this, KMFL became the first non-banking finance company in India to become a bank – Kotak Mahindra Bank Limited. The consolidated balance sheet of Kotak Mahindra Group is over ₹1 lakh crore, and the consolidated net worth of the Group stands at ₹13,943 crore (approx. US$2.6 billion) as on September 30, 2012. The Group offers a wide range of financial services that encompass every sphere of life: from commercial banking to stock broking, mutual funds, life insurance, and investment banking, the Group caters to the diverse financial needs of individuals and the corporate sector. The Group has a wide distribution network through branches and franchisees across India, and international offices in London, New York, California, Dubai, Abu Dhabi, Bahrain, Mauritius, and Singapore. For more information, please visit the company's website at http://www.kotak.com

What we offer
Our mission is simple – building trust. Our customers' trust in us is not merely about the safety of their assets but also about how dependable our digital offerings are. That's why we at Kotak Group are dedicated to transforming banking with a technology-first approach in everything we do, with the aim of enhancing customer experience by providing superior banking services. We welcome and invite the best technological minds in the country to join us in our mission to make banking seamless and swift. Here, we promise you meaningful work that positively impacts the lives of many.

About our team
DEX (Kotak's Data Exchange) is the central data org for Kotak Bank and manages the bank's entire data experience. The org comprises the Data Platform, Data Engineering, and Data Governance charters, and works closely with the Analytics org. DEX is primarily working on a greenfield project to revamp the entire data platform from on-premise solutions to a scalable AWS cloud-based platform. The team is being built from the ground up, which provides a great opportunity to build things from scratch and deliver a best-in-class data lakehouse solution. The primary skills for this team are software development (preferably Python) for platform building on AWS; data engineering with Spark (PySpark, Spark, Scala) for ETL development; and advanced SQL and data modelling for analytics. The org is expected to grow to a 100+ member team, primarily based out of Bangalore, comprising ~10 sub-teams independently driving their charters. As a member of this team, you get the opportunity to learn the fintech space, one of the most sought-after domains today, be an early member of Kotak's digital transformation journey, and leverage technology to build complex data platform solutions programmatically, including real-time, micro-batch, batch, and analytics solutions, while looking ahead to systems that can be operated by machines using AI technologies.

The data platform org is divided into 3 key verticals:

Data Platform
This vertical is responsible for building the data platform, which includes optimized storage for the entire bank and a centralized data lake, managed compute and orchestration frameworks (including serverless data solutions), a central data warehouse for extremely high-concurrency use cases, connectors for different sources, a customer feature repository, cost optimization solutions such as EMR optimizers, automation, and observability capabilities for Kotak's data platform. The team will also be the center of Data Engineering excellence, driving trainings and knowledge-sharing sessions with the large data consumer base within Kotak.

Data Engineering
This team will own data pipelines for thousands of datasets, source data from 100+ source systems, and enable data consumption for 30+ data analytics products. The team will build data models in a config-based, programmatic way and think big to build one of the most leveraged data models among financial orgs. This team will also enable centralized reporting for Kotak Bank that cuts across multiple products and dimensions. Additionally, the data built by this team will be consumed by 20K+ branch consumers, RMs, and Branch Managers, and by all analytics use cases.

Data Governance
This team will be the central data governance team for Kotak Bank, building and managing metadata platforms and the Data Privacy, Data Security, Data Stewardship, and Data Quality platforms. If you have the right data skills and are ready to build high-concurrency systems involving multiple systems from scratch, then this is the team for you.

Responsibilities:
Develop and maintain scalable data pipelines and databases.
Assist in collecting, cleaning, and transforming data from various sources.
Work with AWS services like EC2, EKS, EMR, S3, Glue, Redshift, and MWAA.
Explore and learn the latest AWS technologies to enhance capabilities and efficiency.
Work with data scientists and analysts to provide the data needed for analysis.
Ensure data quality and integrity throughout the data lifecycle.
Participate in code reviews and contribute to a collaborative, high-performing team environment.
Improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers.
Create state-of-the-art software solutions that are durable and reusable across multiple teams.

Qualifications:
Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field.
Proficiency in SQL and experience with at least one programming language (Python, Scala, Java, etc.).
Familiarity with data warehousing concepts and ETL processes.
Understanding of big data technologies like Hadoop, Spark, etc., is a plus.
Strong analytical and problem-solving skills.
Excellent communication and teamwork abilities.

Good to Have:
Experience with cloud platforms (AWS, Azure, Google Cloud).
Familiarity with data visualization tools (Tableau, Power BI).

Personal Attributes:
Strong written and verbal communication skills.
Self-starter who requires minimal oversight.
Ability to prioritize and manage multiple tasks.

Posted 1 day ago

Apply

1.0 - 4.0 years

11 - 15 Lacs

Bengaluru

Work from Office

Naukri logo

Job Title: Entry Level – Data Engineer/SDE

About Kotak Mahindra Group
Established in 1985, the Kotak Mahindra Group is one of India's leading financial services conglomerates. In February 2003, Kotak Mahindra Finance Ltd (KMFL), the group's flagship company, received a banking license from the Reserve Bank of India (RBI). With this, KMFL became the first non-banking finance company in India to become a bank – Kotak Mahindra Bank Limited. The consolidated balance sheet of Kotak Mahindra Group is over ₹1 lakh crore, and the consolidated net worth of the Group stands at ₹13,943 crore (approx. US$2.6 billion) as on September 30, 2012. The Group offers a wide range of financial services that encompass every sphere of life: from commercial banking to stock broking, mutual funds, life insurance, and investment banking, the Group caters to the diverse financial needs of individuals and the corporate sector. The Group has a wide distribution network through branches and franchisees across India, and international offices in London, New York, California, Dubai, Abu Dhabi, Bahrain, Mauritius, and Singapore. For more information, please visit the company's website at http://www.kotak.com

What we offer
Our mission is simple – building trust. Our customers' trust in us is not merely about the safety of their assets but also about how dependable our digital offerings are. That's why we at Kotak Group are dedicated to transforming banking with a technology-first approach in everything we do, with the aim of enhancing customer experience by providing superior banking services. We welcome and invite the best technological minds in the country to join us in our mission to make banking seamless and swift. Here, we promise you meaningful work that positively impacts the lives of many.

About our team
DEX (Kotak's Data Exchange) is the central data org for Kotak Bank and manages the bank's entire data experience. The org comprises the Data Platform, Data Engineering, and Data Governance charters, and works closely with the Analytics org. DEX is primarily working on a greenfield project to revamp the entire data platform from on-premise solutions to a scalable AWS cloud-based platform. The team is being built from the ground up, which provides a great opportunity to build things from scratch and deliver a best-in-class data lakehouse solution. The primary skills for this team are software development (preferably Python) for platform building on AWS; data engineering with Spark (PySpark, Spark, Scala) for ETL development; and advanced SQL and data modelling for analytics. The org is expected to grow to a 100+ member team, primarily based out of Bangalore, comprising ~10 sub-teams independently driving their charters. As a member of this team, you get the opportunity to learn the fintech space, one of the most sought-after domains today, be an early member of Kotak's digital transformation journey, and leverage technology to build complex data platform solutions programmatically, including real-time, micro-batch, batch, and analytics solutions, while looking ahead to systems that can be operated by machines using AI technologies.

The data platform org is divided into 3 key verticals:

Data Platform
This vertical is responsible for building the data platform, which includes optimized storage for the entire bank and a centralized data lake, managed compute and orchestration frameworks (including serverless data solutions), a central data warehouse for extremely high-concurrency use cases, connectors for different sources, a customer feature repository, cost optimization solutions such as EMR optimizers, automation, and observability capabilities for Kotak's data platform. The team will also be the center of Data Engineering excellence, driving trainings and knowledge-sharing sessions with the large data consumer base within Kotak.

Data Engineering
This team will own data pipelines for thousands of datasets, source data from 100+ source systems, and enable data consumption for 30+ data analytics products. The team will build data models in a config-based, programmatic way and think big to build one of the most leveraged data models among financial orgs. This team will also enable centralized reporting for Kotak Bank that cuts across multiple products and dimensions. Additionally, the data built by this team will be consumed by 20K+ branch consumers, RMs, and Branch Managers, and by all analytics use cases.

Data Governance
This team will be the central data governance team for Kotak Bank, building and managing metadata platforms and the Data Privacy, Data Security, Data Stewardship, and Data Quality platforms. If you have the right data skills and are ready to build high-concurrency systems involving multiple systems from scratch, then this is the team for you.

Responsibilities:
Develop and maintain scalable data pipelines and databases.
Assist in collecting, cleaning, and transforming data from various sources.
Work with AWS services like EC2, EKS, EMR, S3, Glue, Redshift, and MWAA.
Explore and learn the latest AWS technologies to enhance capabilities and efficiency.
Work with data scientists and analysts to provide the data needed for analysis.
Ensure data quality and integrity throughout the data lifecycle.
Participate in code reviews and contribute to a collaborative, high-performing team environment.
Improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers.
Create state-of-the-art software solutions that are durable and reusable across multiple teams.

Qualifications:
Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field.
Proficiency in SQL and experience with at least one programming language (Python, Scala, Java, etc.).
Familiarity with data warehousing concepts and ETL processes.
Understanding of big data technologies like Hadoop, Spark, etc., is a plus.
Strong analytical and problem-solving skills.
Excellent communication and teamwork abilities.

Good to Have:
Experience with cloud platforms (AWS, Azure, Google Cloud).
Familiarity with data visualization tools (Tableau, Power BI).

Personal Attributes:
Strong written and verbal communication skills.
Self-starter who requires minimal oversight.
Ability to prioritize and manage multiple tasks.

Posted 1 day ago

Apply

1.0 - 4.0 years

11 - 15 Lacs

Hyderabad

Work from Office

Naukri logo

Job Title: Entry Level – Data Engineer/SDE

About Kotak Mahindra Group
Established in 1985, the Kotak Mahindra Group is one of India's leading financial services conglomerates. In February 2003, Kotak Mahindra Finance Ltd (KMFL), the group's flagship company, received a banking license from the Reserve Bank of India (RBI). With this, KMFL became the first non-banking finance company in India to become a bank – Kotak Mahindra Bank Limited. The consolidated balance sheet of Kotak Mahindra Group is over ₹1 lakh crore, and the consolidated net worth of the Group stands at ₹13,943 crore (approx. US$2.6 billion) as on September 30, 2012. The Group offers a wide range of financial services that encompass every sphere of life: from commercial banking to stock broking, mutual funds, life insurance, and investment banking, the Group caters to the diverse financial needs of individuals and the corporate sector. The Group has a wide distribution network through branches and franchisees across India, and international offices in London, New York, California, Dubai, Abu Dhabi, Bahrain, Mauritius, and Singapore. For more information, please visit the company's website at http://www.kotak.com

What we offer
Our mission is simple – building trust. Our customers' trust in us is not merely about the safety of their assets but also about how dependable our digital offerings are. That's why we at Kotak Group are dedicated to transforming banking with a technology-first approach in everything we do, with the aim of enhancing customer experience by providing superior banking services. We welcome and invite the best technological minds in the country to join us in our mission to make banking seamless and swift. Here, we promise you meaningful work that positively impacts the lives of many.

About our team
DEX (Kotak's Data Exchange) is the central data org for Kotak Bank and manages the bank's entire data experience. The org comprises the Data Platform, Data Engineering, and Data Governance charters, and works closely with the Analytics org. DEX is primarily working on a greenfield project to revamp the entire data platform from on-premise solutions to a scalable AWS cloud-based platform. The team is being built from the ground up, which provides a great opportunity to build things from scratch and deliver a best-in-class data lakehouse solution. The primary skills for this team are software development (preferably Python) for platform building on AWS; data engineering with Spark (PySpark, Spark, Scala) for ETL development; and advanced SQL and data modelling for analytics. The org is expected to grow to a 100+ member team, primarily based out of Bangalore, comprising ~10 sub-teams independently driving their charters. As a member of this team, you get the opportunity to learn the fintech space, one of the most sought-after domains today, be an early member of Kotak's digital transformation journey, and leverage technology to build complex data platform solutions programmatically, including real-time, micro-batch, batch, and analytics solutions, while looking ahead to systems that can be operated by machines using AI technologies.

The data platform org is divided into 3 key verticals:

Data Platform
This vertical is responsible for building the data platform, which includes optimized storage for the entire bank and a centralized data lake, managed compute and orchestration frameworks (including serverless data solutions), a central data warehouse for extremely high-concurrency use cases, connectors for different sources, a customer feature repository, cost optimization solutions such as EMR optimizers, automation, and observability capabilities for Kotak's data platform. The team will also be the center of Data Engineering excellence, driving trainings and knowledge-sharing sessions with the large data consumer base within Kotak.

Data Engineering
This team will own data pipelines for thousands of datasets, source data from 100+ source systems, and enable data consumption for 30+ data analytics products. The team will build data models in a config-based, programmatic way and think big to build one of the most leveraged data models among financial orgs. This team will also enable centralized reporting for Kotak Bank that cuts across multiple products and dimensions. Additionally, the data built by this team will be consumed by 20K+ branch consumers, RMs, and Branch Managers, and by all analytics use cases.

Data Governance
This team will be the central data governance team for Kotak Bank, building and managing metadata platforms and the Data Privacy, Data Security, Data Stewardship, and Data Quality platforms. If you have the right data skills and are ready to build high-concurrency systems involving multiple systems from scratch, then this is the team for you.

Responsibilities:
Develop and maintain scalable data pipelines and databases.
Assist in collecting, cleaning, and transforming data from various sources.
Work with AWS services like EC2, EKS, EMR, S3, Glue, Redshift, and MWAA.
Explore and learn the latest AWS technologies to enhance capabilities and efficiency.
Work with data scientists and analysts to provide the data needed for analysis.
Ensure data quality and integrity throughout the data lifecycle.
Participate in code reviews and contribute to a collaborative, high-performing team environment.
Improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers.
Create state-of-the-art software solutions that are durable and reusable across multiple teams.

Qualifications:
Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field.
Proficiency in SQL and experience with at least one programming language (Python, Scala, Java, etc.).
Familiarity with data warehousing concepts and ETL processes.
Understanding of big data technologies like Hadoop, Spark, etc., is a plus.
Strong analytical and problem-solving skills.
Excellent communication and teamwork abilities.

Good to Have:
Experience with cloud platforms (AWS, Azure, Google Cloud).
Familiarity with data visualization tools (Tableau, Power BI).

Personal Attributes:
Strong written and verbal communication skills.
Self-starter who requires minimal oversight.
Ability to prioritize and manage multiple tasks.

Posted 1 day ago

Apply

15.0 - 20.0 years

5 - 9 Lacs

Pune

Work from Office

Naukri logo

Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must have skills: AWS Administration
Good to have skills: NA
Minimum 7.5 year(s) of experience is required.
Educational Qualification: 15 years full time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with various teams to understand their needs, developing innovative solutions, and ensuring that applications are aligned with business objectives. You will engage in problem-solving activities, manage project timelines, and contribute to the overall success of application development initiatives.

Roles & Responsibilities:
- Expected to be an SME; collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor project progress and ensure alignment with business goals.

Professional & Technical Skills:
- Must-Have Skills: Proficiency in AWS Administration.
- Strong understanding of cloud computing concepts and services.
- Experience with infrastructure-as-code tools such as Terraform or CloudFormation.
- Familiarity with containerization technologies like Docker and Kubernetes.
- Knowledge of security best practices in cloud environments.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in AWS Administration.
- This position is based at our Pune office.
- A 15 years full time education is required.

Posted 1 day ago

Apply

Exploring Amazon EC2 Jobs in India

Amazon EC2 (Elastic Compute Cloud) is a popular cloud computing service offered by Amazon Web Services (AWS). With the increasing adoption of cloud technology in India, the demand for professionals skilled in managing and optimizing EC2 instances is on the rise. Job seekers looking to enter this field have a plethora of opportunities in various industries across the country.

Top Hiring Locations in India

Here are 5 major cities in India actively hiring for Amazon EC2 roles:

  • Bangalore
  • Mumbai
  • Delhi
  • Hyderabad
  • Pune

Average Salary Range

The average salary range for Amazon EC2 professionals in India varies based on experience levels. Entry-level positions typically start at INR 4-6 lakhs per annum, while experienced professionals can earn upwards of INR 15 lakhs per annum.

Career Path

A typical career progression in the Amazon EC2 domain may include roles such as:

  • Junior Developer
  • AWS Cloud Engineer
  • Senior Cloud Architect
  • AWS Solutions Architect
  • Tech Lead

Related Skills

In addition to expertise in Amazon EC2, professionals in this field are often expected to have knowledge of the following skills:

  • AWS Services (e.g., S3, RDS, IAM)
  • Linux/Unix Systems Administration
  • Networking concepts
  • Scripting languages (e.g., Python, Shell; see the sketch below)
  • Security best practices in cloud computing
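As a concrete example of the scripting skill above, here is a minimal sketch in Python using boto3, the AWS SDK for Python; the region and the state filter are illustrative choices, not requirements from any of the listings.

```python
# List running EC2 instances in one region with boto3 (pip install boto3).
# Assumes AWS credentials are already configured (e.g., via `aws configure`).
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")  # Mumbai region, for illustration

# Restrict the listing to instances currently in the "running" state.
response = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)

for reservation in response["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"],
              instance["InstanceType"],
              instance.get("PublicIpAddress", "no public IP"))
```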

Interview Questions

Here are 14 interview questions for Amazon EC2 roles, tagged by difficulty:

  • What is Amazon EC2 and how does it differ from traditional servers? (basic)
  • How can you launch an EC2 instance? (basic; see the first sketch after this list)
  • What is an AMI in EC2? (basic)
  • Explain the difference between on-demand and reserved instances. (medium)
  • How do you secure an EC2 instance? (medium)
  • What is an EIP and how is it used in EC2? (medium)
  • Describe the difference between instance store and EBS volumes. (medium)
  • How do you monitor EC2 instances? (medium)
  • How can you automate EC2 instance provisioning? (medium)
  • Explain the concept of instance metadata in EC2. (advanced; see the second sketch after this list)
  • How does EC2 Auto Scaling work? (advanced)
  • What is the significance of placement groups in EC2? (advanced)
  • How can you troubleshoot performance issues in EC2 instances? (advanced)
  • Explain the process of migrating an EC2 instance to a different availability zone. (advanced)
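For the launch question flagged above, a minimal sketch using boto3 is shown below; the AMI ID, key pair name, and region are placeholders for illustration, not values from any of the listings. The same RunInstances API call underlies the AWS CLI's `aws ec2 run-instances` command and the console's launch wizard.

```python
# Launch a single EC2 instance with boto3 (pip install boto3).
# Assumes AWS credentials are configured (e.g., via `aws configure`).
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")  # region chosen for illustration

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t3.micro",
    KeyName="my-key-pair",            # placeholder key pair name
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "demo-instance"}],
    }],
)

print("Launched:", response["Instances"][0]["InstanceId"])
```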
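For the instance metadata question, the metadata service is reachable only from inside the instance, at the link-local address 169.254.169.254. Below is a sketch of the token-based IMDSv2 flow, assuming the third-party `requests` library is available:

```python
# Query EC2 instance metadata via IMDSv2 (only works from inside an instance).
# IMDSv2 requires obtaining a session token first, then presenting it on reads.
import requests

BASE = "http://169.254.169.254"

# Step 1: request a short-lived session token (here, a 6-hour TTL).
token = requests.put(
    f"{BASE}/latest/api/token",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    timeout=2,
).text

# Step 2: read a metadata path, e.g., this instance's ID, with the token attached.
instance_id = requests.get(
    f"{BASE}/latest/meta-data/instance-id",
    headers={"X-aws-ec2-metadata-token": token},
    timeout=2,
).text
print("Instance ID:", instance_id)
```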

Closing Remark

As the demand for Amazon EC2 professionals continues to grow in India, job seekers should focus on honing their skills and preparing effectively for interviews. By showcasing their expertise in managing and optimizing EC2 instances, candidates can secure rewarding opportunities in the dynamic field of cloud computing. Prepare diligently, stay updated on industry trends, and apply confidently to land your dream job in Amazon EC2.

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies