
272 S3 Jobs - Page 4

JobPe aggregates these listings for easy access, but applications are submitted directly on the original job portal.

8.0 - 13.0 years

10 - 15 Lacs

Bengaluru

Work from Office

Source: Naukri

Qualification & Experience: Minimum of 8 years of experience as a Data Scientist/Engineer with demonstrated expertise in data engineering and cloud computing technologies.

Technical Responsibilities:
- Excellent proficiency in Python, with a strong focus on developing advanced skills.
- Extensive exposure to NLP and image processing concepts.
- Proficient in version control systems like Git.
- In-depth understanding of Azure deployments.
- Expertise in OCR, ML model training, and transfer learning.
- Experience working with unstructured data formats such as PDFs, DOCX, and images.
- Strong familiarity with data science best practices and the ML lifecycle.
- Strong experience with data pipeline development, ETL processes, and data engineering tools such as Apache Airflow, PySpark, or Databricks.
- Familiarity with cloud computing platforms like Azure, AWS, or GCP, including services like Azure Data Factory, S3, Lambda, and BigQuery.

Tool Exposure: Advanced understanding and hands-on experience with Git, Azure, Python, R programming, and data engineering tools such as Snowflake, Databricks, or PySpark.
- Data mining, cleaning, and engineering: Leading the identification and merging of relevant data sources, ensuring data quality, and resolving data inconsistencies.
- Cloud Solutions Architecture: Designing and deploying scalable data engineering workflows on cloud platforms such as Azure, AWS, or GCP.
- Data Analysis: Executing complex analyses against business requirements using appropriate tools and technologies.
- Software Development: Leading the development of reusable, version-controlled code under minimal supervision.
- Big Data Processing: Developing solutions to handle large-scale data processing using tools like Hadoop, Spark, or Databricks.

Principal Duties & Key Responsibilities:
- Leading data extraction from multiple sources, including PDFs, images, databases, and APIs.
- Driving optical character recognition (OCR) processes to digitize data from images (see the sketch below).
- Applying advanced natural language processing (NLP) techniques to understand complex data.
- Developing and implementing highly accurate statistical models and data engineering pipelines to support critical business decisions, and continuously monitoring their performance.
- Designing and managing scalable cloud-based data architectures using Azure, AWS, or GCP services.
- Collaborating closely with business domain experts to identify and drive key business value drivers.
- Documenting model design choices, algorithm selection processes, and dependencies.
- Collaborating effectively in cross-functional teams within the CoE and across the organization.
- Proactively seeking opportunities to contribute beyond assigned tasks.

Required Competencies:
- Exceptional communication and interpersonal skills.
- Proficiency in Microsoft Office 365 applications.
- Ability to work independently, demonstrate initiative, and provide strategic guidance.
- Strong networking, communication, and people skills.
- Outstanding organizational skills with the ability to work independently and as part of a team.
- Excellent technical writing skills.
- Effective problem-solving abilities.
- Flexibility and adaptability to work flexible hours as required.

Key Competencies / Values:
- Client Focus: Tailoring skills and understanding client needs to deliver exceptional results.
- Excellence: Striving for excellence defined by clients, delivering high-quality work.
- Trust: Building and retaining trust with clients, colleagues, and partners.
- Teamwork: Collaborating effectively to achieve collective success.
- Responsibility: Taking ownership of performance and safety, ensuring accountability.
- People: Creating an inclusive environment that fosters individual growth and development.
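To illustrate the OCR-and-NLP workflow this role centers on, here is a minimal sketch using the open-source pytesseract and spaCy libraries; the tool choices and file name are assumptions for illustration, not requirements named in the listing.

# Minimal OCR + NLP sketch; pytesseract/spaCy are assumed tool choices
from PIL import Image
import pytesseract   # wrapper around the Tesseract OCR engine
import spacy         # NLP pipeline (requires: python -m spacy download en_core_web_sm)

nlp = spacy.load("en_core_web_sm")

def extract_entities(image_path: str):
    # Digitize the scanned page, then pull named entities from the raw text
    text = pytesseract.image_to_string(Image.open(image_path))
    doc = nlp(text)
    return [(ent.text, ent.label_) for ent in doc.ents]

print(extract_entities("scanned_invoice.png"))  # hypothetical input file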

Posted 2 weeks ago

Apply

5.0 - 10.0 years

22 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Source: Naukri

We are looking for "AWS Engineer" with Minimum 5 years experience Contact- Yashra (95001 81847) Required Candidate profile Hands-on working knowledge AWS services, such as IAM, Cloud formation, EC2, S3, Lambda, VPCs, Load balancing, networking, Monitoring Health

Posted 2 weeks ago

Apply

5.0 - 10.0 years

20 - 25 Lacs

Bengaluru, Mumbai (All Areas)

Work from Office

Source: Naukri

Designation: Java Developer. Experience: 5+ years. Location: Bangalore/Mumbai. Notice period: immediate joiners. JD: Proven experience in Java development, with a strong understanding of object-oriented programming principles. Experience with AWS services, including ECS, S3, RDS, ElastiCache, and CloudFormation. Experience with microservices architecture and RESTful API design. Strong problem-solving skills and attention to detail. Experience in the financial services industry, particularly in trading or risk management, is a plus. Excellent communication and collaboration skills.

Posted 2 weeks ago

Apply

2.0 - 5.0 years

4 - 7 Lacs

Ahmedabad

Work from Office

Source: Naukri

Responsibilities:
* Lead Infor FSM implementations from planning to go-live.
* Configure S3, Infor ION, IBM Data Studio, IPD, ISD & IPA modules.
* Collaborate with clients to gather project requirements.
Perks: Annual bonus.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

20 - 25 Lacs

Bengaluru, Mumbai (All Areas)

Work from Office

Source: Naukri

Designation: Python + AWS. Experience: 5+ years. Work Location: Bangalore/Mumbai. Notice Period: immediate joiners or serving notice period.

Job Description (mandatory skills):
- Python data structures: pandas, numpy
- Data operations: DataFrames, dicts, JSON, lists, tuples, strings (see the sketch below)
- OOP & APIs (Flask/FastAPI)
- AWS services (IAM, EC2, Lambda, S3, DynamoDB, etc.)

Sincerely, Sonia TS
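As a small illustration of the listed skill combination (boto3 for S3 plus pandas data operations), here is a hedged sketch; the bucket and key names are hypothetical.

import json
import boto3
import pandas as pd

# Pull a JSON object from S3 and load it into a DataFrame
s3 = boto3.client("s3")
obj = s3.get_object(Bucket="example-bucket", Key="data/records.json")
records = json.loads(obj["Body"].read())   # expected: a list of dicts
df = pd.DataFrame(records)                 # JSON/dicts -> DataFrame
print(df.head())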

Posted 2 weeks ago

Apply

8.0 - 13.0 years

25 - 40 Lacs

Chennai

Work from Office

Source: Naukri

Job Description: As a SecOps Engineer, you will be responsible for ensuring the security and compliance of our systems and infrastructure. You will work closely with our development, architecture, and DevOps teams to identify and remediate vulnerabilities, implement security best practices, automate security processes, and ensure compliance with corporate and industry standards. You will also conduct security assessments of our systems and infrastructure to identify vulnerabilities and risks, identify risk owners, and implement mitigating controls.

Required Skills/Experience:
- Must have 5 or more years of relevant job experience on AWS
- Must be proficient in at least one scripting language such as Bash, Python, etc.
- Experience with remediating security in IAM, S3, AWS Security Groups, NACLs, IGWs, VPC Network Firewall, VPCs, endpoints, and other AWS resources
- Expertise in writing JSON IAM and S3 policies; deep understanding of the AWS policy language
- Experience in AWS account security auditing
- Experience in scripting using the AWS APIs (boto3 or the AWS CLI) (see the sketch below)
- Good understanding of TCP/IP networking principles and protocols
- Experience maintaining secure and reliable cloud infrastructure (OS patching, backup, monitoring, secure logging, and user account creation)
- Support or modify underlying AWS infrastructure and services for security hardening
- Familiarity with cloud deployment automation, as well as building CI/CD pipelines to support cloud-based workloads
- Stay on top of the latest AWS security trends and develop expertise in emerging cloud security technologies
- Develop and maintain technical documentation in Atlassian Confluence
- Proficiency with Atlassian Jira ticketing and project management
- Experience troubleshooting technical security issues
- Must be proficient in both verbal and written communication in English
- Must be available to work mostly Pacific daytime hours (until 2 pm Pacific time)

Desired Skills/Experience:
- AWS Security Specialty certification with a minimum of 3 years of practical experience securing AWS environments
- Master's/Bachelor's degree in Computer Science, Computer Engineering, Electrical Engineering, or a related technical field, and two years of experience in related software or systems
- 8+ years of overall industry experience
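To illustrate the kind of boto3-based account auditing this listing describes, here is a minimal sketch that flags S3 buckets lacking a complete public access block; it is an assumed example, not the employer's actual tooling.

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        if not all(cfg.values()):
            print(f"{name}: public access block incomplete")
    except ClientError:
        # No bucket-level configuration at all; relies on account-level settings
        print(f"{name}: no bucket-level public access block configured")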

Posted 2 weeks ago

Apply

5.0 - 8.0 years

12 - 22 Lacs

Bengaluru

Work from Office

Source: Naukri

Role & responsibilities:
- Manage and monitor AWS cloud infrastructure, including EC2, S3, VPC, RDS, Lambda, and more.
- Implement and maintain Ubuntu Linux servers and applications.
- Monitor system performance, conduct backups, and address potential issues.
- Set up and maintain MySQL databases, optimizing performance and ensuring data integrity.
- Collaborate with development teams to design, develop, and deploy secure cloud-based applications.
- Implement and maintain cloud security best practices.
- Provide technical support and guidance on cloud infrastructure and related technologies.
- Stay updated on industry trends and best practices.

Preferred candidate profile:
- Bachelor's degree in Computer Science, IT, or a related field.
- 5-8 years of overall experience, with a minimum of 3 years in AWS cloud services.
- Strong Ubuntu Linux administration skills.
- Familiarity with AWS services and cloud security best practices.
- Strong problem-solving skills and the ability to work independently and in a team.
- Excellent communication skills.
- Basic understanding of MySQL database administration is a plus.
- Relevant AWS certifications are a plus.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

7 - 14 Lacs

Chennai, Bengaluru

Work from Office

Source: Naukri

Job Summary: Synechron is seeking a skilled Full Stack Developer to join our innovative technology team. This position focuses on designing, developing, and maintaining high-performance, scalable web applications using Next.js and related modern technologies. As a key contributor, you will collaborate with cross-disciplinary teams to deliver responsive and user-centric solutions that support the organization's digital growth and strategic objectives. Your expertise will help ensure the delivery of seamless, secure, and efficient web experiences for our clients and stakeholders.

Software Requirements. Required skills and experience:
- Proficiency in Next.js, React, and modern JavaScript/TypeScript frameworks
- Strong experience with .NET Core, C#, and building scalable web APIs
- Hands-on experience designing and consuming GraphQL APIs
- Practical knowledge of AWS services such as EC2, S3, Lambda, and RDS
- Familiarity with version control systems, particularly Git
- Experience with CI/CD pipelines and automation tools like Jenkins or TeamCity
- Working knowledge of Agile frameworks and tools such as Jira and Confluence

Preferred skills:
- Containerization skills with Docker and Kubernetes
- Knowledge of testing frameworks for unit and integration testing
- Understanding of security best practices and data protection regulations

Overall Responsibilities:
- Develop, enhance, and maintain web applications leveraging Next.js for front-end and .NET for back-end components
- Build, optimize, and consume RESTful and GraphQL APIs to enable efficient data exchange
- Deploy, monitor, and scale cloud-based applications using AWS services, ensuring high availability and performance standards
- Collaborate actively with UX/UI designers, product managers, and fellow developers to deliver high-quality solutions
- Participate in code reviews, pair programming, and the adoption of best coding practices
- Continuously evaluate emerging technologies and recommend improvements for application architecture and performance
- Contribute to project planning, documentation, and technical decision-making for application features and integrations

Technical Skills (by category):
- Programming languages. Required: JavaScript (including TypeScript), C#. Preferred: additional JavaScript frameworks/libraries, such as Redux or MobX.
- Databases/data management. Required: experience with relational databases such as MSSQL or Oracle, and NoSQL solutions like MongoDB.
- Cloud technologies. Required: AWS (EC2, S3, Lambda, RDS). Preferred: Azure cloud platform expertise.
- Frameworks and libraries. Required: Next.js, React. Preferred: state management libraries, testing frameworks like Jest or Mocha.
- Development tools and methodologies. Required: Git, CI/CD tools (Jenkins, TeamCity), version control practices. Preferred: containerization with Docker, orchestration with Kubernetes.
- Other: familiarity with Agile/Scrum processes using Jira and Confluence.
- Security & compliance: understanding of secure coding practices, data privacy, and compliance regulations relevant to web development.

Experience Requirements:
- 5 to 12 years of experience in full-stack web development, with demonstrable expertise in Next.js and .NET technologies
- Proven track record in developing scalable, production-grade web applications
- Experience working within Agile environments, participating in sprint planning and continuous delivery
- Industry experience in fintech, e-commerce, or enterprise solutions is a plus but not mandatory
- Prior leadership or mentoring experience is advantageous

Day-to-Day Activities:
- Architect, develop, and maintain feature-rich, responsive web applications
- Collaborate with cross-functional teams on feature design, implementation, and testing
- Develop and optimize APIs and facilitate data integration across systems
- Conduct code reviews, unit testing, and performance tuning to ensure code quality
- Manage deployment processes and monitor application health in cloud environments
- Engage in regular stand-ups, planning sessions, and technical discussions
- Identify, troubleshoot, and resolve software defects and performance issues promptly

Qualifications:
- Bachelor's or Master's degree in Computer Science, Software Engineering, Information Technology, or a related field
- Certifications in cloud technologies (e.g., AWS Certified Solutions Architect) or web development are a plus
- Evidence of continuous learning through industry certifications, courses, or self-driven projects
- Strong portfolio demonstrating previous work with Next.js, React, and cloud-based application deployment

Professional Competencies:
- Strong analytical and problem-solving skills to address complex technical challenges
- Effective communication and stakeholder management abilities
- Leadership qualities in mentoring team members and driving technical discussions
- Ability to adapt quickly to changing project requirements and technological advances
- Innovation-driven mindset to explore new tools, frameworks, and best practices
- Strong organizational skills for managing multiple tasks and meeting deadlines

Posted 2 weeks ago

Apply

7.0 - 12.0 years

9 - 14 Lacs

Pune, Hinjewadi

Work from Office

Source: Naukri

Job Summary: Synechron is seeking an experienced and technically proficient Senior PySpark Data Engineer to join our data engineering team. In this role, you will be responsible for developing, optimizing, and maintaining large-scale data processing solutions using PySpark. Your expertise will support our organization's efforts to leverage big data for actionable insights, enabling data-driven decision-making and strategic initiatives.

Software Requirements. Required skills:
- Proficiency in PySpark
- Familiarity with Hadoop ecosystem components (e.g., HDFS, Hive, Spark SQL)
- Experience with Linux/Unix operating systems
- Data processing tools like Apache Kafka or similar streaming platforms

Preferred skills:
- Experience with cloud-based big data platforms (e.g., AWS EMR, Azure HDInsight)
- Knowledge of Python (beyond PySpark), Java, or Scala relevant to big data applications
- Familiarity with data orchestration tools (e.g., Apache Airflow, Luigi)

Overall Responsibilities:
- Design, develop, and optimize scalable data processing pipelines using PySpark (see the sketch below).
- Collaborate with data engineers, data scientists, and business analysts to understand data requirements and deliver solutions.
- Implement data transformations, aggregations, and extraction processes to support analytics and reporting.
- Manage large datasets in distributed storage systems, ensuring data integrity, security, and performance.
- Troubleshoot and resolve performance issues within big data workflows.
- Document data processes, architectures, and best practices to promote consistency and knowledge sharing.
- Support data migration and integration efforts across varied platforms.

Strategic Objectives:
- Enable efficient and reliable data processing to meet organizational analytics and reporting needs.
- Maintain high standards of data security, compliance, and operational durability.
- Drive continuous improvement in data workflows and infrastructure.

Performance Outcomes & Expectations:
- Efficient processing of large-scale data workloads with minimal downtime.
- Clear, maintainable, and well-documented code.
- Active participation in team reviews, knowledge transfer, and innovation initiatives.

Technical Skills (by category):
- Programming languages. Required: PySpark (essential); Python (needed for scripting and automation). Preferred: Java, Scala.
- Databases/data management. Required: experience with distributed data storage (HDFS, S3, or similar) and data warehousing solutions (Hive, Snowflake). Preferred: experience with NoSQL databases (Cassandra, HBase).
- Cloud technologies. Required: familiarity with deploying and managing big data solutions on cloud platforms such as AWS (EMR), Azure, or GCP. Preferred: cloud certifications.
- Frameworks and libraries. Required: Spark SQL, Spark MLlib (basic familiarity). Preferred: integration with streaming platforms (e.g., Kafka), data validation tools.
- Development tools and methodologies. Required: version control systems (e.g., Git), Agile/Scrum methodologies. Preferred: CI/CD pipelines, containerization (Docker, Kubernetes).
- Security protocols. Optional: basic understanding of data security practices and compliance standards relevant to big data management.

Experience Requirements:
- Minimum of 7 years of experience in big data environments with hands-on PySpark development.
- Proven ability to design and implement large-scale data pipelines.
- Experience working with cloud and on-premises big data architectures.
- Preference for candidates with domain-specific experience in finance, banking, or related sectors.
- Candidates with substantial related experience and strong technical skills in big data, even from different domains, are encouraged to apply.

Day-to-Day Activities:
- Develop, test, and deploy PySpark data processing jobs to meet project specifications.
- Collaborate in multi-disciplinary teams during sprint planning, stand-ups, and code reviews.
- Optimize existing data pipelines for performance and scalability.
- Monitor data workflows, troubleshoot issues, and implement fixes.
- Engage with stakeholders to gather new data requirements, ensuring solutions are aligned with business needs.
- Contribute to documentation, standards, and best practices for data engineering processes.
- Support the onboarding of new data sources, including integration and validation.

Decision-Making Authority & Responsibilities:
- Identify performance bottlenecks and propose effective solutions.
- Decide on appropriate data processing approaches based on project requirements.
- Escalate issues that impact project timelines or data integrity.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field; equivalent experience considered.
- Relevant certifications are preferred: Cloudera, Databricks, AWS Certified Data Analytics, or similar.
- Commitment to ongoing professional development in data engineering and big data technologies.
- Demonstrated ability to adapt to evolving data tools and frameworks.

Professional Competencies:
- Strong analytical and problem-solving skills, with the ability to model complex data workflows.
- Excellent communication skills to articulate technical solutions to non-technical stakeholders.
- Effective teamwork and collaboration in a multidisciplinary environment.
- Adaptability to new technologies and emerging trends in big data.
- Ability to prioritize tasks effectively and manage time in fast-paced projects.
- Innovation mindset, actively seeking ways to improve data infrastructure and processes.
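A minimal PySpark pipeline sketch of the extract-transform-load pattern this role describes; the paths, columns, and finance-flavored schema are assumptions for illustration.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-trade-aggregation").getOrCreate()

# Extract: read raw events from distributed storage (path is hypothetical)
trades = spark.read.parquet("s3a://example-bucket/raw/trades/")

# Transform: filter, derive a date column, and aggregate
daily = (
    trades
    .filter(F.col("status") == "SETTLED")
    .withColumn("trade_date", F.to_date("executed_at"))
    .groupBy("trade_date", "symbol")
    .agg(F.sum("quantity").alias("total_qty"),
         F.avg("price").alias("avg_price"))
)

# Load: write partitioned output for downstream analytics
daily.write.mode("overwrite").partitionBy("trade_date").parquet(
    "s3a://example-bucket/curated/daily_trades/")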

Posted 2 weeks ago

Apply

3.0 - 6.0 years

12 - 15 Lacs

Gurugram

Work from Office

Source: Naukri

Responsibilities: Key skills: AWS core services (EC2, S3, IAM, CloudWatch, VPC), CloudFormation/launch templates, CI/CD pipelines (GitHub Actions, AWS CodePipeline), AWS Config, Security Hub, S3 lifecycle rules, and Bash or Python. A brief lifecycle-rule sketch follows.
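Since the listing calls out S3 lifecycle rules alongside Bash or Python, here is a minimal boto3 sketch that applies one; the bucket name, prefix, and retention windows are assumptions.

import boto3

s3 = boto3.client("s3")
# Archive logs to Glacier after 30 days and expire them after a year
s3.put_bucket_lifecycle_configuration(
    Bucket="example-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-then-expire-logs",
            "Filter": {"Prefix": "logs/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 365},
        }]
    },
)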

Posted 2 weeks ago

Apply

3.0 - 5.0 years

5 - 8 Lacs

Noida

Remote

Source: Naukri

Location: Remote (occasional office visits per project requirements). Notice period: immediate joiner. Timing: Canadian EST (night shift for India); for strong candidates, a 4 PM-12 AM shift also works.

About the Role: Alyssum Global is hiring a skilled Full Stack Developer with 3+ years of hands-on experience in JavaScript, TypeScript, and modern frameworks such as Angular, React, Next.js, Nest.js, or similar. The ideal candidate should have a strong understanding of Agile methodology, test-driven development, and deployment on AWS.

Key Responsibilities:
- Develop, maintain, and enhance web applications using JavaScript, TypeScript, Angular, React, Next.js, or Nest.js.
- Design and implement scalable and efficient front-end and back-end architectures.
- Write clean, modular, and testable code while adhering to coding standards and best practices.
- Conduct thorough testing of the codebase to ensure reliability and performance.
- Collaborate with cross-functional teams to deliver high-quality software solutions in an Agile environment.
- Deploy, monitor, and manage applications on AWS servers.
- Debug, troubleshoot, and resolve technical issues in a timely manner.
- Stay up to date with emerging trends, technologies, and frameworks.

Must-Have Qualifications:
- 3-5 years of experience in software development.
- Hands-on experience with modern frameworks like Angular, React, Next.js, Nest.js, or similar.
- Strong understanding of Agile methodology and experience working in Agile teams.
- Familiarity with test-driven development and testing frameworks like Cypress, Playwright, or similar.
- Experience deploying applications on AWS (EC2, S3, Lambda, etc.).
- Good understanding of CI/CD pipelines.
- Knowledge of RESTful APIs and WebSocket integrations.

Posted 2 weeks ago

Apply

4.0 - 9.0 years

9 - 12 Lacs

Gurugram

Work from Office

Source: Naukri

Job Title: DevOps Engineer (Linux/AWS/Ansible). Experience: 4 years. Location: Gurgaon (onsite). Notice period: immediate joiner.

Job Summary: We are looking for a skilled DevOps Engineer with a strong background in Linux system administration and hands-on experience with AWS and Ansible. Familiarity with GCP is a plus. The ideal candidate should be passionate about automation, infrastructure management, and system optimization. This is a full-time onsite role based in Gurgaon for candidates who can join immediately.

Key Responsibilities:
- Manage, configure, and maintain Linux-based systems and servers.
- Automate infrastructure using Ansible for deployment, configuration, and scaling.
- Deploy and manage AWS cloud infrastructure (EC2, S3, IAM, VPC, etc.).
- Perform system monitoring, log analysis, and performance tuning.
- Implement CI/CD pipelines and ensure secure, scalable, and reliable infrastructure.
- Troubleshoot and resolve system, network, and application issues.
- Collaborate with developers and IT teams to support application deployment.
- Exposure to GCP for minor cloud-related tasks or integration is desirable.

Required Skills:
- Minimum 4 years of experience in a DevOps/system admin role.
- Strong hands-on experience in Linux system administration.
- Proficiency in AWS cloud services.
- Practical experience with Ansible for automation and configuration management.
- Basic understanding of or exposure to GCP is a plus.
- Knowledge of scripting languages like Bash or Python is preferred.
- Good communication and problem-solving skills.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

20 - 35 Lacs

Hyderabad

Work from Office

Source: Naukri

Key Responsibilities:
- Design, implement, and maintain AWS infrastructure, ensuring scalability and high availability using infrastructure as code (IaC).
- Manage and optimize Windows Server environments, focusing on security and reliability.
- Collaborate with development and operations teams to automate and streamline processes.
- Monitor system performance and resolve issues to prevent outages.
- Participate in an on-call rotation to address urgent issues and maintain system integrity.
- Develop and maintain documentation for system configuration and procedures.
- Develop and implement automation scripts and tools to streamline deployment activities.

Required Qualifications:
- Minimum of five years of experience in Cloud/SRE/DevOps or a related field.
- Proven experience with AWS services including EC2, VPC, S3, RDS, and others.
- Strong proficiency in managing Windows Server and Linux environments.
- Experience with AWS IAM and security protocols.
- Familiarity with tools like Terraform, PowerShell, and Docker for automation.
- Proficiency in writing comprehensive technical documentation.

Nice to Have:
- Expertise with Microsoft Entra ID (Azure AD) and AWS IAM.
- Knowledge of Windows Server Remote Desktop Services on AWS.
- Experience using SAML for authentication in Windows domains.
- Familiarity with RDS databases (Oracle and MS SQL), especially conversion to AWS RDS.
- Experience in Identity and Access Management (IAM) across organizations and applications.

Posted 2 weeks ago

Apply

4.0 - 9.0 years

0 - 1 Lacs

Hyderabad, Bengaluru

Hybrid

Source: Naukri

AWS Developers with Java (preferred) or Python, plus AWS CDK (preferred). We don't need AWS infrastructure-only profiles. If the AWS CDK candidate pool is unavailable, we can look for strong S3 or Lambda experience. Minimum 1-2 years in AWS and 4+ years of overall experience.
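A minimal AWS CDK (Python) sketch of the kind of work implied here: defining an S3 bucket and a Lambda function and wiring permissions. All names, the runtime version, and the asset path are assumptions.

from aws_cdk import App, Stack, aws_lambda as _lambda, aws_s3 as s3
from constructs import Construct

class DataStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        bucket = s3.Bucket(self, "IngestBucket", versioned=True)
        handler = _lambda.Function(
            self, "ProcessorFn",
            runtime=_lambda.Runtime.PYTHON_3_11,
            handler="index.handler",
            code=_lambda.Code.from_asset("lambda"),  # hypothetical local dir
        )
        bucket.grant_read(handler)  # least-privilege read access

app = App()
DataStack(app, "DataStack")
app.synth()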

Posted 2 weeks ago

Apply

4.0 - 8.0 years

15 - 27 Lacs

Bengaluru

Hybrid

Source: Naukri

Machine Learning & Data Pipelines:
- Strong understanding of machine learning principles, lifecycle, and deployment practices
- Experience in designing and building ML pipelines
- Knowledge of deploying ML models on AWS Lambda, EKS, or other relevant services
- Working knowledge of Apache Airflow for orchestration of data workflows (see the sketch below)
- Proficiency in Python for scripting, automation, and ML model development with data scientists
- Basic understanding of SQL for querying and data analysis

Cloud and DevOps Experience:
- Hands-on experience with AWS services, including but not limited to AWS Glue, Lambda, S3, SQS, and SNS
- Proficient in checking and interpreting CloudWatch logs and setting up alarms
- Infrastructure as Code (IaC) experience using Terraform
- Experience with CI/CD pipelines, particularly using GitLab for code and infrastructure deployments
- Understanding of cloud cost optimization and budgeting, with the ability to assess cost implications of various AWS services
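As referenced above, a minimal Apache Airflow DAG sketch for orchestrating an ML data workflow; the task bodies and schedule are placeholders, and the schedule keyword assumes Airflow 2.4+.

from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw features from S3")   # placeholder task body

def train():
    print("fit and register the model")  # placeholder task body

with DAG(
    dag_id="ml_training_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+ keyword
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    train_task = PythonOperator(task_id="train", python_callable=train)
    extract_task >> train_task  # extraction runs before training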

Posted 2 weeks ago

Apply

8.0 - 12.0 years

25 - 40 Lacs

Chennai

Work from Office

Source: Naukri

We are seeking a highly skilled Data Architect to design and implement robust, scalable, and secure data solutions on AWS Cloud. The ideal candidate should have expertise in AWS services, data modeling, ETL processes, and big data technologies, with hands-on experience in Glue, DMS, Python, PySpark, and MPP databases like Snowflake, Redshift, or Databricks.

Key Responsibilities:
- Architect and implement data solutions leveraging AWS services such as EC2, S3, IAM, Glue (mandatory), and DMS for efficient data processing and storage.
- Develop scalable ETL pipelines using AWS Glue, Lambda, and PySpark to support data transformation, ingestion, and migration (see the sketch below).
- Design and optimize data models following Medallion architecture, Data Mesh, and Enterprise Data Warehouse (EDW) principles.
- Implement data governance, security, and compliance best practices using IAM policies, encryption, and data masking.
- Work with MPP databases such as Snowflake, Redshift, or Databricks, ensuring performance tuning, indexing, and query optimization.
- Collaborate with cross-functional teams, including data engineers, analysts, and business stakeholders, to design efficient data integration strategies.
- Ensure high availability and reliability of data solutions by implementing monitoring, logging, and automation in AWS.
- Evaluate and recommend best practices for ETL workflows, data pipelines, and cloud-based data warehousing solutions.
- Troubleshoot performance bottlenecks and optimize query execution plans, indexing strategies, and data partitioning.

Job Requirement. Required qualifications and skills:
- Strong expertise in AWS cloud services: compute (EC2), storage (S3), and security (IAM).
- Proficiency in programming languages: Python, PySpark, and AWS Lambda.
- Mandatory experience in ETL tools: AWS Glue and DMS for data migration and transformation.
- Expertise in MPP databases: Snowflake, Redshift, or Databricks; knowledge of RDBMS (Oracle, SQL Server) is a plus.
- Deep understanding of data modeling techniques: Medallion architecture, Data Mesh, EDW principles.
- Experience in designing and implementing large-scale, high-performance data solutions.
- Strong analytical and problem-solving skills, with the ability to optimize data pipelines and storage solutions.
- Excellent communication and collaboration skills, with experience working in agile environments.

Preferred Qualifications:
- AWS certification (AWS Certified Data Analytics, AWS Certified Solutions Architect, or equivalent).
- Experience with real-time data streaming (Kafka, Kinesis, or similar).
- Familiarity with Infrastructure as Code (Terraform, CloudFormation).
- Understanding of data governance frameworks and compliance standards (GDPR, HIPAA, etc.).
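As referenced in the responsibilities, a minimal AWS Glue (PySpark) job sketch that reads from the Glue Data Catalog and writes curated Parquet to S3; the database, table, column mappings, and output path are assumptions.

import sys
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a cataloged source table (names are hypothetical)
src = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders")

# Cast columns on the way through
mapped = ApplyMapping.apply(
    frame=src,
    mappings=[("order_id", "string", "order_id", "string"),
              ("amount", "string", "amount", "double")])

# Write curated Parquet back to S3
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/orders/"},
    format="parquet")
job.commit()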

Posted 2 weeks ago

Apply

10.0 - 12.0 years

30 - 37 Lacs

Bengaluru

Work from Office

Source: Naukri

We need immediate joiners or candidates serving their notice period who can join within 10-15 days; no other candidates (i.e., those on bench or with an official 2-3 month notice period).

- Strong working experience in design and development of RESTful APIs using Java, Spring Boot, and Spring Cloud.
- Technical hands-on experience to support development, automated testing, infrastructure, and operations.
- Fluency with relational databases or, alternatively, NoSQL databases.
- Excellent pull request review skills and attention to detail.
- Experience with streaming platforms (real-time data at massive scale, like Confluent Kafka).
- Working experience in AWS services like EC2, ECS, RDS, S3, etc.
- Understanding of DevOps as well as experience with CI/CD pipelines.
- Industry experience in the retail domain is a plus.
- Exposure to Agile methodology and project tools: Jira, Confluence, SharePoint.
- Working knowledge of Docker containers/Kubernetes.
- Excellent team player, with the ability to work independently and as part of a team.
- Experience in mentoring junior developers and providing technical leadership.
- Familiarity with monitoring and reporting tools (Prometheus, Grafana, PagerDuty, etc.).
- Ability to learn, understand, and work quickly with new and emerging technologies, methodologies, and solutions in the cloud/IT technology space.
- Knowledge of a front-end framework such as React or Angular, and other programming languages like JavaScript/TypeScript or Python, is a plus.

Posted 2 weeks ago

Apply

10.0 - 15.0 years

12 - 17 Lacs

Hyderabad

Work from Office

Source: Naukri

About the Role: Grade Level (for internal use): 11

The Team: We are looking for a highly motivated, enthusiastic, and skilled engineering lead for Commodity Insights. We strive to deliver solutions that are sector-specific, data-rich, and hyper-targeted for evolving business needs. Our software development leaders are involved in the full product life cycle, from design through release. You would be joining a strong, innovative team working on the content management platforms which support a large revenue stream for S&P Commodity Insights. Working very closely with the Product Owner and Development Manager, teams are responsible for developing user enhancements and maintaining good technical hygiene.

The successful candidate will assist in the design, development, release, and support of content platforms. Skills required include ReactJS, Spring Boot, RESTful microservices, AWS services (S3, ECS, Fargate, Lambda, etc.), CSS, HTML, AJAX, JSON, XML, and SQL (PostgreSQL/Oracle). The candidate should be aware of GenAI/LLM models such as OpenAI and Claude, should be enthusiastic about prompt building for GenAI and business-related prompts, and should be able to develop and optimize prompts for AI models to improve accuracy and relevance (a brief illustrative sketch appears at the end of this posting). The candidate must be able to work well with a distributed team, demonstrate an ability to articulate technical solutions for business requirements, have experience with content management/packaging solutions, and embrace a collaborative approach for the implementation of solutions.

Responsibilities:
- Lead and mentor a team through all phases of the software development lifecycle, adhering to agile methodologies (analyze, design, develop, test, debug, and deploy). Ensure high-quality deliverables and foster a collaborative environment.
- Be proficient with developer tools supporting the CI/CD process, including configuring and executing automated pipelines to build and deploy software components.
- Actively contribute to team planning and ceremonies, and commit to team agreements and goals.
- Ensure code quality and security by understanding vulnerability patterns, running code scans, and remediating issues.
- Mentor the junior developers; make sure that code review tasks on all user stories are added and completed on time.
- Perform reviews and integration testing to assure the quality of project development efforts.
- Design database schemas, conceptual data models, UI workflows, and application architectures that fit into the enterprise architecture.
- Support the user base, assisting with tracking down issues and analyzing feedback to identify product improvements.
- Understand and commit to the culture of S&P Global: the vision, purpose, and values of the organization.

Basic Qualifications:
- 10+ years of experience in an agile team development role, delivering software solutions using Scrum
- Java, J2EE, JavaScript, CSS/HTML, AJAX
- ReactJS, Spring Boot, microservices, RESTful services, OAuth
- XML, JSON, data transformation
- SQL and NoSQL databases (Oracle, PostgreSQL)
- Working knowledge of Amazon Web Services (Lambda, Fargate, ECS, S3, etc.)
- Experience with GenAI/LLM models such as OpenAI and Claude is preferred
- Experience with agile workflow tools (e.g., VSTS, JIRA)
- Experience with source code management tools (e.g., git), build management tools (e.g., Maven), and continuous integration/delivery processes and tools (e.g., Jenkins, Ansible)
- Self-starter able to work to achieve objectives with minimum direction
- Comfortable working independently as well as in a team
- Excellent verbal and written communication skills

Preferred Qualifications:
- Analysis of business information patterns, data analysis, and data modeling
- Working with user experience designers to deliver end-user-focused benefits realization
- Familiarity with containerization (Docker, Kubernetes)
- Messaging/queuing solutions (Kafka, etc.)
- Familiarity with application security development/operations best practices (including static/dynamic code analysis tools)

About S&P Global Commodity Insights: At S&P Global Commodity Insights, our complete view of global energy and commodities markets enables our customers to make decisions with conviction and create long-term, sustainable value. We're a trusted connector that brings together thought leaders, market participants, governments, and regulators to co-create solutions that lead to progress. Vital to navigating the energy transition, S&P Global Commodity Insights' coverage includes oil and gas, power, chemicals, metals, agriculture, and shipping. S&P Global Commodity Insights is a division of S&P Global (NYSE: SPGI). S&P Global is the world's foremost provider of credit ratings, benchmarks, analytics, and workflow solutions in the global capital, commodity, and automotive markets. With every one of our offerings, we help many of the world's leading organizations navigate the economic landscape so they can plan for tomorrow, today. For more information, visit .

Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments, and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

We take care of you, so you can take care of business. We care about our people; that's why we provide everything you and your career need to thrive at S&P Global. Health & Wellness: health care coverage designed for the mind and body. Family-Friendly Perks: it's not just about you; S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.
S&P Global is an equal opportunity employer, and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law poster describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision.
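As an illustration of the prompt-development work mentioned in this posting, here is a minimal sketch against the OpenAI Python SDK; the model name, prompt wording, and helper function are assumptions, and the listing names OpenAI and Claude only as model families.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_commodity_note(note: str) -> str:
    # A business-oriented prompt template of the kind the role describes
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[
            {"role": "system",
             "content": "Summarize commodity-market notes in two sentences."},
            {"role": "user", "content": note},
        ],
    )
    return response.choices[0].message.content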

Posted 2 weeks ago

Apply

4.0 - 7.0 years

5 - 16 Lacs

Hyderabad, Bengaluru

Work from Office

Source: Naukri

Roles and Responsibilities:
- Design, develop, test, deploy, and maintain Snowflake data warehouses for clients.
- Collaborate with cross-functional teams to gather requirements and deliver high-quality solutions.
- Develop ETL processes using Python scripts to extract data from various sources and load it into Snowflake tables (see the sketch below).
- Troubleshoot issues related to Snowflake performance tuning, query optimization, and data quality.

Job Requirements:
- 4-7 years of experience in developing large-scale data warehouses on AWS using Snowflake.
- Strong understanding of lambda expressions in the Snowflake SQL language.
- Experience with the Python programming language for ETL development.
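As referenced above, a minimal sketch of a Python-to-Snowflake load using the snowflake-connector-python package; the connection parameters, table, and rows are hypothetical.

import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",
    user="etl_user",
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)
rows = [("2024-01-01", 42.5), ("2024-01-02", 43.1)]
cur = conn.cursor()
try:
    # Bulk-insert extracted rows into a Snowflake table
    cur.executemany(
        "INSERT INTO daily_metrics (metric_date, value) VALUES (%s, %s)", rows)
finally:
    cur.close()
    conn.close()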

Posted 2 weeks ago

Apply

9.0 - 14.0 years

15 - 30 Lacs

Bengaluru

Work from Office

Source: Naukri

Qualifications/Skill Sets:
- Experience: 8+ years of experience in software engineering, with at least 3+ years at the Staff Engineer or Technical Lead level.
- Architecture expertise: proven track record of designing and building large-scale, multi-tenant SaaS applications on cloud platforms (e.g., AWS, Azure, GCP).
- Tech stack: expertise in modern backend languages (e.g., Java, Python, Go, Node.js), frontend frameworks (e.g., React, Angular), and database systems (e.g., PostgreSQL, MySQL, NoSQL).
- Cloud & infrastructure: strong knowledge of containerization (Docker, Kubernetes), serverless architectures, CI/CD pipelines, and infrastructure-as-code (e.g., Terraform, CloudFormation); end-to-end development and deployment experience in cloud applications.
- Distributed systems: deep understanding of event-driven architecture, message queues (e.g., Kafka, RabbitMQ), and microservices.
- Security: strong focus on secure coding practices and familiarity with identity management (OAuth2, SAML) and data encryption.
- Communication: excellent verbal and written communication skills with the ability to present complex technical ideas to stakeholders.
- Problem solving: strong analytical mindset and a proactive approach to identifying and solving system bottlenecks.

Posted 2 weeks ago

Apply

4.0 - 6.0 years

6 - 8 Lacs

Hyderabad

Work from Office

Source: Naukri

What you will do: In this vital role you will be responsible for designing, building, and maintaining scalable, secure, and reliable AWS cloud infrastructure. This is a hands-on engineering role requiring deep expertise in Infrastructure as Code (IaC), automation, cloud networking, and security. The ideal candidate should have strong AWS knowledge and be capable of writing and maintaining Terraform, CloudFormation, and CI/CD pipelines to streamline cloud deployments. Please note, this is an onsite role based in Hyderabad.

Roles & Responsibilities:

AWS Infrastructure Design & Implementation:
- Architect, implement, and manage highly available AWS cloud environments.
- Design VPCs, subnets, security groups, and IAM policies to enforce security standards.
- Optimize AWS costs using reserved instances, savings plans, and auto-scaling.

Infrastructure as Code (IaC) & Automation:
- Develop, maintain, and enhance Terraform and CloudFormation templates for cloud provisioning.
- Automate deployment, scaling, and monitoring using AWS-native tools and scripting.
- Implement and manage CI/CD pipelines for infrastructure and application deployments.

Cloud Security & Compliance:
- Enforce standard processes in IAM, encryption, and network security.
- Ensure compliance with SOC 2, ISO 27001, and NIST standards.
- Implement AWS Security Hub, GuardDuty, and WAF for threat detection and response.

Monitoring & Performance Optimization:
- Set up AWS CloudWatch, Prometheus, Grafana, and logging solutions for proactive monitoring.
- Implement autoscaling, load balancing, and caching strategies for performance optimization.
- Solve cloud infrastructure issues and conduct root cause analysis.

Collaboration & DevOps Practices:
- Work closely with software engineers, SREs, and DevOps teams to support deployments.
- Maintain GitOps standard processes for cloud infrastructure versioning.
- Support on-call rotation for high-priority cloud incidents.

What we expect of you: We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Master's degree and 4 to 6 years of experience in computer science, IT, or a related field with hands-on cloud experience; OR
- Bachelor's degree and 6 to 8 years of experience in computer science, IT, or a related field with hands-on cloud experience; OR
- Diploma and 10 to 12 years of experience in computer science, IT, or a related field with hands-on cloud experience

Must-Have Skills:
- Deep hands-on experience with AWS (EC2, S3, RDS, Lambda, VPC, IAM, ECS/EKS, API Gateway, etc.).
- Expertise in Terraform and CloudFormation for AWS infrastructure automation.
- Strong knowledge of AWS networking (VPC, Direct Connect, Transit Gateway, VPN, Route 53).
- Experience with Linux administration, scripting (Python, Bash), and CI/CD tools (Jenkins, GitHub Actions, CodePipeline, etc.).
- Strong troubleshooting and debugging skills in cloud networking, storage, and security.

Preferred Qualifications (good-to-have skills):
- Experience with Kubernetes (EKS) and service mesh architectures.
- Knowledge of AWS Lambda and event-driven architectures.
- Familiarity with AWS CDK, Ansible, or Packer for cloud automation.
- Exposure to multi-cloud environments (Azure, GCP).
- Familiarity with HPC, DGX Cloud.

Professional Certifications (preferred):
- AWS Certified Solutions Architect (Associate or Professional)
- AWS Certified DevOps Engineer (Professional)
- Terraform Associate Certification

Soft Skills:
- Strong analytical and problem-solving skills.
- Ability to work effectively with global, virtual teams.
- Effective communication and collaboration with cross-functional teams.
- Ability to work in a fast-paced, cloud-first environment.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

20 - 35 Lacs

Noida, Pune, Gurugram

Work from Office

Source: Naukri

We are hiring an AWS Developer with banking or financial domain experience.

Location: Noida, Gurugram, and Pune. Shift timing: 1:00 PM-10:00 PM.

Job Description. Must-have skills:
- Domain: financial or banking
- Expertise in AWS CDK or Terraform, AWS services (Lambda, ECS, S3), and PostgreSQL DB management
- Strong understanding of serverless architecture and event-driven design (SNS, SQS)

Nice to have: knowledge of multi-account AWS setups and security best practices (IAM, VPC, etc.), and experience with cost optimization strategies in AWS.

Interested candidates, please share your updated resume to anu.c@irissoftware.com with the details below:
- Current company
- Current CTC
- Expected CTC
- Relevant experience in AWS
- Any experience in CDK or Terraform
- Notice period (if serving, please share your LWD)
- Current location
- Which of Noida, Gurugram, or Pune you are open to
- Whether you are open to the 01:00 PM-10:00 PM shift

Regards, Anu

Posted 2 weeks ago

Apply

8.0 - 13.0 years

20 - 35 Lacs

Hyderabad

Remote

Source: Naukri

Databricks Administrator (Azure/AWS) | Remote | 6+ Years

Job Description: We are seeking an experienced Databricks Administrator with 6+ years of expertise in managing and optimizing Databricks environments. The ideal candidate should have hands-on experience with Azure/AWS Databricks, cluster management, security configurations, and performance optimization. This role requires close collaboration with data engineering and analytics teams to ensure smooth operations and scalability.

Key Responsibilities:
- Deploy, configure, and manage Databricks workspaces, clusters, and jobs (see the sketch below).
- Monitor and optimize Databricks performance, auto-scaling, and cost management.
- Implement security best practices, including role-based access control (RBAC) and encryption.
- Manage Databricks integration with cloud storage (Azure Data Lake, S3, etc.) and other data services.
- Automate infrastructure provisioning and management using Terraform, ARM templates, or CloudFormation.
- Troubleshoot Databricks runtime issues, job failures, and performance bottlenecks.
- Support CI/CD pipelines for Databricks workloads and notebooks.
- Collaborate with data engineering teams to enhance ETL pipelines and data processing workflows.
- Ensure compliance with data governance policies and regulatory requirements.
- Maintain and upgrade Databricks versions and libraries as needed.

Required Skills & Qualifications:
- 6+ years of experience as a Databricks Administrator or in a similar role.
- Strong knowledge of Azure/AWS Databricks and cloud computing platforms.
- Hands-on experience with Databricks clusters, notebooks, libraries, and job scheduling.
- Expertise in Spark optimization, data caching, and performance tuning.
- Proficiency in Python, Scala, or SQL for data processing.
- Experience with Terraform, ARM templates, or CloudFormation for infrastructure automation.
- Familiarity with Git, DevOps, and CI/CD pipelines.
- Strong problem-solving skills and the ability to troubleshoot Databricks-related issues.
- Excellent communication and stakeholder management skills.

Preferred Qualifications:
- Databricks certifications (e.g., Databricks Certified Associate/Professional).
- Experience in Delta Lake, Unity Catalog, and MLflow.
- Knowledge of Kubernetes, Docker, and containerized workloads.
- Experience with big data ecosystems (Hadoop, Apache Airflow, Kafka, etc.).

Email: Hrushikesh.akkala@numerictech.com
Phone/WhatsApp: 9700111702
For immediate response and further opportunities, connect with me on LinkedIn: https://www.linkedin.com/in/hrushikesh-a-74a32126a/
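As referenced in the responsibilities, a minimal cluster-inventory sketch using the databricks-sdk for Python; authentication via environment variables and the restart condition are assumptions, not the employer's actual runbook.

from databricks.sdk import WorkspaceClient
from databricks.sdk.service.compute import State

w = WorkspaceClient()  # auth from DATABRICKS_HOST / DATABRICKS_TOKEN
for cluster in w.clusters.list():
    print(f"{cluster.cluster_name}: {cluster.state}")
    if cluster.state == State.ERROR:
        # Illustrative remediation only; a real policy would be more careful
        w.clusters.restart(cluster_id=cluster.cluster_id)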

Posted 2 weeks ago

Apply

9.0 - 12.0 years

35 - 40 Lacs

Bengaluru

Work from Office

Source: Naukri

We are seeking an experienced AWS Architect with a strong background in designing and implementing cloud-native data platforms. The ideal candidate should possess deep expertise in AWS services such as S3, Redshift, Aurora, Glue, and Lambda, along with hands-on experience in data engineering and orchestration tools. Strong communication and stakeholder management skills are essential for this role.

Key Responsibilities:
- Design and implement end-to-end data platforms leveraging AWS services.
- Lead architecture discussions and ensure scalability, reliability, and cost-effectiveness.
- Develop and optimize solutions using Redshift, including stored procedures, federated queries, and the Redshift Data API (see the sketch below).
- Utilize AWS Glue and Lambda functions to build ETL/ELT pipelines.
- Write efficient Python code and data frame transformations, along with unit testing.
- Manage orchestration tools such as AWS Step Functions and Airflow.
- Perform Redshift performance tuning to ensure optimal query execution.
- Collaborate with stakeholders to understand requirements and communicate technical solutions clearly.

Required Skills & Qualifications:
- Minimum 9 years of IT experience with proven AWS expertise.
- Hands-on experience with AWS services: S3, Redshift, Aurora, Glue, and Lambda.
- Mandatory experience working with AWS Redshift, including stored procedures and performance tuning.
- Experience building end-to-end data platforms on AWS.
- Proficiency in Python, especially working with data frames and writing testable, production-grade code.
- Familiarity with orchestration tools like Airflow or AWS Step Functions.
- Excellent problem-solving skills and a collaborative mindset.
- Strong verbal and written communication and stakeholder management abilities.

Nice to Have:
- Experience with CI/CD for data pipelines.
- Knowledge of AWS Lake Formation and data governance practices.
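As referenced above, a minimal sketch of the asynchronous Redshift Data API flow with boto3; the cluster, database, user, and SQL are hypothetical.

import time
import boto3

client = boto3.client("redshift-data")
resp = client.execute_statement(
    ClusterIdentifier="example-cluster",
    Database="analytics",
    DbUser="etl_user",
    Sql="SELECT trade_date, SUM(amount) FROM orders GROUP BY trade_date;",
)

# The Data API is asynchronous: poll until the statement finishes
while True:
    desc = client.describe_statement(Id=resp["Id"])
    if desc["Status"] in ("FINISHED", "FAILED", "ABORTED"):
        break
    time.sleep(1)

if desc["Status"] == "FINISHED":
    rows = client.get_statement_result(Id=resp["Id"])["Records"]
    print(f"fetched {len(rows)} rows")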

Posted 2 weeks ago

Apply

4.0 - 8.0 years

6 - 10 Lacs

Hyderabad

Remote

Source: Naukri

As a Lead Engineer, you will play a critical role in shaping the technical direction of our projects. You will be responsible for leading a team of developers undertaking Creditsafe's digital transformation to our cloud infrastructure on AWS. Your expertise in data engineering, Python, and AWS will be crucial in building and maintaining high-performance, scalable, and reliable systems.

Key Responsibilities:
- Technical leadership: lead and mentor a team of engineers, providing guidance and support to ensure high-quality code and efficient project delivery.
- Software design and development: collaborate with cross-functional teams to design and develop data-centric applications, microservices, and APIs that meet project requirements.
- AWS infrastructure: design, configure, and manage cloud infrastructure on AWS, including services like EC2, S3, Lambda, and RDS.
- Performance optimization: identify and resolve performance bottlenecks; optimize code and AWS resources to ensure scalability and reliability.
- Code review: conduct code reviews to ensure code quality, consistency, and adherence to best practices.
- Security: implement and maintain security best practices within the codebase and cloud infrastructure.
- Documentation: create and maintain technical documentation to facilitate knowledge sharing and onboarding of team members.
- Collaboration: collaborate with product managers, architects, and other stakeholders to deliver high-impact software solutions.
- Research and innovation: stay up to date with the latest Python, data engineering, and AWS technologies, and propose innovative solutions that can enhance our systems.
- Troubleshooting: investigate and resolve technical issues and outages as they arise.

Qualifications:
- Bachelor's or higher degree in Computer Science, Software Engineering, or a related field.
- Proven experience as a Data Engineer with a strong focus on AWS services.
- Solid experience in leading technical teams and project management.
- Proficiency in Python, including deep knowledge of data engineering implementation patterns.
- Strong expertise in AWS services and infrastructure setup.
- Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes) is a plus.
- Excellent problem-solving skills and the ability to troubleshoot complex technical issues.
- Strong communication and teamwork skills.
- A passion for staying updated with the latest industry trends and technologies.

Posted 3 weeks ago

Apply