
184 AWS S3 Jobs - Page 8

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

6.0 - 11.0 years

17 - 30 Lacs

Kolkata, Hyderabad/Secunderabad, Bangalore/Bengaluru

Hybrid

Inviting applications for the role of Lead Consultant - Snowflake Data Engineer (Snowflake + Python + Cloud)! In this role, the Snowflake Data Engineer is responsible for providing technical direction and leading a group of one or more developers to address a goal.

Job Description:
• Experience in the IT industry, with working experience building productionized data ingestion and processing pipelines in Snowflake
• Strong understanding of Snowflake architecture; fully versed in data warehousing concepts
• Expertise in and excellent understanding of Snowflake features and of integrating Snowflake with other data processing systems
• Able to create data pipelines for ETL/ELT
• Excellent presentation and communication skills, both written and verbal
• Ability to problem-solve and architect in an environment with unclear requirements
• Able to create high-level and low-level design documents based on requirements
• Hands-on experience configuring, troubleshooting, testing, and managing data platforms, on premises or in the cloud
• Awareness of data visualisation tools and methodologies
• Works independently on business problems and generates meaningful insights
• Experience/knowledge of Snowpark, Streamlit, or GenAI is good to have but not mandatory
• Experience implementing Snowflake best practices
• Snowflake SnowPro Core certification is an added advantage

Roles and Responsibilities:
• Requirement gathering, creating design documents, providing solutions to customers, working with offshore teams, etc.
• Writing SQL queries against Snowflake and developing scripts to extract, load, and transform data
• Hands-on experience with Snowflake utilities such as SnowSQL, bulk copy, Snowpipe, Tasks, Streams, Time Travel, Cloning, the Optimizer, Metadata Manager, data sharing, stored procedures and UDFs, Snowsight, and Streamlit
• Experience with the Snowflake cloud data warehouse and AWS S3 buckets or Azure Blob Storage containers for integrating data from multiple source systems
• Experience with AWS services (S3, Glue, Lambda) or Azure services (Blob Storage, ADLS Gen2, ADF)
• Good experience in Python/PySpark integration with Snowflake and the cloud (AWS/Azure), with the ability to leverage cloud services for data processing and storage
• Proficiency in the Python programming language, including knowledge of data types, variables, functions, loops, conditionals, and other Python-specific concepts
• Knowledge of ETL (Extract, Transform, Load) processes and tools, and the ability to design and develop efficient ETL jobs using Python or PySpark
• Experience with Snowflake RBAC and data security
• Good experience implementing CDC or SCD Type 2
• Good experience implementing Snowflake best practices
• In-depth understanding of data warehouse and ETL concepts and data modelling
• Experience in requirement gathering, analysis, design, development, and deployment
• Experience building data ingestion pipelines; able to optimize and tune data pipelines for performance and scalability
• Able to communicate with clients and lead a team
• Proficiency with Airflow or other workflow management tools for scheduling and managing ETL jobs
• Good to have: experience deploying via CI/CD tools and with repositories such as Azure Repos, GitHub, etc.

Qualifications we seek in you!
Minimum qualifications: B.E./Master's in Computer Science, Information Technology, or Computer Engineering, or an equivalent degree, with good IT experience and relevant experience as a Snowflake Data Engineer.
Skill Matrix: Snowflake, Python/PySpark, AWS/Azure, ETL concepts, and Data Warehousing concepts
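
As a concrete illustration of the Snowflake-plus-S3 ingestion work this role describes, here is a minimal sketch using the snowflake-connector-python package to stage files from S3 and bulk-load them with COPY INTO. The account, credentials, stage, bucket, and table names are placeholders, and the storage integration is assumed to exist already.

```python
# Minimal sketch: load CSV files from an S3 stage into a Snowflake table.
# Assumes `pip install snowflake-connector-python`, an existing storage
# integration named S3_INT, and placeholder credentials/object names.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",   # placeholder
    user="etl_user",        # placeholder
    password="***",         # use a secrets manager in practice
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="RAW",
)

try:
    cur = conn.cursor()
    # External stage over the S3 landing bucket (names are illustrative).
    cur.execute("""
        CREATE STAGE IF NOT EXISTS raw_orders_stage
        URL = 's3://my-landing-bucket/orders/'
        STORAGE_INTEGRATION = S3_INT
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
    """)
    # Bulk-load any files not already loaded; Snowflake tracks load history.
    cur.execute("COPY INTO RAW.ORDERS FROM @raw_orders_stage")
    print(cur.fetchall())   # per-file load results
finally:
    conn.close()
```

In production the same COPY would more likely run continuously via Snowpipe or on a schedule via Tasks, as the utilities list above suggests.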

Posted Date not available

Apply

4.0 - 6.0 years

10 - 15 Lacs

Pune, Chennai, Bengaluru

Hybrid

Hiring for Big Data Lead
Experience: 4-6 yrs
Work location: Bangalore/Pune/Chennai
Work mode: Hybrid
Notice period: Immediate - 30 days
Primary skills: AWS S3, DMS, Glue, Lambda, Redshift, Python, SQL, Git, CI/CD, Agile delivery
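
Given the S3 + Lambda + Glue portion of this stack, a common pattern is a Lambda function that starts a Glue job whenever a file lands in S3. A minimal sketch, assuming an S3 event notification is wired to the function and a Glue job named daily_etl_job (a placeholder) already exists:

```python
# Minimal sketch: Lambda handler that starts a Glue job for each new S3 object.
# Assumes an S3 event notification invokes this function and that a Glue job
# named "daily_etl_job" (placeholder) already exists.
import boto3

glue = boto3.client("glue")

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Hand the new object's location to the job as a run argument.
        response = glue.start_job_run(
            JobName="daily_etl_job",
            Arguments={"--input_path": f"s3://{bucket}/{key}"},
        )
        print(f"Started run {response['JobRunId']} for s3://{bucket}/{key}")
```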

Posted Date not available

Apply

6.0 - 11.0 years

16 - 31 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

ETL Developer. Data modeling tools like Erwin; Snowflake, Oracle, Amazon RDS (Aurora, Postgres), DB2, SQL Server, and Cassandra; Apache Sqoop, AWS S3, Hue, AWS CLI, Amazon EMR, Amazon MSK, Amazon SageMaker, Apache Spark; Autosys, SFTP, Airflow.
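
For the Spark-on-EMR side of this stack, a minimal PySpark sketch of the read-transform-write pattern such a role involves; the bucket paths and column names are illustrative only, and on EMR the S3 connector and IAM role come from the cluster configuration:

```python
# Minimal sketch: PySpark ETL reading raw CSVs from S3 and writing Parquet.
# Paths and columns are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

raw = (spark.read
       .option("header", True)
       .csv("s3://my-raw-bucket/orders/"))        # placeholder path

cleaned = (raw
           .dropDuplicates(["order_id"])          # illustrative column
           .withColumn("order_date", F.to_date("order_date"))
           .filter(F.col("amount").cast("double") > 0))

(cleaned.write
 .mode("overwrite")
 .partitionBy("order_date")
 .parquet("s3://my-curated-bucket/orders/"))      # placeholder path
```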

Posted Date not available

Apply

8.0 - 13.0 years

15 - 30 Lacs

Pune, Chennai, Bengaluru

Hybrid

Role & responsibilities

Key Responsibilities:
• Design end-to-end data lakehouse architecture on AWS
• Create data ingestion, transformation, and modeling strategies
• Guide metadata-driven framework development
• Ensure scalability, performance tuning, and security design
• Collaborate with Security, Infra, and QA teams for architecture compliance

Required Skills & Tech Stack:
• AWS S3, Glue, Glue Catalog, DMS, Redshift, Lambda, DynamoDB
• Lakehouse architecture patterns
• Data governance & lineage (DataZone, Glue Crawlers)
• Python, PySpark, Terraform (preferred)

Certifications & Other Requirements:
• AWS Certified Solutions Architect (Professional preferred)
• 10+ years in data engineering/architecture
• Strong documentation and stakeholder communication
• Experience with security frameworks (SAST, DAST, VAPT) and regulatory requirements
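
On a Glue-Catalog-centred lakehouse like the one described here, ETL jobs typically read and write through the catalog rather than raw paths so that schema, governance, and lineage stay centralized. A minimal sketch of a Glue ETL script; note the awsglue library exists only inside the Glue job runtime, and the database, table, and path names are placeholders:

```python
# Minimal sketch of a Glue ETL script: read a catalogued raw table and
# write curated Parquet back to the lake. Runs only inside the AWS Glue
# runtime; database/table/path names are placeholders.
from pyspark.context import SparkContext
from awsglue.context import GlueContext

glue_context = GlueContext(SparkContext.getOrCreate())

# Read via the Glue Data Catalog so the schema is resolved centrally.
orders = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders"
)

# Write curated output; a crawler or catalog update exposes it downstream.
glue_context.write_dynamic_frame.from_options(
    frame=orders,
    connection_type="s3",
    connection_options={"path": "s3://my-curated-bucket/orders/"},
    format="parquet",
)
```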

Posted Date not available

Apply

4.0 - 9.0 years

17 - 25 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

ETL Developer. Snowflake, Oracle, Amazon RDS (Aurora, Postgres), DB2, SQL Server, and Cassandra; Apache Sqoop, AWS S3, Hue, AWS CLI, Amazon EMR, SageMaker, Apache Spark; Erwin; ETL (Extract, Transform, Load).
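
Sqoop-style relational ingestion of the RDS sources listed here can also run through Spark's JDBC reader. A minimal sketch pulling an RDS Postgres table into S3; hostnames, credentials, and table names are placeholders, and the Postgres JDBC driver jar must be on the Spark classpath:

```python
# Minimal sketch: Sqoop-style ingest of an RDS Postgres table to S3 Parquet
# using Spark's JDBC source. Connection details are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("rds_ingest").getOrCreate()

customers = (spark.read.format("jdbc")
             .option("url", "jdbc:postgresql://my-rds-host:5432/prod")  # placeholder
             .option("dbtable", "public.customers")
             .option("user", "ingest_user")   # placeholder
             .option("password", "***")       # use a secrets manager
             .option("driver", "org.postgresql.Driver")
             # Split the read into parallel partitions, as Sqoop would.
             .option("partitionColumn", "customer_id")
             .option("lowerBound", "1")
             .option("upperBound", "1000000")
             .option("numPartitions", "8")
             .load())

customers.write.mode("overwrite").parquet("s3://my-raw-bucket/customers/")
```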

Posted Date not available

Apply

3.0 - 5.0 years

10 - 12 Lacs

Bengaluru

Work from Office

Job Description: We are looking for a self-motivated, highly skilled, and experienced AI/ML Engineer to join our growing team. You will be responsible for developing and deploying cutting-edge machine learning models to solve real-world problems. Your responsibilities will include data preparation, model training, evaluation, and deployment, as well as collaborating with data scientists and software engineers to ensure our AI solutions are effective and scalable. As a Machine Learning Engineer, you will develop and optimize pipelines for both inference and training. Expertise in Amazon SageMaker is crucial: you will build, train, and deploy machine learning and foundation models at scale on its managed infrastructure.

Experience Level: ~4 years

Key Responsibilities:
• Utilize AI solutions and tools provided by AWS to build segmentation models based on customer behavior and usage patterns
• Automatically generate periodic reports
• Develop functionality for defining reusable segmentation criteria tailored to marketing objectives

Required Skill Set:
• Hands-on experience with AWS S3, Lambda, Glue, SageMaker, Athena, QuickSight, etc.
• Python programming and a conceptual understanding of ML algorithms and deep learning techniques; prior experience with AWS is required
• Understanding of serverless architectures and event-driven processing flows
• Prior experience working with AI solutions and tools provided by AWS is a must

Qualifications:
• Bachelor's or Master's degree in Computer Science or a related field
• Prior industry experience with machine learning frameworks or projects is a must
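
One plausible shape for the customer-segmentation responsibility above is SageMaker's built-in k-means algorithm. A minimal sketch using the SageMaker Python SDK; the IAM role ARN, S3 output path, and the toy feature matrix are all placeholders:

```python
# Minimal sketch: customer segmentation with SageMaker's built-in k-means.
# Role ARN, bucket, and feature data are placeholders; deploying an
# endpoint incurs AWS charges until it is deleted.
import numpy as np
from sagemaker import KMeans

role = "arn:aws:iam::123456789012:role/SageMakerRole"  # placeholder

# Toy feature matrix: rows = customers, cols = usage metrics (float32 required).
features = np.random.rand(1000, 4).astype("float32")

kmeans = KMeans(
    role=role,
    instance_count=1,
    instance_type="ml.m5.large",
    k=5,                                      # number of customer segments
    output_path="s3://my-ml-bucket/kmeans/",  # placeholder
)

# Train on the record set; SageMaker provisions the training infrastructure.
kmeans.fit(kmeans.record_set(features))

# Deploy an endpoint and assign a sample of customers to segments.
predictor = kmeans.deploy(initial_instance_count=1, instance_type="ml.m5.large")
segments = predictor.predict(features[:10])
```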

Posted Date not available

Apply

5.0 - 8.0 years

20 - 30 Lacs

Bengaluru

Remote

We are a forward-thinking team focused on redefining customer support through intelligent automation. As part of our Contact Center Automation team, you will help build the next generation of voice-based customer service powered by conversational AI and backend engineering.

Job Location: Remote
Experience Required: 5+ Years

Key Responsibilities:
• Design and develop scalable backend solutions using Node.js and TypeScript
• Build and maintain REST APIs that integrate with our voice bot and conversational AI platform
• Develop conversational flows in Google Dialogflow CX based on design requirements
• Collaborate with cross-functional teams in an Agile/Scrum environment
• Conduct code reviews and write unit tests using Jest to ensure high-quality software
• Debug, troubleshoot, and resolve issues across the application stack
• Take ownership of deliverables from inception to deployment and maintenance
• Work closely with product and business teams to understand requirements and deliver impactful solutions

Required Skills:
• Minimum 5 years of backend development experience
• Strong proficiency in Node.js and TypeScript
• Experience in building and consuming RESTful APIs
• Solid understanding of Object-Oriented Programming (OOP) and Functional Programming principles
• Strong unit testing experience using Jest or similar frameworks
• Familiarity with Agile/Scrum methodologies
• Excellent written and verbal communication skills in English
• Ability to work remotely with a stable internet connection

Preferred/Bonus Skills:
• Experience with Google Cloud Platform (GCP) services
• Knowledge of Google Dialogflow CX or other NLP platforms
• Exposure to Voice Bot / Contact Center Automation solutions
• Frontend development experience (React or Angular)
• Contributions to open-source projects

What We Offer:
• Remote-friendly work culture
• Opportunity to work on cutting-edge AI-driven products
• Multicultural and collaborative team environment
• Learning and growth opportunities in a fast-paced setup

How to Apply: If you're passionate about backend development and conversational AI, apply now with your updated resume. Email: sapna@orangesiri.com
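
This role is Node.js/TypeScript-centric, but to illustrate the Dialogflow CX integration pattern in the same language as the other sketches on this page, here is a minimal Python detect-intent call. Project, location, and agent IDs are placeholders, and a production voice bot would use the streaming/audio APIs rather than a one-shot text query:

```python
# Minimal sketch: send one user utterance to a Dialogflow CX agent and print
# the bot's replies. Requires `pip install google-cloud-dialogflow-cx` and
# application-default credentials; all IDs below are placeholders.
import uuid
from google.cloud import dialogflowcx_v3

PROJECT, LOCATION, AGENT = "my-project", "global", "my-agent-id"  # placeholders

client = dialogflowcx_v3.SessionsClient()  # non-global agents need a regional endpoint
session = client.session_path(PROJECT, LOCATION, AGENT, str(uuid.uuid4()))

request = dialogflowcx_v3.DetectIntentRequest(
    session=session,
    query_input=dialogflowcx_v3.QueryInput(
        text=dialogflowcx_v3.TextInput(text="I want to check my order status"),
        language_code="en-US",
    ),
)

response = client.detect_intent(request=request)
for message in response.query_result.response_messages:
    if message.text:
        print(" ".join(message.text.text))
```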

Posted Date not available

Apply

6.0 - 11.0 years

20 - 35 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Job Title: Data Engineer
Location: Chennai or Hyderabad
Employment Type: Full-Time / Contract

Job Description: We are seeking a highly skilled and experienced Data Engineer with strong expertise in Big Data technologies, Python, and AWS. The ideal candidate will be responsible for designing, building, and maintaining robust data pipelines, ensuring high performance, availability, and scalability of data solutions in a cloud-based environment.

Key Responsibilities:
• Design and implement scalable data pipelines using Python and Big Data frameworks
• Develop and optimize complex SQL queries for data processing and reporting
• Build and automate shell scripts for data workflows and orchestration
• Manage data storage and transfers using AWS S3 and other AWS services
• Collaborate with data scientists, analysts, and engineering teams for seamless data flow

Must-Have Skills:
• Big Data concepts
• Core Python programming; ability to write clean, efficient code
• SQL: advanced querying and optimization
• Shell scripting
• AWS S3: data storage and access management

Good-to-Have Skills:
• Event-driven architecture / AWS SQS
• Microservices and API development
• Kafka, Kubernetes, Argo Workflows
• Amazon Redshift, Amazon Aurora

Qualifications:
• Bachelor's degree in Computer Science, Engineering, or a related field
• 6-9 years of relevant experience in data engineering roles (candidates with 9+ years will be considered for senior/lead roles)
• Strong problem-solving and analytical skills
• Excellent communication and team collaboration abilities
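
For the "data storage and transfers using AWS S3" responsibility, a minimal boto3 sketch of the everyday operations involved; bucket and key names are placeholders, and credentials come from the standard AWS credential chain:

```python
# Minimal sketch: common S3 data-transfer operations with boto3.
# Bucket/key names are placeholders; credentials are resolved from the
# standard chain (environment, config file, or instance role).
import boto3

s3 = boto3.client("s3")
BUCKET = "my-data-bucket"  # placeholder

# Upload a local extract (multipart is handled automatically for large files).
s3.upload_file("daily_extract.csv", BUCKET, "raw/2024/daily_extract.csv")

# List everything under a prefix, paginating past the 1000-object page limit.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET, Prefix="raw/2024/"):
    for obj in page.get("Contents", []):
        print(obj["Key"], obj["Size"])

# Download a file for local processing.
s3.download_file(BUCKET, "raw/2024/daily_extract.csv", "/tmp/extract.csv")
```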

Posted Date not available

Apply

4.0 - 8.0 years

15 - 25 Lacs

Gurugram, Bengaluru

Work from Office

Role & responsibilities:
• Design and implement complex cloud-based solutions using AWS services (S3 buckets, Lambda, Bedrock, etc.)
• Design and optimize database schemas and queries, particularly with DynamoDB or any database
• Write, test, and maintain high-quality Java, API, and Python code for cloud-based applications
• Collaborate with cross-functional teams to identify and implement cloud-based solutions
• Ensure security, compliance, and best practices in cloud infrastructure
• Troubleshoot and resolve complex technical issues in cloud environments
• Mentor junior engineers and contribute to the team's technical growth
• Stay up to date with the latest cloud technologies and industry trends

Preferred candidate profile:
• Bachelor's degree in Computer Science, Engineering, or a related field
• 4-8 years of experience in cloud engineering, with a strong focus on AWS
• Extensive experience with Java, AWS, APIs, Python programming, and software development
• Strong knowledge of database systems, particularly DynamoDB or any database
• Hands-on experience with AWS services (S3 buckets, Lambda, Bedrock, etc.)
• Excellent problem-solving and analytical skills
• Strong communication and collaboration abilities
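
For the DynamoDB side of this profile, a minimal boto3 sketch of single-table reads and writes; the table name and key schema are placeholders, and the table is assumed to already exist with "pk"/"sk" keys:

```python
# Minimal sketch: basic DynamoDB access with boto3. Assumes an existing
# table named "customers" (placeholder) with partition key "pk" and
# sort key "sk"; credentials come from the standard AWS chain.
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("customers")  # placeholder name

# Write one item; attributes beyond the key schema are free-form.
table.put_item(Item={
    "pk": "CUST#42",
    "sk": "PROFILE",
    "name": "Asha",
    "tier": "gold",
})

# Point read by full primary key.
item = table.get_item(Key={"pk": "CUST#42", "sk": "PROFILE"}).get("Item")
print(item)

# Query all items for one customer (single-table design pattern).
resp = table.query(KeyConditionExpression=Key("pk").eq("CUST#42"))
for it in resp["Items"]:
    print(it["sk"])
```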

Posted Date not available

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.
