
435 S3 Jobs - Page 17

Set up a Job Alert
JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

12 - 16 years

35 - 37 Lacs

Visakhapatnam

Work from Office

As an AWS Data Engineer at the organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics services, enabling efficient data processing and analytics. Key Responsibilities: - Highly experienced in developing ETL pipelines using AWS Glue and EMR with PySpark/Scala. - Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions. - Design scalable data models for analytics and reporting. - Implement data validation, quality, and governance practices. - Optimize Spark jobs for cost and performance efficiency. - Automate ETL workflows with AWS Step Functions and Lambda. - Collaborate with data scientists and analysts on data needs. - Maintain documentation for data architecture and pipelines. - Experience with open-source big data file formats such as Iceberg, Delta Lake, or Hudi. - Experience provisioning AWS data analytics resources with Terraform is desirable. Must-Have Skills: AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development. Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg.
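For illustration only (not part of the posting): a minimal sketch of the kind of Glue/PySpark ETL step described above, reading raw JSON from S3, deduplicating, and writing partitioned Parquet. The bucket paths, column names, and job name are hypothetical.

```python
# Minimal PySpark sketch of a Glue-style ETL step: read raw JSON from S3,
# apply a simple quality transformation, and write partitioned Parquet back.
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext
from pyspark.sql import functions as F

sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init("example-etl-job", {})                                    # hypothetical job name

raw = spark.read.json("s3://example-raw-bucket/events/")           # hypothetical source path
cleaned = (
    raw.dropDuplicates(["event_id"])                               # basic data-quality step
       .withColumn("event_date", F.to_date("event_timestamp"))     # derive partition column
)
(cleaned.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-curated-bucket/events/"))               # hypothetical target path

job.commit()
```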

Posted 2 months ago

Apply

12 - 16 years

35 - 37 Lacs

Thiruvananthapuram

Work from Office

As an AWS Data Engineer at the organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics services, enabling efficient data processing and analytics. Key Responsibilities: - Highly experienced in developing ETL pipelines using AWS Glue and EMR with PySpark/Scala. - Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions. - Design scalable data models for analytics and reporting. - Implement data validation, quality, and governance practices. - Optimize Spark jobs for cost and performance efficiency. - Automate ETL workflows with AWS Step Functions and Lambda. - Collaborate with data scientists and analysts on data needs. - Maintain documentation for data architecture and pipelines. - Experience with open-source big data file formats such as Iceberg, Delta Lake, or Hudi. - Experience provisioning AWS data analytics resources with Terraform is desirable. Must-Have Skills: AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development. Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg.

Posted 2 months ago

Apply

12 - 16 years

35 - 37 Lacs

Coimbatore

Work from Office

As an AWS Data Engineer at the organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics services, enabling efficient data processing and analytics. Key Responsibilities: - Highly experienced in developing ETL pipelines using AWS Glue and EMR with PySpark/Scala. - Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions. - Design scalable data models for analytics and reporting. - Implement data validation, quality, and governance practices. - Optimize Spark jobs for cost and performance efficiency. - Automate ETL workflows with AWS Step Functions and Lambda. - Collaborate with data scientists and analysts on data needs. - Maintain documentation for data architecture and pipelines. - Experience with open-source big data file formats such as Iceberg, Delta Lake, or Hudi. - Experience provisioning AWS data analytics resources with Terraform is desirable. Must-Have Skills: AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development. Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg.

Posted 2 months ago

Apply

12 - 16 years

35 - 37 Lacs

Hyderabad

Work from Office

As an AWS Data Engineer at the organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics services, enabling efficient data processing and analytics. Key Responsibilities: - Highly experienced in developing ETL pipelines using AWS Glue and EMR with PySpark/Scala. - Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions. - Design scalable data models for analytics and reporting. - Implement data validation, quality, and governance practices. - Optimize Spark jobs for cost and performance efficiency. - Automate ETL workflows with AWS Step Functions and Lambda. - Collaborate with data scientists and analysts on data needs. - Maintain documentation for data architecture and pipelines. - Experience with open-source big data file formats such as Iceberg, Delta Lake, or Hudi. - Experience provisioning AWS data analytics resources with Terraform is desirable. Must-Have Skills: AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development. Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg.

Posted 2 months ago

Apply

12 - 16 years

35 - 37 Lacs

Nagpur

Work from Office

As an AWS Data Engineer at the organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics services, enabling efficient data processing and analytics. Key Responsibilities: - Highly experienced in developing ETL pipelines using AWS Glue and EMR with PySpark/Scala. - Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions. - Design scalable data models for analytics and reporting. - Implement data validation, quality, and governance practices. - Optimize Spark jobs for cost and performance efficiency. - Automate ETL workflows with AWS Step Functions and Lambda. - Collaborate with data scientists and analysts on data needs. - Maintain documentation for data architecture and pipelines. - Experience with open-source big data file formats such as Iceberg, Delta Lake, or Hudi. - Experience provisioning AWS data analytics resources with Terraform is desirable. Must-Have Skills: AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development. Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg.

Posted 2 months ago

Apply

12 - 16 years

35 - 37 Lacs

Jaipur

Work from Office

As an AWS Data Engineer at the organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics services, enabling efficient data processing and analytics. Key Responsibilities: - Highly experienced in developing ETL pipelines using AWS Glue and EMR with PySpark/Scala. - Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions. - Design scalable data models for analytics and reporting. - Implement data validation, quality, and governance practices. - Optimize Spark jobs for cost and performance efficiency. - Automate ETL workflows with AWS Step Functions and Lambda. - Collaborate with data scientists and analysts on data needs. - Maintain documentation for data architecture and pipelines. - Experience with open-source big data file formats such as Iceberg, Delta Lake, or Hudi. - Experience provisioning AWS data analytics resources with Terraform is desirable. Must-Have Skills: AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development. Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg.

Posted 2 months ago

Apply

12 - 16 years

35 - 37 Lacs

Lucknow

Work from Office

As an AWS Data Engineer at the organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics services, enabling efficient data processing and analytics. Key Responsibilities: - Highly experienced in developing ETL pipelines using AWS Glue and EMR with PySpark/Scala. - Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions. - Design scalable data models for analytics and reporting. - Implement data validation, quality, and governance practices. - Optimize Spark jobs for cost and performance efficiency. - Automate ETL workflows with AWS Step Functions and Lambda. - Collaborate with data scientists and analysts on data needs. - Maintain documentation for data architecture and pipelines. - Experience with open-source big data file formats such as Iceberg, Delta Lake, or Hudi. - Experience provisioning AWS data analytics resources with Terraform is desirable. Must-Have Skills: AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development. Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg.

Posted 2 months ago

Apply

12 - 16 years

35 - 37 Lacs

Kanpur

Work from Office

As an AWS Data Engineer at the organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics services, enabling efficient data processing and analytics. Key Responsibilities: - Highly experienced in developing ETL pipelines using AWS Glue and EMR with PySpark/Scala. - Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions. - Design scalable data models for analytics and reporting. - Implement data validation, quality, and governance practices. - Optimize Spark jobs for cost and performance efficiency. - Automate ETL workflows with AWS Step Functions and Lambda. - Collaborate with data scientists and analysts on data needs. - Maintain documentation for data architecture and pipelines. - Experience with open-source big data file formats such as Iceberg, Delta Lake, or Hudi. - Experience provisioning AWS data analytics resources with Terraform is desirable. Must-Have Skills: AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development. Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg.

Posted 2 months ago

Apply

12 - 16 years

35 - 37 Lacs

Pune

Work from Office

As an AWS Data Engineer at the organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics services, enabling efficient data processing and analytics. Key Responsibilities: - Highly experienced in developing ETL pipelines using AWS Glue and EMR with PySpark/Scala. - Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions. - Design scalable data models for analytics and reporting. - Implement data validation, quality, and governance practices. - Optimize Spark jobs for cost and performance efficiency. - Automate ETL workflows with AWS Step Functions and Lambda. - Collaborate with data scientists and analysts on data needs. - Maintain documentation for data architecture and pipelines. - Experience with open-source big data file formats such as Iceberg, Delta Lake, or Hudi. - Experience provisioning AWS data analytics resources with Terraform is desirable. Must-Have Skills: AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development. Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg.

Posted 2 months ago

Apply

12 - 16 years

35 - 37 Lacs

Ahmedabad

Work from Office

As an AWS Data Engineer at the organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics services, enabling efficient data processing and analytics. Key Responsibilities: - Highly experienced in developing ETL pipelines using AWS Glue and EMR with PySpark/Scala. - Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions. - Design scalable data models for analytics and reporting. - Implement data validation, quality, and governance practices. - Optimize Spark jobs for cost and performance efficiency. - Automate ETL workflows with AWS Step Functions and Lambda. - Collaborate with data scientists and analysts on data needs. - Maintain documentation for data architecture and pipelines. - Experience with open-source big data file formats such as Iceberg, Delta Lake, or Hudi. - Experience provisioning AWS data analytics resources with Terraform is desirable. Must-Have Skills: AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development. Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg.

Posted 2 months ago

Apply

12 - 16 years

35 - 37 Lacs

Surat

Work from Office

As an AWS Data Engineer at the organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics services, enabling efficient data processing and analytics. Key Responsibilities: - Highly experienced in developing ETL pipelines using AWS Glue and EMR with PySpark/Scala. - Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions. - Design scalable data models for analytics and reporting. - Implement data validation, quality, and governance practices. - Optimize Spark jobs for cost and performance efficiency. - Automate ETL workflows with AWS Step Functions and Lambda. - Collaborate with data scientists and analysts on data needs. - Maintain documentation for data architecture and pipelines. - Experience with open-source big data file formats such as Iceberg, Delta Lake, or Hudi. - Experience provisioning AWS data analytics resources with Terraform is desirable. Must-Have Skills: AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development. Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg.

Posted 2 months ago

Apply

7 - 9 years

30 - 32 Lacs

Chennai, Bengaluru

Work from Office

Hiring Cloud Engineers for an 8-month contract role based in Chennai or Bangalore with hybrid/remote flexibility. The ideal candidate will have 8+ years of IT experience, including 4+ years in AWS cloud migrations, with strong hands-on expertise in AWS MGN, EC2, EKS, Terraform, and scripting using Python or Shell. Responsibilities include leading lift-and-shift migrations, automating infrastructure, migrating storage to EBS, S3, and EFS, and modernizing legacy applications. AWS/Terraform certifications and experience in monolithic and microservices architectures are preferred. Keywords: Cloud Engineer, AWS Migration, AWS MGN.
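For illustration only (not part of the posting): a small boto3 sketch of the kind of post-migration inventory check a cloud migration engineer might script, listing EBS volumes and S3 buckets in an account. The region is hypothetical.

```python
# Inventory EBS volumes and S3 buckets after a lift-and-shift migration.
import boto3

session = boto3.Session(region_name="ap-south-1")   # hypothetical region

ec2 = session.client("ec2")
for vol in ec2.describe_volumes()["Volumes"]:
    print(vol["VolumeId"], vol["Size"], vol["State"])   # size in GiB

s3 = session.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])
```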

Posted 2 months ago

Apply

8 - 13 years

0 Lacs

Bengaluru

Hybrid

The role of this Data/AWS Engineer will be to develop data pipelines for specialty instrument data and Gen AI processes, support the development of classification and prediction models, create and maintain dashboards to monitor data health, and set up and maintain services in AWS to deploy models and collect results. These pipelines will be part of the foundational emerging data infrastructure for the company. We seek someone with a growth mindset who is self-motivated, a problem solver, and energized by working at the nexus of leading-edge software and hardware development. This position follows a hybrid work model (3 days a week - Tuesday, Wednesday, and Thursday - working from the GCC office, RMZ Ecoworld, Bellandur, Bangalore). • Build data pipelines in AWS using S3, Lambda, IoT Core, EC2, and other services. • Create and maintain dashboards to monitor data health. • Containerize models and deploy them to AWS. • Build Python data pipelines that can handle data frames and matrices; ingest, transform, and store data using Pythonic code practices. • Maintain the codebase: use OOP and/or FP best practices, write unit tests, etc. • Work with Machine Learning engineers to evaluate data and models, and present results to stakeholders in a manner understandable to non-data scientists. • Mentor and review the code of other members of the team. Required Qualifications: Bachelor's in Computer Science or a related field and at least 8 years of relevant work experience, or equivalent. • Solid experience with AWS services such as S3, EC2, Lambda, and IAM. • Experience containerizing and deploying code in AWS. • Proficient in writing OOP and/or functional programming code in Python (e.g., numpy, pandas, scipy, scikit-learn). • Comfortable with Git version control, as well as Bash or the command prompt. • Comfortable discovering and driving new capabilities, solutions, and data best practices from blogs, white papers, and other technical documentation. • Able to communicate results using meaningful metrics and visualizations to managers and stakeholders and receive feedback. Desired (considered a plus) Qualifications: Experience with C#, C++, and .NET. What We Offer: Hybrid role with competitive compensation, great benefits, and continuous professional development. • An inclusive environment where everyone can contribute their best work and develop to their full potential. • Reasonable adjustments to the interview process according to your needs.
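For illustration only (not part of the posting): a minimal sketch of an S3-triggered Lambda handler of the kind this role would build, assuming JSON instrument payloads; the bucket, metric namespace, and metric name are hypothetical.

```python
# S3-triggered Lambda: load the new object and publish a simple data-health metric.
import json
import boto3

s3 = boto3.client("s3")
cloudwatch = boto3.client("cloudwatch")

def handler(event, context):
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        payload = json.loads(body)                      # assumes a JSON list of instrument records
        cloudwatch.put_metric_data(                     # simple data-health signal for a dashboard
            Namespace="ExampleDataHealth",              # hypothetical namespace
            MetricData=[{"MetricName": "RecordsIngested",
                         "Value": float(len(payload))}],
        )
    return {"statusCode": 200}
```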

Posted 2 months ago

Apply

3 - 8 years

8 - 12 Lacs

Greater Noida

Work from Office

Sound experience in developing Python applications using FastAPI or Flask (FastAPI is preferable). Proficient in OOP, design patterns, and functional programming. Hands-on experience with MySQL or MongoDB, including managing complex queries. Good experience with the Git versioning tool. Should have worked with serverless architecture and RESTful systems. Experience in API development in Python. Hands-on experience with AWS services: Lambda, SQS, S3, ECS, etc. Experience using Python classes with inheritance, overloading, and polymorphism. Experience in building serverless applications in AWS using API Gateway and Lambda. Experience in insurance projects is preferable. Note: We are not looking for candidates from the ML (Machine Learning) and Data Science domains. This opening is only for Web/API development in Python and its frameworks.
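For illustration only (not part of the posting): a minimal FastAPI sketch of the serverless-style API work described above, assuming Pydantic v2; the model fields and SQS queue URL are hypothetical.

```python
# Minimal FastAPI endpoint that accepts a payload and queues it on SQS.
import boto3
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
sqs = boto3.client("sqs")

class Claim(BaseModel):          # hypothetical insurance-style payload
    policy_id: str
    amount: float

@app.post("/claims")
def create_claim(claim: Claim):
    # Push the claim onto an SQS queue for asynchronous processing.
    sqs.send_message(
        QueueUrl="https://sqs.ap-south-1.amazonaws.com/123456789012/example-claims",  # hypothetical
        MessageBody=claim.model_dump_json(),   # Pydantic v2 serialization
    )
    return {"status": "queued", "policy_id": claim.policy_id}
```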

Posted 2 months ago

Apply

3 - 5 years

15 - 19 Lacs

Bengaluru

Work from Office

Immediate Joiners. Job Summary: We are seeking an experienced DevOps Engineer to join our team and help us build and maintain scalable, secure, and efficient infrastructure on Amazon Web Services (AWS). The ideal candidate will have a strong background in DevOps practices, AWS services, and scripting languages. Key Responsibilities: Design and Implement Infrastructure: Design and implement scalable, secure, and efficient infrastructure on AWS using services such as EC2, S3, RDS, and VPC. Automate Deployment Processes: Automate deployment processes using tools such as AWS CodePipeline, AWS CodeBuild, and AWS CodeDeploy. Implement Continuous Integration and Continuous Deployment (CI/CD): Implement CI/CD pipelines using tools such as Jenkins, GitLab CI/CD, and CircleCI. Monitor and Troubleshoot Infrastructure: Monitor and troubleshoot infrastructure issues using tools such as Amazon CloudWatch, AWS X-Ray, and AWS CloudTrail. Collaborate with Development Teams: Collaborate with development teams to ensure smooth deployment of applications and infrastructure. Implement Security Best Practices: Implement security best practices and ensure compliance with organizational security policies. Optimize Infrastructure for Cost and Performance: Optimize infrastructure for cost and performance using tools such as AWS Cost Explorer and AWS Trusted Advisor. Requirements: Education: Bachelor's degree in Computer Science, Information Technology, or a related field. Experience: Minimum 3 years of experience in DevOps engineering, with a focus on AWS services. Technical Skills: AWS services such as EC2, S3, RDS, VPC, and Lambda. Scripting languages such as Python, Ruby, or PowerShell. CI/CD tools such as Jenkins, GitLab CI/CD, and CircleCI. Monitoring and troubleshooting tools such as Amazon CloudWatch, AWS X-Ray, and AWS CloudTrail. Soft Skills: Excellent communication and interpersonal skills. Strong problem-solving and analytical skills. Ability to work in a team environment and collaborate with development teams. Nice to Have: Certifications: AWS certifications such as AWS Certified DevOps Engineer or AWS Certified Solutions Architect. Experience with Containerization: Experience with containerization using Docker or Kubernetes. Experience with Serverless Computing: Experience with serverless computing using AWS Lambda or Azure Functions.
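For illustration only (not part of the posting): a small boto3 sketch of the CloudWatch monitoring work mentioned above, creating a CPU alarm for an EC2 instance; the instance ID, threshold, and SNS topic ARN are hypothetical.

```python
# Create a CloudWatch alarm that fires when average CPU stays above 80% for 10 minutes.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="example-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # hypothetical instance
    Statistic="Average",
    Period=300,                       # 5-minute periods
    EvaluationPeriods=2,              # two consecutive breaches
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:example-alerts"],  # hypothetical topic
)
```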

Posted 2 months ago

Apply

1 - 4 years

6 - 10 Lacs

Bengaluru

Work from Office

What You'll Own - Full Stack Systems: Architect and build end-to-end applications using Flask, FastAPI, Node.js, React (or Next.js), and Tailwind. AI Integrations: Build and optimize pipelines involving LLMs (OpenAI, Groq, LLaMA), Whisper, TTS, embeddings, RAG, LangChain, LangGraph, and vector DBs like Pinecone/Milvus. Cloud Infrastructure: Deploy, monitor, and scale systems on AWS/GCP using EC2, S3, IAM, Lambda, Kafka, and ClickHouse. Real-time Systems: Design asynchronous workflows (Kafka, Celery, WebSockets) for voice-based agents, event tracking, or search indexing. System Orchestration: Set up scalable infra with autoscaling groups, Docker, and Kubernetes (PoC-ready, if not full prod). Growth-Ready Features: Implement in-app nudges, tracking with Amplitude, A/B testing, and funnel optimization. Tech Stack You'll Work With - Backend & Infrastructure: Languages/Frameworks: Python (Flask, FastAPI), Node.js; Databases: PostgreSQL, Redis, ClickHouse; Infra: Kafka, Docker, Kubernetes, GitHub Actions, Cloudflare; Cloud: AWS (EC2, S3, RDS), GCP. Frontend: React / Next.js, TailwindCSS, Zustand, Shadcn/UI; WebGL and Three.js for 3D rendering. AI/ML & Computer Vision: LangChain, LangGraph, HuggingFace, OpenAI, Groq; Whisper (ASR), ElevenLabs (TTS); Diffusion Models, StyleGAN, Stable Diffusion, GANs, MediaPipe, ARKit/ARCore; Computer Vision: face tracking, real-time try-on, pose estimation; Virtual Try-On: face/body detection, cloth/hairstyle try-ons. APIs: Stripe, VAPI, Algolia, OpenAI, Amplitude. Vector DB & Search: Pinecone, Milvus (Zilliz), custom vector search pipelines. Other: Vibe Coding culture, prompt engineering, system-level optimization. Must-Haves: 1+ years of experience building production-grade full-stack systems. Fluency in Python and JS/TS (Node.js, React); shipping independently without handholding. Deep understanding of LLM pipelines, embeddings, vector search, and retrieval-augmented generation (RAG). Experience with AR frameworks (ARKit, ARCore), 3D rendering (Three.js), and real-time computer vision (MediaPipe). Strong grasp of modern AI model architectures: Diffusion Models, GANs, AI Agents. Hands-on with system debugging, performance profiling, and infra cost optimization. Comfort with ambiguity: fast iteration, shipping prototypes, breaking things to learn faster. Bonus if you've built agentic apps, AI workflows, or virtual try-ons.
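For illustration only (not part of the posting): a minimal numpy sketch of the retrieval step in a RAG pipeline, ranking stored embeddings by cosine similarity against a query embedding. In the stack described above, the embeddings would come from an embedding model and live in a vector DB such as Pinecone or Milvus; here they are random placeholders.

```python
# Rank document embeddings by cosine similarity to a query embedding (RAG retrieval step).
import numpy as np

def top_k(query_vec: np.ndarray, doc_vecs: np.ndarray, k: int = 3) -> list[int]:
    """Return indices of the k most similar document vectors."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q                          # cosine similarity per document
    return list(np.argsort(scores)[::-1][:k])

# Toy usage with random placeholder embeddings.
rng = np.random.default_rng(0)
docs = rng.normal(size=(100, 384))          # 100 documents, 384-dim embeddings (placeholder)
query = rng.normal(size=384)
print(top_k(query, docs))
```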

Posted 2 months ago

Apply

4 - 7 years

0 - 3 Lacs

Hyderabad, Pune, Chennai

Hybrid

Job Posting Title: Data Engineer (Snowflake & AWS). Location: Chennai or Hyderabad. Experience: 4 to 6 years. Job Description: Data Engineer Role Summary - This role focuses on building and optimizing secure data pipelines integrating AWS services and Snowflake to support de-identified data consumption by analytical tools and users. Key Responsibilities: Integrate de-identified data from Amazon S3 into Snowflake for downstream analytics. Build robust ETL pipelines using Glue for data cleansing, transformation, and schema alignment. Automate ingestion of structured/unstructured data from various AWS services to Snowflake. Apply masking, redaction, or pseudonymization techniques to sensitive datasets pre-ingestion. Implement lifecycle and access policies for data stored in Snowflake and AWS S3. Collaborate with analytics teams to optimize warehouse performance and data modeling. Required Skills: 4-6 years of experience in data engineering roles. Strong hands-on experience with Snowflake (warehouse sizing, query optimization, data sharing). Familiarity with AWS Glue, S3, and IAM. Understanding of PHI/PII protection techniques and HIPAA controls. Experience in transforming datasets for BI/reporting tools. Skilled in SQL, Python, and Snowflake stored procedures.
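For illustration only (not part of the posting): a minimal sketch of loading de-identified S3 data into Snowflake with the Python connector; the connection parameters, external stage, and table names are hypothetical.

```python
# Load de-identified Parquet files from an external S3 stage into a Snowflake table.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",      # hypothetical account/credentials
    user="example_user",
    password="example_password",
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="CURATED",
)

cur = conn.cursor()
try:
    # Assumes an external stage already points at the de-identified S3 prefix.
    cur.execute("""
        COPY INTO curated.events
        FROM @deidentified_s3_stage/events/
        FILE_FORMAT = (TYPE = PARQUET)
        MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
    """)
finally:
    cur.close()
    conn.close()
```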

Posted 2 months ago

Apply

12 - 15 years

35 - 45 Lacs

Pune, Bengaluru, Mumbai (All Areas)

Hybrid

Strong frontend development experience with ReactJS, JavaScript, or TypeScript. Proficiency in HTML5, CSS3, and responsive design best practices. Hands-on experience with AWS Cloud Services, specifically designing systems with SNS, SQS, EC2, Lambda, and S3. Required Candidate Profile: Expert-level experience in backend development using .NET Core, C#, and EF Core. Strong expertise in PostgreSQL and efficient database design. Proficient in building and maintaining RESTful APIs at scale.

Posted 2 months ago

Apply

6 - 11 years

18 - 30 Lacs

Gurugram

Work from Office

Application layer technologies including Tomcat/Node.js, Netty, Spring Boot, Hibernate, Elasticsearch, Kafka, and Apache Flink. Frontend technologies including ReactJS, Angular, and Android/iOS. Data storage technologies like Oracle, S3, Postgres, and MongoDB.

Posted 2 months ago

Apply

8 - 13 years

12 - 22 Lacs

Hyderabad, Bengaluru, Mumbai (All Areas)

Work from Office

Greetings of the day! We have an URGENT on-rolls opening for the position of "Snowflake Architect" at one of our reputed clients, for work from home. Name of the Company - Confidential. Rolls - On-rolls. Mode of Employment - FTE / Sub-Con / Contract. Job Location - Remote. Job Work Timings - Night Shift, 06:00 PM to 03:00 AM IST. Nature of Work - Work from Home. Working Days - 5 days weekly. Educational Qualification - Bachelor's degree in computer science, BCA, engineering, or a related field. Salary - Maximum CTC would be 23 LPA (salary and benefits package will be commensurate with experience and qualifications; PF and medical insurance cover available). Languages Known - English, Hindi, and the local language. Experience - 9+ years of relevant experience in the same domain. Job Summary: We are seeking a highly skilled and experienced Snowflake Architect to lead the design, development, and implementation of scalable, secure, and high-performance data warehousing solutions on the Snowflake platform. The ideal candidate will possess deep expertise in data modelling, cloud architecture, and modern ELT frameworks. You will be responsible for architecting robust data pipelines, optimizing query performance, and ensuring enterprise-grade data governance and security. In this role, you will collaborate with data engineers, analysts, and business stakeholders to deliver efficient data solutions that drive informed decision-making across the organization. Key Responsibilities: Manage and maintain the Snowflake platform to ensure optimal performance and reliability. Collaborate with data engineers and analysts to design and implement data pipelines. Develop and optimize SQL queries for efficient data retrieval and manipulation. Create custom scripts and functions using JavaScript and Python to automate platform tasks. Troubleshoot platform issues and provide timely resolutions. Implement security best practices to protect data within the Snowflake platform. Stay updated on the latest Snowflake features and best practices to continuously improve platform performance. Required Qualifications: Bachelor's degree in computer science, engineering, or a related field. Minimum of nine years of experience managing any database platform. Proficiency in SQL for data querying and manipulation. Strong programming skills in JavaScript and Python. Experience in optimizing and tuning Snowflake for performance. Preferred Skills: Technical Expertise, Cloud & Integration, Performance & Optimization, Security & Governance, Soft Skills. The candidate should be willing to join within 7-10 days or be an immediate joiner. Interested candidates, please share your updated resume with us at executivehr@monalisammllp.com; you can also call or WhatsApp us at 9029895581, along with the details below: Current/Last net in hand (salary will be offered based on the interview/technical evaluation process) - ; Notice period and LWD (was/will be) - ; Reason for changing the job - ; Total years of experience in the specific field - ; Please specify the location you are from - ; Do you hold any offer from any other organization? - . Regards, Monalisa Group of Services, HR Department, 9029895581 - Call / WhatsApp, executivehr@monalisammllp.com

Posted 2 months ago

Apply

10 - 15 years

17 - 22 Lacs

Mumbai, Hyderabad, Bengaluru

Work from Office

Job Roles and Responsibilities: The AWS DevOps Engineer is responsible for automating, optimizing, and managing CI/CD pipelines, cloud infrastructure, and deployment processes on AWS. This role ensures smooth software delivery while maintaining high availability, security, and scalability. Design and implement scalable and secure cloud infrastructure on AWS, utilizing services such as EC2, EKS, ECS, S3, RDS, and VPC. Automate the provisioning and management of AWS resources using Infrastructure as Code tools (Terraform / CloudFormation / Ansible) and YAML. Implement and maintain continuous integration and continuous deployment (CI/CD) pipelines using tools like Jenkins, GitLab, or AWS CodePipeline. Advocate for a No-Ops model, striving for console-less experiences and self-healing systems. Experience with containerization technologies: Docker and Kubernetes. Mandatory Skills: Overall experience of 5-8 years in AWS DevOps specialization (AWS CodePipeline, AWS CodeBuild, AWS CodeDeploy, AWS CodeCommit). Work experience with AWS DevOps and IAM. Expertise in coding tools - Terraform, Ansible, or CloudFormation, and YAML. Good at deployment work - CI/CD pipelining. Manage containerized workloads using Docker, Kubernetes (EKS), or AWS ECS, and Helm charts. Experience with database migration. Proficiency in scripting languages (Python AND (Bash OR PowerShell)). Develop and maintain CI/CD pipelines using (AWS CodePipeline OR Jenkins OR GitHub Actions OR GitLab CI/CD). Experience with monitoring and logging tools (CloudWatch OR ELK Stack OR Prometheus OR Grafana). Career Level - IC4.
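For illustration only (not part of the posting): a small boto3 sketch of the CI/CD automation described above, starting a CodePipeline execution and polling its status; the pipeline name is hypothetical.

```python
# Start a CodePipeline execution and poll until it reaches a terminal state.
import time
import boto3

codepipeline = boto3.client("codepipeline")

execution = codepipeline.start_pipeline_execution(name="example-app-pipeline")  # hypothetical name
execution_id = execution["pipelineExecutionId"]

while True:
    status = codepipeline.get_pipeline_execution(
        pipelineName="example-app-pipeline",
        pipelineExecutionId=execution_id,
    )["pipelineExecution"]["status"]
    print("status:", status)
    if status in ("Succeeded", "Failed", "Stopped", "Superseded"):
        break
    time.sleep(15)   # poll every 15 seconds
```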

Posted 2 months ago

Apply

3 - 5 years

10 - 14 Lacs

Bengaluru

Work from Office

An experienced consulting professional who has an understanding of solutions, industry best practices, multiple business processes, or technology designs within a product/technology family. Operates independently to provide quality work products to an engagement. Performs varied and complex duties and tasks that need independent judgment in order to implement Oracle products and technology to meet customer needs. Applies Oracle methodology, company procedures, and leading practices. Over 3 to 5+ years of relevant IT experience, with 5+ years in Oracle VBCS, OIC, PL/SQL, and PCS-based implementations as a technical lead and senior developer. This is an Individual Contributor role; being hands-on is a critical requirement. Must Have: Experience in solution design for customer engagements in the UI and Integration (OIC) space. Experience on at least 5 projects developing SaaS extensions using VBCS, OIC, and ORDS. Understanding of the inherent tools and technologies of SaaS applications (FBDI, BIP, ADFDI, Applications Composer, Page Integration, etc.). Expertise in Oracle Visual Builder Studio; good experience with build and release, systems integration, Agile, and estimations/planning. Experience in configuring SSO for PaaS extensions with Fusion SaaS. Drive detailed design using customer requirements. Good understanding and usage of OCI architecture, serverless functions, API Gateway, and object storage. Conduct design reviews to provide guidance and quality assurance around standard methodologies and frameworks. Experience in PCS is an added advantage. Good to have SOA/OSB/ODI/BPM skills. Experience building at least one project from scratch. Experience rolling out three big projects (multiple phased releases or country rollouts) to production. Career Level - IC2. Responsibilities: Standard assignments are accomplished without assistance by exercising independent judgment, within defined policies and processes, to deliver functional and technical solutions on moderately complex customer engagements.

Posted 2 months ago

Apply

10 - 15 years

13 - 18 Lacs

Hyderabad

Work from Office

As a member of the Support organization, your focus is to deliver SaaS support and solutions to Oracle customers in Customer Success Services while serving as an advocate for customer needs. This involves resolving SaaS applications' technical and non-technical customer incidents via the Incident Management System and electronic means, as well as technical questions regarding the use of, and troubleshooting for, our Electronic Support Services. A primary point of contact for customers, you are responsible for facilitating customer relationships with Support and providing advice and assistance to internal Oracle employees on diverse customer situations and escalated issues. You would be expected to be a hands-on lead with Oracle Fusion Supply Chain Management functional/technical skills. Career Level - IC4. Responsibilities: As a Supply Chain Management Lead, you will offer strategic functional/technical support to assure the highest level of customer satisfaction. A primary focus is to create/utilize automated technology and instrumentation to diagnose, document, and resolve/avoid customer issues. You are expected to be an expert member of the technical problem solving/problem avoidance team, routinely sought after to address extremely complex, critical customer issues. Services may frequently be provided by on-site customer visits.

Posted 2 months ago

Apply

6 - 10 years

6 - 11 Lacs

Hyderabad

Work from Office

MS or BS in Computer Science or equivalent. 6-10+ years of relevant experience. Strong Software Engineering Fundamentals and API Development - Proficiency in Data Structures & Algorithms: critical for designing efficient systems that handle large-scale data. System Design: ability to design scalable, fault-tolerant systems and services. Coding Skills: expertise in languages like Java, Python, or Scala, especially for backend systems development. RESTful Services: proven experience in designing and building robust APIs for data access and integration. GraphQL: knowledge of modern data access technologies for flexible querying. Microservices Architecture: experience with creating microservices that handle different aspects of data management. Data Architecture and Design Patterns - Database Design: strong knowledge of both relational (SQL) and non-relational (NoSQL) databases like Oracle, MongoDB, Cassandra, or DynamoDB. Data Modeling: ability to design and manage data models for efficient storage and retrieval. Data Warehousing: experience with data warehouses and ETL processes. Data Access Controls: experience with role-based access control (RBAC) and encryption techniques to secure sensitive data. Metadata Management: familiarity with tools and processes for tracking data lineage and metadata catalogs (e.g., Apache Atlas, DataHub). Ability to handle complex data-related challenges, from dealing with incomplete or inconsistent data to optimizing performance. Strong analytical thinking to derive insights from data and build solutions that improve platform performance. Data Pipeline and Orchestration - ETL/ELT Tools: experience with data pipeline orchestration tools like Airflow, Prefect, or Dagster (see the sketch below). Automation and CI/CD: familiarity with setting up CI/CD pipelines for data infrastructure. Distributed Systems: experience with technologies like Hadoop, Spark, Kafka, and Flink for managing large-scale data processing pipelines. Data Streaming: familiarity with real-time data processing frameworks (e.g., Apache Kafka, Pulsar, or RabbitMQ). Cloud Platforms: hands-on experience with cloud-based data services (AWS, GCP, Azure), including data storage (S3, GCS) and data analytics (EMR, Dataproc). Leadership and Collaboration - Team Leadership: ability to mentor junior engineers and guide them in building robust, scalable systems. Cross-Functional Collaboration: experience working closely with data scientists, analysts, and other engineers to deliver on the platform's objectives. Stakeholder Management: strong communication skills for presenting technical decisions to non-technical stakeholders. Career Level - IC4.
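For illustration only (not part of the posting): a minimal Airflow sketch (assuming Airflow 2.4+) of the kind of pipeline orchestration this role covers, wiring an extract task to a load task; the DAG name, schedule, and task bodies are placeholders.

```python
# Minimal Airflow DAG: a daily extract -> load pipeline with placeholder task bodies.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from the source system")      # placeholder

def load():
    print("write transformed data to the warehouse")   # placeholder

with DAG(
    dag_id="example_platform_pipeline",   # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task
```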

Posted 2 months ago

Apply

8 - 13 years

10 - 20 Lacs

Bengaluru

Work from Office

Hi, greetings from Sun Technology Integrators! This is regarding a job opening with Sun Technology Integrators, Bangalore. Please find the job description below for your reference. Kindly let me know your interest and share your updated CV to nandinis@suntechnologies.com with the below details ASAP: C.CTC - , E.CTC - , Notice period - , Current location - , Are you serving notice period / immediate - , Exp in Snowflake - , Exp in Matillion - . Shift timings: 2:00 PM-11:00 PM (free cab drop facility + food). Please let me know if any of your friends are looking for a job change; kindly share references. Only serving-notice/immediate candidates can apply. Interview process: 1 round (virtual) + final round (F2F). Please note: WFO - Work From Office (no hybrid or work from home). Mandatory skills: Snowflake, SQL, ETL, Data Ingestion, Data Modeling, Data Warehouse, Python, Matillion, AWS S3, EC2. Preferred skills: SSIR, SSIS, Informatica, Shell Scripting. Venue details: Sun Technology Integrators Pvt Ltd, No. 496, 4th Block, 1st Stage, HBR Layout (a stop ahead of Nagawara towards K. R. Puram), Bangalore 560043. Company URL: www.suntechnologies.com. Thanks and regards, Nandini S | Sr. Technical Recruiter, Sun Technology Integrators Pvt. Ltd., nandinis@suntechnologies.com, www.suntechnologies.com

Posted 2 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
