3 - 6 years
6 - 15 Lacs
Hyderabad
Work from Office
Dear Tech Aspirants,

Greetings of the day. We are conducting a walk-in drive for AWS Data Engineers (3-6 years).

Walk-In Date: Saturday, 17-May-2025
Time: 9:30 AM - 3:30 PM
Kindly fill this form to confirm your presence: https://forms.office.com/r/CEcuYGsFPS
Walk-In Venue: Ground Floor, Sri Sai Towers, Plot No. 91A & 91B, Vittal Rao Nagar, Madhapur, Hyderabad 500081.
Google Maps Location: https://maps.app.goo.gl/dKkAm4EgF1q1CKqc8
*Carry your updated CV*

Job Title: AWS Data Engineer (SQL Mandatory)
Location: Hyderabad, India
Experience: 3 to 6 years

We are seeking a skilled AWS Data Engineer with 3 to 6 years of experience to join our team. The ideal candidate will be responsible for implementing and maintaining AWS data services, processing and transforming raw data, and optimizing data workflows using AWS Glue to ensure seamless integration with business processes. This role requires a deep understanding of AWS cloud technologies, Apache Spark, and data engineering best practices. You will work closely with data scientists, analysts, and business stakeholders to ensure data is accessible, scalable, and efficient.

Roles & Responsibilities:
- Implement & Maintain AWS Data Services: Deploy, configure, and manage AWS Glue and associated workflows.
- Data Processing & Transformation: Clean, process, and transform raw data into structured, usable formats for analytics and machine learning.
- Develop ETL Pipelines: Design and build ETL/ELT pipelines using Apache Spark, AWS Glue, and AWS Data Pipeline.
- Data Governance & Security: Ensure data quality, integrity, and security in compliance with organizational standards.
- Performance Optimization: Continuously improve data processing pipelines for efficiency and scalability.
- Collaboration: Work closely with data scientists, analysts, and software engineers to enable seamless data accessibility.
- Documentation & Best Practices: Maintain technical documentation and enforce best practices in data engineering.
- Modern Data Transformation: Develop and manage data transformation workflows using dbt (Data Build Tool) to ensure modular, testable, and version-controlled data pipelines.
- Data Mesh & Governance: Contribute to the implementation of data mesh architecture to promote domain-oriented data ownership, decentralized data management, and enhanced data governance.
- Workflow Orchestration: Design and implement data orchestration pipelines using tools like Apache Airflow for managing complex workflows and dependencies.
- Good hands-on experience with SQL programming.

Technical Requirements & Skills:
- Proficiency in AWS: Strong hands-on experience with AWS cloud services, including Amazon S3, AWS Glue, Amazon Redshift, and Amazon RDS.
- Expertise in AWS Glue: Deep understanding of AWS Glue, Apache Spark, and AWS Lake Formation.
- Programming Skills: Proficiency in Python and Scala for data engineering and processing.
- SQL Expertise: Strong knowledge of SQL for querying and managing structured data.
- ETL & Data Pipelines: Experience in designing and maintaining ETL/ELT workflows.
- Big Data Technologies: Knowledge of Hadoop, Spark, and distributed computing frameworks.
- Orchestration Tools: Experience with Apache Airflow or similar tools for scheduling and monitoring data workflows.
- Data Transformation Frameworks: Familiarity with dbt (Data Build Tool) for building reliable, version-controlled data transformations.
- Data Mesh Concepts: Understanding of data mesh architecture and its role in scaling data across decentralized domains.
- Version Control & CI/CD: Experience with Git, AWS CodeCommit, and CI/CD pipelines for automated data deployment.

Nice to Have:
- AWS Certified Data Analytics - Specialty.
- Machine Learning Familiarity: Understanding of machine learning concepts and integration with AWS SageMaker.
- Streaming Data Processing: Experience with Amazon Kinesis or Spark Streaming.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, Data Science, or a related field.
- 4+ years of experience in data engineering, cloud technologies, and AWS Glue.
- Strong problem-solving skills and the ability to work in a fast-paced environment.

If interested, please walk in with your updated CV.
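To give a feel for the kind of ETL work this posting describes, here is a minimal, hedged PySpark sketch of a raw-to-curated transformation. The bucket paths, column names, and de-duplication rule are illustrative assumptions, and in practice a job like this would typically run inside an AWS Glue job or on a managed Spark cluster with S3 access configured.

```python
# Illustrative PySpark ETL sketch: read raw JSON events, clean them, and write
# partitioned Parquet to a curated zone. Paths and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("raw-to-curated-etl").getOrCreate()

# Hypothetical source path; the exact scheme (s3:// vs s3a://) depends on the runtime.
raw = spark.read.json("s3a://example-raw-bucket/events/")

curated = (
    raw.dropDuplicates(["event_id"])                            # basic de-duplication
       .withColumn("event_date", F.to_date("event_timestamp"))  # derive a partition column
       .filter(F.col("event_type").isNotNull())                 # drop malformed rows
)

(curated.write
        .mode("overwrite")
        .partitionBy("event_date")
        .parquet("s3a://example-curated-bucket/events/"))       # hypothetical target path
```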
Posted 4 months ago
7 - 9 years
10 - 15 Lacs
Mumbai
Work from Office
We are seeking a highly skilled Senior Snowflake Developer with expertise in Python, SQL, and ETL tools to join our dynamic team. The ideal candidate will have a proven track record of designing and implementing robust data solutions on the Snowflake platform, along with strong programming skills and experience with ETL processes.

Key Responsibilities:
- Designing and developing scalable data solutions on the Snowflake platform to support business needs and analytics requirements.
- Leading the end-to-end development lifecycle of data pipelines, including data ingestion, transformation, and loading processes.
- Writing efficient SQL queries and stored procedures to perform complex data manipulations and transformations within Snowflake.
- Implementing automation scripts and tools using Python to streamline data workflows and improve efficiency.
- Collaborating with cross-functional teams to gather requirements, design data models, and deliver high-quality solutions.
- Performance tuning and optimization of Snowflake databases and queries to ensure optimal performance and scalability.
- Implementing best practices for data governance, security, and compliance within Snowflake environments.
- Mentoring junior team members and providing technical guidance and support as needed.

Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 7+ years of experience working with the Snowflake data warehouse.
- Strong proficiency in SQL with the ability to write complex queries and optimize performance.
- Extensive experience developing data pipelines and ETL processes using Python and ETL tools such as Apache Airflow, Informatica, or Talend.
- Minimum 2 years of strong Python coding experience.
- Solid understanding of data warehousing concepts, data modeling, and schema design.
- Experience working with cloud platforms such as AWS, Azure, or GCP.
- Excellent problem-solving and analytical skills with keen attention to detail.
- Strong communication and collaboration skills with the ability to work effectively in a team environment.
- Any relevant certifications in Snowflake or related technologies would be a plus.
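As a rough illustration of the Python-plus-SQL work this role combines, below is a minimal sketch using the snowflake-connector-python package to run an idempotent MERGE from a staging table into a reporting table. The account, warehouse, schema, and table names are hypothetical placeholders, not anything specified by the posting.

```python
# Illustrative sketch: execute a Snowflake transformation from Python.
# Connection details and table names are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",        # hypothetical account identifier
    user="etl_user",
    password="***",
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="STAGING",
)
cur = conn.cursor()
try:
    # MERGE keeps the load idempotent: existing dates are updated, new ones inserted.
    cur.execute("""
        MERGE INTO reporting.daily_sales AS t
        USING (SELECT order_date, SUM(amount) AS total_amount
               FROM staging.orders
               GROUP BY order_date) AS s
        ON t.order_date = s.order_date
        WHEN MATCHED THEN UPDATE SET t.total_amount = s.total_amount
        WHEN NOT MATCHED THEN INSERT (order_date, total_amount)
                              VALUES (s.order_date, s.total_amount)
    """)
    conn.commit()
finally:
    cur.close()
    conn.close()
```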
Posted 4 months ago
6 - 10 years
3 - 6 Lacs
Hyderabad
Work from Office
We are seeking an experienced and driven Data Engineer with 5+ years of hands-on experience in building scalable data infrastructure and systems. You will play a key role in designing and developing robust, high-performance ETL pipelines and managing large-scale datasets to support critical business functions. This role requires deep technical expertise, strong problem-solving skills, and the ability to thrive in a fast-paced, evolving environment.

Key Responsibilities:
- Design, develop, and maintain scalable and reliable ETL/ELT pipelines for processing large volumes of data (terabytes and beyond).
- Model and structure data for performance, scalability, and usability.
- Work with cloud infrastructure (preferably Azure) to build and optimize data workflows.
- Leverage distributed computing frameworks like Apache Spark and Hadoop for large-scale data processing.
- Build and manage data lake/lakehouse architectures in alignment with best practices.
- Optimize ETL performance and manage cost-effective data operations.
- Collaborate closely with cross-functional teams including data science, analytics, and software engineering.
- Ensure data quality, integrity, and security across all stages of the data lifecycle.

Required Skills & Qualifications:
- 7 to 10 years of relevant experience in big data engineering.
- Advanced proficiency in Python.
- Strong skills in SQL for complex data manipulation and analysis.
- Hands-on experience with Apache Spark, Hadoop, or similar distributed systems.
- Proven track record of handling large-scale datasets (TBs) in production environments.
- Cloud development experience with Azure (preferred), AWS, or GCP.
- Solid understanding of data lake and data lakehouse architectures.
- Expertise in ETL performance tuning and cost optimization techniques.
- Knowledge of data structures, algorithms, and modern software engineering practices.

Soft Skills:
- Strong communication skills with the ability to explain complex technical concepts clearly and concisely.
- Self-starter who learns quickly and takes ownership.
- High attention to detail with a strong sense of data quality and reliability.
- Comfortable working in an agile, fast-changing environment with incomplete requirements.

Preferred Qualifications:
- Experience with tools like Apache Airflow, Azure Data Factory, or similar.
- Familiarity with CI/CD and DevOps in the context of data engineering.
- Knowledge of data governance, cataloging, and access control principles.

Skills: Python, SQL, AWS, Azure, Hadoop
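Since the posting lists Apache Airflow and Azure Data Factory among preferred orchestration tools, here is a minimal, hedged Airflow 2.x sketch of how an extract-transform-load sequence might be wired together. The DAG id, schedule, and the three callables are hypothetical stand-ins, not anything prescribed by the role.

```python
# Illustrative Airflow DAG sketch: a daily extract -> transform -> load sequence.
# Requires Airflow 2.4+ for the `schedule` argument (older versions use schedule_interval).
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():      # hypothetical: pull raw files from a landing zone
    print("extracting raw data")

def transform():    # hypothetical: clean and model the data
    print("transforming data")

def load():         # hypothetical: publish to the warehouse / lakehouse
    print("loading curated data")

with DAG(
    dag_id="daily_etl_example",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```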
Posted Date not available
6 - 10 years
6 - 10 Lacs
Gurugram
Work from Office
We are seeking an experienced and driven Data Engineer with 5+ years of hands-on experience in building scalable data infrastructure and systems. You will play a key role in designing and developing robust, high-performance ETL pipelines and managing large-scale datasets to support critical business functions. This role requires deep technical expertise, strong problem-solving skills, and the ability to thrive in a fast-paced, evolving environment.

Key Responsibilities:
- Design, develop, and maintain scalable and reliable ETL/ELT pipelines for processing large volumes of data (terabytes and beyond).
- Model and structure data for performance, scalability, and usability.
- Work with cloud infrastructure (preferably Azure) to build and optimize data workflows.
- Leverage distributed computing frameworks like Apache Spark and Hadoop for large-scale data processing.
- Build and manage data lake/lakehouse architectures in alignment with best practices.
- Optimize ETL performance and manage cost-effective data operations.
- Collaborate closely with cross-functional teams including data science, analytics, and software engineering.
- Ensure data quality, integrity, and security across all stages of the data lifecycle.

Required Skills & Qualifications:
- 7 to 10 years of relevant experience in big data engineering.
- Advanced proficiency in Python.
- Strong skills in SQL for complex data manipulation and analysis.
- Hands-on experience with Apache Spark, Hadoop, or similar distributed systems.
- Proven track record of handling large-scale datasets (TBs) in production environments.
- Cloud development experience with Azure (preferred), AWS, or GCP.
- Solid understanding of data lake and data lakehouse architectures.
- Expertise in ETL performance tuning and cost optimization techniques.
- Knowledge of data structures, algorithms, and modern software engineering practices.

Soft Skills:
- Strong communication skills with the ability to explain complex technical concepts clearly and concisely.
- Self-starter who learns quickly and takes ownership.
- High attention to detail with a strong sense of data quality and reliability.
- Comfortable working in an agile, fast-changing environment with incomplete requirements.

Preferred Qualifications:
- Experience with tools like Apache Airflow, Azure Data Factory, or similar.
- Familiarity with CI/CD and DevOps in the context of data engineering.
- Knowledge of data governance, cataloging, and access control principles.

Skills: Python, SQL, AWS, Azure, Hadoop
Posted Date not available
6 - 10 years
6 - 10 Lacs
Bengaluru
Work from Office
We are seeking an experienced and driven Data Engineer with 5+ years of hands-on experience in building scalable data infrastructure and systems. You will play a key role in designing and developing robust, high-performance ETL pipelines and managing large-scale datasets to support critical business functions. This role requires deep technical expertise, strong problem-solving skills, and the ability to thrive in a fast-paced, evolving environment.

Key Responsibilities:
- Design, develop, and maintain scalable and reliable ETL/ELT pipelines for processing large volumes of data (terabytes and beyond).
- Model and structure data for performance, scalability, and usability.
- Work with cloud infrastructure (preferably Azure) to build and optimize data workflows.
- Leverage distributed computing frameworks like Apache Spark and Hadoop for large-scale data processing.
- Build and manage data lake/lakehouse architectures in alignment with best practices.
- Optimize ETL performance and manage cost-effective data operations.
- Collaborate closely with cross-functional teams including data science, analytics, and software engineering.
- Ensure data quality, integrity, and security across all stages of the data lifecycle.

Required Skills & Qualifications:
- 7 to 10 years of relevant experience in big data engineering.
- Advanced proficiency in Python.
- Strong skills in SQL for complex data manipulation and analysis.
- Hands-on experience with Apache Spark, Hadoop, or similar distributed systems.
- Proven track record of handling large-scale datasets (TBs) in production environments.
- Cloud development experience with Azure (preferred), AWS, or GCP.
- Solid understanding of data lake and data lakehouse architectures.
- Expertise in ETL performance tuning and cost optimization techniques.
- Knowledge of data structures, algorithms, and modern software engineering practices.

Soft Skills:
- Strong communication skills with the ability to explain complex technical concepts clearly and concisely.
- Self-starter who learns quickly and takes ownership.
- High attention to detail with a strong sense of data quality and reliability.
- Comfortable working in an agile, fast-changing environment with incomplete requirements.

Preferred Qualifications:
- Experience with tools like Apache Airflow, Azure Data Factory, or similar.
- Familiarity with CI/CD and DevOps in the context of data engineering.
- Knowledge of data governance, cataloging, and access control principles.

Skills: Python, SQL, AWS, Azure, Hadoop
Posted Date not available
6 - 10 years
6 - 10 Lacs
Chennai
Work from Office
We are seeking an experienced and driven Data Engineer with 5+ years of hands-on experience in building scalable data infrastructure and systems. You will play a key role in designing and developing robust, high-performance ETL pipelines and managing large-scale datasets to support critical business functions. This role requires deep technical expertise, strong problem-solving skills, and the ability to thrive in a fast-paced, evolving environment.

Key Responsibilities:
- Design, develop, and maintain scalable and reliable ETL/ELT pipelines for processing large volumes of data (terabytes and beyond).
- Model and structure data for performance, scalability, and usability.
- Work with cloud infrastructure (preferably Azure) to build and optimize data workflows.
- Leverage distributed computing frameworks like Apache Spark and Hadoop for large-scale data processing.
- Build and manage data lake/lakehouse architectures in alignment with best practices.
- Optimize ETL performance and manage cost-effective data operations.
- Collaborate closely with cross-functional teams including data science, analytics, and software engineering.
- Ensure data quality, integrity, and security across all stages of the data lifecycle.

Required Skills & Qualifications:
- 7 to 10 years of relevant experience in big data engineering.
- Advanced proficiency in Python.
- Strong skills in SQL for complex data manipulation and analysis.
- Hands-on experience with Apache Spark, Hadoop, or similar distributed systems.
- Proven track record of handling large-scale datasets (TBs) in production environments.
- Cloud development experience with Azure (preferred), AWS, or GCP.
- Solid understanding of data lake and data lakehouse architectures.
- Expertise in ETL performance tuning and cost optimization techniques.
- Knowledge of data structures, algorithms, and modern software engineering practices.

Soft Skills:
- Strong communication skills with the ability to explain complex technical concepts clearly and concisely.
- Self-starter who learns quickly and takes ownership.
- High attention to detail with a strong sense of data quality and reliability.
- Comfortable working in an agile, fast-changing environment with incomplete requirements.

Preferred Qualifications:
- Experience with tools like Apache Airflow, Azure Data Factory, or similar.
- Familiarity with CI/CD and DevOps in the context of data engineering.
- Knowledge of data governance, cataloging, and access control principles.

Skills: Python, SQL, AWS, Azure, Hadoop
Posted Date not available
6 - 10 years
6 - 10 Lacs
Mumbai
Work from Office
We are seeking an experienced and driven Data Engineer with 5+ years of hands-on experience in building scalable data infrastructure and systems. You will play a key role in designing and developing robust, high-performance ETL pipelines and managing large-scale datasets to support critical business functions. This role requires deep technical expertise, strong problem-solving skills, and the ability to thrive in a fast-paced, evolving environment.

Key Responsibilities:
- Design, develop, and maintain scalable and reliable ETL/ELT pipelines for processing large volumes of data (terabytes and beyond).
- Model and structure data for performance, scalability, and usability.
- Work with cloud infrastructure (preferably Azure) to build and optimize data workflows.
- Leverage distributed computing frameworks like Apache Spark and Hadoop for large-scale data processing.
- Build and manage data lake/lakehouse architectures in alignment with best practices.
- Optimize ETL performance and manage cost-effective data operations.
- Collaborate closely with cross-functional teams including data science, analytics, and software engineering.
- Ensure data quality, integrity, and security across all stages of the data lifecycle.

Required Skills & Qualifications:
- 7 to 10 years of relevant experience in big data engineering.
- Advanced proficiency in Python.
- Strong skills in SQL for complex data manipulation and analysis.
- Hands-on experience with Apache Spark, Hadoop, or similar distributed systems.
- Proven track record of handling large-scale datasets (TBs) in production environments.
- Cloud development experience with Azure (preferred), AWS, or GCP.
- Solid understanding of data lake and data lakehouse architectures.
- Expertise in ETL performance tuning and cost optimization techniques.
- Knowledge of data structures, algorithms, and modern software engineering practices.

Soft Skills:
- Strong communication skills with the ability to explain complex technical concepts clearly and concisely.
- Self-starter who learns quickly and takes ownership.
- High attention to detail with a strong sense of data quality and reliability.
- Comfortable working in an agile, fast-changing environment with incomplete requirements.

Preferred Qualifications:
- Experience with tools like Apache Airflow, Azure Data Factory, or similar.
- Familiarity with CI/CD and DevOps in the context of data engineering.
- Knowledge of data governance, cataloging, and access control principles.

Skills: Python, SQL, AWS, Azure, Hadoop
Posted Date not available
3 - 8 years
3 - 7 Lacs
Bengaluru
Work from Office
Position/Title: Sr./Principal Engineer (DataOps/MLOps)
Department: IT
Shifts (if any): 2-11 PM

Job Summary: As a DataOps/MLOps Engineer specializing in data, you will be responsible for implementing and managing our cloud-based data infrastructure using AWS and Snowflake. You will collaborate with data engineers, data scientists, and other stakeholders to design, deploy, and maintain a robust data ecosystem that supports our analytics and business intelligence initiatives. Your expertise in modern data tech stacks, MLOps methodologies, automation, and information security will be crucial in enhancing our data pipelines and ensuring data integrity and availability.

Key Responsibilities:
- Infrastructure Management: Design, deploy, and manage AWS cloud infrastructure for data storage, processing, and analytics, ensuring high availability and scalability while adhering to security best practices.
- Data Pipeline Deployment: Collaborate with data engineering teams to deploy and maintain efficient data pipelines using tools like Apache Airflow, dbt, or similar technologies.
- Snowflake Administration: Implement and manage Snowflake data warehouse solutions, optimizing performance and ensuring data security and governance.
- MLOps Implementation: Collaborate with data scientists to implement MLOps practices, facilitating the deployment, monitoring, and governance of machine learning models in production environments.
- Information Security: Integrate security controls into all aspects of the data infrastructure, including encryption, access control, and compliance with data protection regulations (e.g., GDPR, HIPAA).
- CI/CD Implementation: Develop and maintain continuous integration and continuous deployment (CI/CD) pipelines for data-related applications and services, including model training and deployment workflows.
- Support and Troubleshooting: Deploy updates and fixes, provide Level 2 technical support, and perform root cause analysis of production errors to resolve technical issues effectively.
- Tool Development: Build tools to reduce the occurrence of errors and improve the customer experience, and develop software to integrate with internal back-end systems.
- Automation and Visualization: Develop scripts to automate data visualization and streamline reporting processes.
- System Maintenance: Design procedures for system troubleshooting and maintenance, ensuring smooth operation of the data infrastructure.
- Monitoring and Performance Tuning: Implement monitoring solutions to track data workflows and system performance, proactively identifying and resolving issues.
- Collaboration: Work closely with data scientists, analysts, and other stakeholders to understand data requirements and support analytics initiatives.
- Documentation: Create and maintain documentation for data architecture, processes, workflows, and security protocols to ensure knowledge sharing and compliance.

Qualifications:
- 3-6+ years of experience as a DataOps/MLOps engineer or in a similar engineering role.
- Strong expertise in AWS services (e.g., EC2, S3, Lambda, RDS) and cloud infrastructure best practices.
- Proficient in Snowflake, including data modeling, performance tuning, and query optimization.
- Experience with modern data technologies and tools (e.g., Apache Airflow, dbt, ETL processes).
- Familiarity with MLOps frameworks and methodologies, such as MLflow, Kubeflow, or SageMaker.
- Experience with containerization and orchestration tools (e.g., Docker, Kubernetes).
- Proficiency in scripting languages, including Python or similar, and automation frameworks.
- Proficiency with Git and GitHub workflows.
- Strong working experience with databases and SQL.
- Strong understanding of CI/CD tools and practices (e.g., Jenkins, GitLab CI).
- Excellent problem-solving attitude and collaborative team spirit.
- Strong communication skills, both verbal and written.

Preferred Qualifications:
- Experience with data governance and compliance frameworks.
- Familiarity with data visualization tools (e.g., Tableau, Looker).
- Knowledge of machine learning frameworks and concepts is a plus.
- Relevant security certifications (e.g., CISSP, CISM, AWS Certified Security) are a plus.

What We Offer:
- Competitive salary and benefits package.
- Opportunities for professional development and continuous learning.
- A collaborative and innovative work environment.
- Flexible work arrangements.
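Because the posting names MLflow among the MLOps frameworks it cares about, here is a minimal, hedged sketch of logging a training run with MLflow's tracking API. The dataset, parameters, and run name are placeholders chosen for illustration, not anything specified by the role.

```python
# Illustrative MLflow tracking sketch: log parameters, a metric, and a fitted
# scikit-learn model for one training run. All names and data are placeholders.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="example-logreg"):
    params = {"C": 1.0, "max_iter": 200}
    model = LogisticRegression(**params).fit(X_train, y_train)

    mlflow.log_params(params)                                              # hyperparameters
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, artifact_path="model")                 # model artifact
```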
Posted Date not available
5 - 10 years
7 - 15 Lacs
Chennai
Work from Office
About the Role
We're seeking a highly skilled Data Engineer with a strong development background and a passion for transforming data into valuable insights. The ideal candidate will play a key role in designing, building, and maintaining scalable data pipelines and analytics solutions that support critical business decisions.

What you'll be doing:
- Design, build, and optimize robust data pipelines for large-scale data processing
- Write complex SQL queries for data extraction, transformation, and reporting
- Collaborate with analytics and reporting teams to deliver data-driven insights
- Develop scalable solutions using programming languages such as Java, Python, and Node.js
- Integrate APIs and third-party data sources into analytics workflows
- Ensure data quality, integrity, and security across all data platforms
- Work cross-functionally to gather requirements and deliver on key business initiatives

What we expect from you:
- 5-10 years of hands-on experience in data engineering and development
- Proficiency in SQL and experience with relational and non-relational databases
- Development experience with Java, Python, Node.js, or similar languages
- Familiarity with analytics platforms, reporting tools, and data warehousing
- Solid understanding of data modeling, ETL processes, and pipeline architecture
- Excellent communication skills, both written and verbal

Tools/Technologies you will need to know:
- Experience with modern data platforms such as Snowflake, ClickHouse, BigQuery, or Redshift
- Exposure to streaming technologies like Kafka, Apache Flink, or Spark Streaming
- Knowledge of workflow orchestration tools like Apache Airflow or Prefect
- Hands-on experience with CI/CD, Docker, or Kubernetes for data deployments
- Familiarity with cloud environments like AWS, Azure, or Google Cloud Platform

Who we are looking for:
- A sharp problem-solver with strong technical instincts
- Someone who thrives in fast-paced environments and full-time development roles
- A clear communicator who can explain complex data concepts across teams
- A team player with a collaborative mindset and a passion for clean, scalable engineering
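As a small illustration of the streaming exposure this posting asks for, here is a hedged sketch of consuming events from Kafka using the kafka-python package. The topic name, broker address, consumer group, and the "load downstream" step are all assumptions; a real pipeline would add error handling and proper offset management.

```python
# Illustrative streaming-ingestion sketch with kafka-python: consume JSON events
# and flush them downstream in small batches. All names are hypothetical.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders.events",                                   # hypothetical topic
    bootstrap_servers=["localhost:9092"],              # hypothetical broker
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
    enable_auto_commit=True,
    group_id="analytics-loader",
)

batch = []
for message in consumer:
    batch.append(message.value)
    if len(batch) >= 100:                              # flush in small batches
        print(f"loading {len(batch)} events downstream")  # stand-in for a warehouse load
        batch.clear()
```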
Posted Date not available
4 - 8 years
4 - 9 Lacs
Hyderabad
Remote
Job Title: Data Engineer - GenAI Applications
Company: Amzur Technologies
Location: Hyderabad / Visakhapatnam / Remote (India)
Experience: 4-8 Years
Notice Period: Immediate to 15 Days (Preferred)
Employment Type: Full-Time

Position Overview
We are looking for a skilled and passionate Data Engineer to join our GenAI Applications team. This role offers the opportunity to work at the intersection of traditional data engineering and cutting-edge AI/ML systems, helping us build scalable, cloud-native data infrastructure to support innovative Generative AI solutions.

What We're Looking For - Required Skills & Experience:
- 4-8 years of experience in data engineering or related fields.
- Strong programming skills in Python and SQL, with experience in large-scale data processing.
- Proficient with cloud platforms (AWS, Azure, GCP) and native data services.
- Experience with open-source tools such as Apache NiFi, MLflow, or similar platforms.
- Hands-on experience with Apache Spark, Kafka, and Airflow.
- Skilled in working with both SQL and NoSQL databases, including performance tuning.
- Familiarity with modern data warehouses: Snowflake, Redshift, or BigQuery.
- Proven experience building scalable pipelines for batch and real-time processing.
- Experience implementing CI/CD pipelines and performance optimization in data workflows.

Key Responsibilities:
Data Pipeline Development
- Design and optimize robust, scalable data pipelines for AI/ML model training and inference
- Enable batch and real-time data processing using big data technologies
- Collaborate with GenAI engineers to understand and meet data requirements
Cloud Infrastructure & Tools
- Build and manage cloud-native data infrastructure using AWS, Azure, or Google Cloud
- Implement Infrastructure as Code (IaC) using tools like Terraform or CloudFormation
- Ensure data reliability through monitoring and alerting systems

Preferred Skills:
- Understanding of machine learning workflows and MLOps practices
- Familiarity with Generative AI concepts such as LLMs, RAG systems, and vector databases
- Experience implementing data quality frameworks and performance optimization
- Knowledge of model deployment pipelines and monitoring best practices
Posted Date not available
7 - 12 years
9 - 14 Lacs
Mumbai
Work from Office
We are seeking a highly skilled Senior Snowflake Developer with expertise in Python, SQL, and ETL tools to join our dynamic team. The ideal candidate will have a proven track record of designing and implementing robust data solutions on the Snowflake platform, along with strong programming skills and experience with ETL processes.

Key Responsibilities:
- Designing and developing scalable data solutions on the Snowflake platform to support business needs and analytics requirements.
- Leading the end-to-end development lifecycle of data pipelines, including data ingestion, transformation, and loading processes.
- Writing efficient SQL queries and stored procedures to perform complex data manipulations and transformations within Snowflake.
- Implementing automation scripts and tools using Python to streamline data workflows and improve efficiency.
- Collaborating with cross-functional teams to gather requirements, design data models, and deliver high-quality solutions.
- Performance tuning and optimization of Snowflake databases and queries to ensure optimal performance and scalability.
- Implementing best practices for data governance, security, and compliance within Snowflake environments.
- Mentoring junior team members and providing technical guidance and support as needed.

Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 7+ years of experience working with the Snowflake data warehouse.
- Strong proficiency in SQL with the ability to write complex queries and optimize performance.
- Extensive experience developing data pipelines and ETL processes using Python and ETL tools such as Apache Airflow, Informatica, or Talend.
- Minimum 2 years of strong Python coding experience.
- Solid understanding of data warehousing concepts, data modeling, and schema design.
- Experience working with cloud platforms such as AWS, Azure, or GCP.
- Excellent problem-solving and analytical skills with keen attention to detail.
- Strong communication and collaboration skills with the ability to work effectively in a team environment.
- Any relevant certifications in Snowflake or related technologies would be a plus.
Posted Date not available
7 - 12 years
9 - 14 Lacs
Mumbai
Work from Office
We are seeking a highly skilled Senior Snowflake Developer with expertise in Python, SQL, and ETL tools to join our dynamic team. The ideal candidate will have a proven track record of designing and implementing robust data solutions on the Snowflake platform, along with strong programming skills and experience with ETL processes.

Key Responsibilities:
- Designing and developing scalable data solutions on the Snowflake platform to support business needs and analytics requirements.
- Leading the end-to-end development lifecycle of data pipelines, including data ingestion, transformation, and loading processes.
- Writing efficient SQL queries and stored procedures to perform complex data manipulations and transformations within Snowflake.
- Implementing automation scripts and tools using Python to streamline data workflows and improve efficiency.
- Collaborating with cross-functional teams to gather requirements, design data models, and deliver high-quality solutions.
- Performance tuning and optimization of Snowflake databases and queries to ensure optimal performance and scalability.
- Implementing best practices for data governance, security, and compliance within Snowflake environments.
- Mentoring junior team members and providing technical guidance and support as needed.

Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 7+ years of experience working with the Snowflake data warehouse.
- Strong proficiency in SQL with the ability to write complex queries and optimize performance.
- Extensive experience developing data pipelines and ETL processes using Python and ETL tools such as Apache Airflow, Informatica, or Talend.
- Minimum 2 years of strong Python coding experience.
- Solid understanding of data warehousing concepts, data modeling, and schema design.
- Experience working with cloud platforms such as AWS, Azure, or GCP.
- Excellent problem-solving and analytical skills with keen attention to detail.
- Strong communication and collaboration skills with the ability to work effectively in a team environment.
- Any relevant certifications in Snowflake or related technologies would be a plus.
Posted Date not available
4 - 6 years
10 - 15 Lacs
Bengaluru
Hybrid
Primary Skills: 3-5 years of hands-on experience in Apache Airflow, any cloud platform (AWS/Azure/GCP), and Python
Secondary Skills: GitHub, ELK, Jenkins, Docker, Terraform, Amazon EKS

Scope and Responsibilities:
As a Senior Engineer with a focus on Managed Airflow Platform (MAP) support engineering, you will:
- Evangelize and cultivate adoption of Global Platforms, open-source software, and agile principles within the organization
- Ensure solutions are designed and developed using a scalable, highly resilient cloud-native architecture
- Ensure the operational stability, performance, and scalability of cloud-native platforms through proactive monitoring and timely issue resolution
- Diagnose infrastructure and system issues across cloud environments and Kubernetes clusters, and lead efforts in troubleshooting and remediation
- Collaborate with engineering and infrastructure teams to manage configurations, resource tuning, and platform upgrades without disrupting business operations
- Maintain clear, accurate runbooks, support documentation, and platform knowledge bases to enable faster onboarding and incident response
- Support observability initiatives by improving logging, metrics, dashboards, and alerting frameworks
- Advocate for operational excellence and drive continuous improvement in system reliability, cost-efficiency, and maintainability
- Work with product management to support product/service scoping activities
- Work with leadership to define delivery schedules of key features through an agile framework
- Be a key contributor to the overall architecture, framework, and design of global platforms

Required Qualifications:
- Bachelor's or Master's degree in Computer Science or a related field
- 3+ years of experience in large-scale, production-grade platform support, including participation in on-call rotations
- 3+ years of hands-on experience with cloud platforms like AWS, Azure, or GCP
- 2+ years of experience developing and supporting data pipelines using Apache Airflow, including DAG lifecycle management and scheduling best practices, and troubleshooting task failures, scheduler issues, performance bottlenecks, and error handling
- Strong programming proficiency in Python, especially for developing and troubleshooting RESTful APIs
- Working knowledge of Node.js is considered an added advantage
- 1+ years of experience in observability using the ELK stack (Elasticsearch, Logstash, Kibana) or the Grafana stack
- 2+ years of experience with DevOps and Infrastructure-as-Code tools such as GitHub, Jenkins, Docker, and Terraform
- 2+ years of hands-on experience with Kubernetes, including managing and debugging cluster resources and workloads within Amazon EKS
- Exposure to Agile and test-driven development is a plus
- Experience delivering projects in a highly collaborative, multi-discipline development team environment

Desired Qualifications:
- Experience participating in projects in a highly collaborative, multi-discipline development team environment
- Exposure to Agile, ideally a strong background with the SAFe methodology
- Skills with any monitoring or observability tool are a value add
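Since this role centers on DAG lifecycle management and troubleshooting task failures, below is a hedged Airflow 2.x sketch of the kind of retry and failure-alerting defaults a platform team might standardize for supported pipelines. The alert function is a hypothetical stand-in for a real notification integration (Slack, PagerDuty, email, and so on).

```python
# Illustrative Airflow sketch: retry policy plus a failure callback applied via
# default_args. Requires Airflow 2.4+ for the `schedule` argument.
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.bash import BashOperator

def notify_on_failure(context):
    # `context` carries task-instance details; printing stands in for a real alert.
    ti = context["task_instance"]
    print(f"ALERT: task {ti.task_id} in DAG {ti.dag_id} failed")

default_args = {
    "retries": 3,                              # retry transient failures automatically
    "retry_delay": timedelta(minutes=5),
    "execution_timeout": timedelta(hours=1),   # guard against hung tasks
    "on_failure_callback": notify_on_failure,
}

with DAG(
    dag_id="supported_pipeline_example",
    start_date=datetime(2025, 1, 1),
    schedule="@hourly",
    catchup=False,
    default_args=default_args,
) as dag:
    BashOperator(task_id="run_batch_step", bash_command="echo 'running batch step'")
```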
Posted Date not available
6 - 11 years
20 - 30 Lacs
Pune
Hybrid
Hi Candidates,

Greetings. Please apply here, as we have job openings in one of our MNC companies - chandrakala.c@i-q.co

Role & responsibilities

JD: Data Engineer (Python enterprise developer):
- 6+ years of experience in Python scripting.
- Proficient in developing applications in the Python language.
- Exposure to Python-oriented algorithm libraries such as NumPy, pandas, Beautiful Soup, Selenium, pdfplumber, Requests, etc.
- Proficient in SQL programming, including PostgreSQL.
- Knowledge of DevOps practices such as CI/CD, Jenkins, and Git.
- Experience working with AWS (S3) and Azure Databricks.
- Experience delivering projects with Agile and Scrum methodology.
- Able to coordinate with teams across multiple locations and time zones.
- Strong interpersonal and communication skills, with an ability to lead a team and keep them motivated.

Mandatory Skills: Python, PostgreSQL, Azure Databricks, AWS (S3), Git, Azure DevOps CI/CD, Apache Airflow
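To illustrate the scripting stack this JD lists (Requests, pdfplumber, pandas), here is a minimal, hedged sketch that downloads a PDF report, extracts text from its first page, and collects simple line-level records into a DataFrame. The URL and the parsing rule are hypothetical and only meant to show how the libraries fit together.

```python
# Illustrative scripting sketch: requests + pdfplumber + pandas.
# The report URL and the "item <amount>" parsing rule are hypothetical.
import io
import pandas as pd
import pdfplumber
import requests

url = "https://example.com/reports/monthly.pdf"      # hypothetical report URL
response = requests.get(url, timeout=30)
response.raise_for_status()

rows = []
with pdfplumber.open(io.BytesIO(response.content)) as pdf:
    first_page = pdf.pages[0]
    for line in (first_page.extract_text() or "").splitlines():
        # Keep lines that end with a numeric amount (illustrative rule only).
        parts = line.rsplit(maxsplit=1)
        if len(parts) == 2 and parts[1].replace(".", "", 1).isdigit():
            rows.append({"item": parts[0], "amount": float(parts[1])})

df = pd.DataFrame(rows)
print(df.head())
```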
Posted Date not available
4 - 9 years
16 - 27 Lacs
Bengaluru
Remote
Role & responsibilities:
- Understanding customer requirements and project KPIs
- Implementing various development, testing, and automation tools, and IT infrastructure
- Planning the team structure and activities, and involvement in project management activities
- Managing stakeholders and external interfaces
- Setting up tools and required infrastructure
- Defining and setting development, test, release, update, and support processes for DevOps operation
- Having the technical skill to review, verify, and validate the software code developed in the project
- Troubleshooting and fixing code bugs
- Monitoring the processes during the entire lifecycle for adherence, and updating or creating new processes for improvement and minimizing waste
- Encouraging and building automated processes wherever possible
- Incident management and root cause analysis
- Coordination and communication within the team and with customers
- Selecting and deploying appropriate CI/CD tools
- Striving for continuous improvement and building a continuous integration, continuous development, and continuous deployment pipeline (CI/CD pipeline)
- Mentoring and guiding team members
- Monitoring and measuring customer experience and KPIs
- Managing periodic reporting on progress to management and the customer

Preferred candidate profile:
- Bachelor's degree in computer science or software engineering
- Deep understanding of private and public cloud architectures
- Experience in Azure services, Terraform, Kubernetes, and service mesh
- Expertise with open stack technologies
- Design and develop ETL pipelines (using Java, Scala, Python)
- Ingestion of data to and from RDBMS to Cassandra/PostgreSQL/YugabyteDB
- Manage jobs using Apache Airflow or Oozie
- Adept with the agile software development lifecycle and DevOps principles
- Prior experience in Informatica PowerCenter or any other ETL tool
- Excellent organizational, communication, and presentation skills
Posted Date not available