
266 Athena Jobs - Page 7

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3 - 6 years

10 - 15 Lacs

Pune

Work from Office

Role & responsibilities

Requirements:
- 3+ years of hands-on experience with AWS services including EMR, Glue, Athena, Lambda, SQS, OpenSearch, CloudWatch, VPC, IAM, AWS Managed Airflow, security groups, S3, RDS, and DynamoDB.
- Proficiency in Linux and experience with management tools such as Apache Airflow and Terraform.
- Familiarity with CI/CD tools, particularly GitLab.

Responsibilities:
- Design, deploy, and maintain scalable and secure cloud and on-premises infrastructure.
- Monitor and optimize the performance and reliability of systems and applications.
- Implement and manage continuous integration and continuous deployment (CI/CD) pipelines.
- Collaborate with development teams to integrate new applications and services into existing infrastructure.
- Conduct regular security assessments and audits to ensure compliance with industry standards.
- Provide support and troubleshooting assistance for infrastructure-related issues.
- Create and maintain detailed documentation for infrastructure configurations and processes.
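
For a role like this, orchestration usually means Airflow DAGs that call AWS services such as Athena. The following is a hedged, minimal sketch only (the DAG name, query, database, bucket, and connection ID are assumptions, not details from the posting), assuming Airflow 2.4+ with the Amazon provider package installed:

```python
from datetime import datetime

from airflow import DAG
# AthenaOperator ships with the apache-airflow-providers-amazon package.
from airflow.providers.amazon.aws.operators.athena import AthenaOperator

with DAG(
    dag_id="daily_athena_rollup",   # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    rollup = AthenaOperator(
        task_id="aggregate_events",
        query="""
            SELECT event_date, count(*) AS events
            FROM raw_events            -- hypothetical table
            GROUP BY event_date
        """,
        database="analytics",          # hypothetical database
        output_location="s3://example-athena-results/rollups/",  # placeholder bucket
        aws_conn_id="aws_default",
    )
```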

Posted 2 months ago

Apply

5 - 10 years

20 - 32 Lacs

Chennai, Bengaluru, Hyderabad

Hybrid

Hiring for a Senior AWS Data Engineer with mandatory experience in AWS, Python, SQL, PySpark, Airflow & Athena. Location: Bangalore/Hyderabad/Chennai & Coimbatore. Hybrid mode (3 days work from office).

Posted 2 months ago

Apply

5 - 10 years

25 - 30 Lacs

Bengaluru

Work from Office

Your opportunity

Do you love the transformative impact data can have on a business? Are you motivated to push for results and overcome all obstacles? Then we have a role for you. New Relic is looking for a Senior Data Engineer to help grow our global engineering team.

What you'll do
- Lead the building of scalable, fault-tolerant pipelines with built-in data quality checks that transform, load, and curate data from various internal and external systems.
- Provide leadership to cross-functional initiatives and projects. Influence architecture design and decisions.
- Build cross-functional relationships with Data Scientists, Product Managers, and Software Engineers to understand data needs and deliver on those needs.
- Improve engineering processes and cross-team collaboration.
- Provide thought leadership to grow and evolve the DE function and implement SDLC best practices in building internal-facing data products by staying up to date with industry trends, emerging technologies, and best practices in data engineering.

This role requires
- 5+ years of experience in BI and Data Warehousing.
- Experience and knowledge of building data lakes in AWS (e.g., Spark/Glue, Athena), including data modeling, data quality best practices, and self-service tooling.
- Demonstrated success leading cross-functional initiatives.
- Passion for data quality, code quality, SLAs, and continuous improvement.
- Deep understanding of data system architecture and ETL/ELT patterns.
- Development experience in at least one object-oriented language (Python, R, Scala, etc.).
- Comfort with SQL and related tooling.

Bonus points if you have
- Experience with dbt, Airflow, and Snowflake.
- Experience with Apache Iceberg tables.
- Data observability experience.
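
Postings like this emphasize pipelines with built-in data quality checks. As a hedged sketch (bucket paths, column names, and thresholds below are invented for illustration, not taken from the posting), a PySpark step might validate a batch before writing it to the curated zone of a data lake:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("curate_orders").getOrCreate()

# Hypothetical raw-zone path; replace with the real location.
raw = spark.read.parquet("s3://example-lake/raw/orders/")

# Simple built-in quality checks: reject the batch if keys are missing
# or if the row count collapses unexpectedly.
row_count = raw.count()
null_keys = raw.filter(F.col("order_id").isNull()).count()

if row_count == 0 or null_keys > 0:
    raise ValueError(
        f"Quality check failed: rows={row_count}, null order_id={null_keys}"
    )

# Write the validated batch to the curated zone, partitioned by date.
(raw.withColumn("order_date", F.to_date("created_at"))
    .write.mode("append")
    .partitionBy("order_date")
    .parquet("s3://example-lake/curated/orders/"))
```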

Posted 2 months ago

Apply

9 - 14 years

35 - 45 Lacs

Hyderabad

Remote

Senior Data Engineer (SQL, Python & AWS)

Experience: 9-15 years
Salary: INR 35,00,000-45,00,000 / year
Preferred Notice Period: Within 30 Days
Shift: 5:30 AM to 2:30 PM IST
Opportunity Type: Remote
Placement Type: Permanent
(Note: This is a requirement for one of Uplers' clients.)

Must-have skills: Airflow, ETL pipelines, PostgreSQL Aurora database, PowerBI, AWS, Python, REST API, SQL
Good-to-have skills: Athena, data lake architecture, Glue, Lambda, JSON, Redshift, Tableau

A leading proptech company (one of Uplers' clients) is looking for a Data Engineer (WFH) who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, we want to hear from you.

Role Overview
We are seeking an experienced Data Engineer to join our team of passionate professionals working on cutting-edge technology. In this role, you will be responsible for the ELT of our company's data using Python, Airflow, and SQL within an AWS environment. Additionally, you will create and maintain data visualizations and dashboards using PowerBI, connecting to our SQL Server and PostgreSQL Aurora databases through a gateway. This role requires strong critical thinking, the ability to assess data and outcomes, and proactive problem-solving skills.

Responsibilities:
- Design, develop, and maintain ELT pipelines using Python, Airflow, and SQL in an AWS environment.
- Create and manage data lake and data warehouse solutions on AWS.
- Develop and maintain data-driven dashboards and reporting solutions in PowerBI.
- Connect PowerBI to SQL Server and PostgreSQL Aurora databases using a gateway.
- Extract and integrate data from third-party APIs to populate the data lake.
- Perform data profiling and source system analysis to ensure data quality and integrity.
- Collaborate with business stakeholders to capture and understand data requirements.
- Implement industry best practices for data engineering and visualization.
- Participate in architectural decisions and contribute to the continuous improvement of data solutions.
- Follow agile practices and a Lean approach in project development.
- Critically assess the outcomes of your work to ensure they align with expectations before marking tasks as complete.
- Optimize SQL queries for performance and ensure efficient database operations.
- Perform database tuning and optimization as needed.
- Proactively identify and present alternative solutions to achieve desired outcomes.
- Take ownership of end-to-end data-related demands, from data extraction (whether from internal databases or third-party apps) to understanding the data, engaging with relevant people when needed, and delivering meaningful solutions.

Required Skills and Experience:
- At least 9 years of experience preferred.
- Strong critical thinking skills to assess outcomes, evaluate results, and suggest better alternatives where appropriate.
- Expert-level proficiency in SQL (T-SQL, MS SQL) with a strong focus on optimizing queries for performance.
- Extensive experience with Python (including data-specific libraries) and Airflow for ELT processes.
- Proven ability to extract and manage data from third-party APIs.
- Proven experience designing and developing data warehousing solutions on the AWS cloud platform.
- Strong expertise in PowerBI for data visualization and dashboard creation.
- Familiarity with connecting PowerBI to SQL Server and PostgreSQL Aurora databases.
- Experience with REST APIs and JSON.
- Agile development experience with a focus on continuous delivery and improvement.
- Proactive mindset, able to suggest alternative approaches to achieve goals efficiently.
- Excellent problem-solving skills and a can-do attitude.
- Strong communication skills and the ability to work collaboratively in a team environment.
- Ability to independently assess data, outcomes, and potential gaps to ensure results align with business goals.
- Ability to perform database tuning and optimization to ensure efficient data operations.

Desired Skills:
- Exposure to AWS cloud data services such as Redshift, Athena, Lambda, Glue, etc.
- Experience with other reporting tools like Tableau.
- Knowledge of data lake architectures and best practices.

How to apply for this opportunity (easy 3-step process):
1. Click on Apply and register or log in on our portal.
2. Upload your updated resume and complete the screening form.
3. Increase your chances of being shortlisted and meet the client for the interview.

About Our Client: We are a cloud-based residential sales platform designed to bridge the communication gap between clients, sales teams, and construction teams. Our goal is to ensure seamless collaboration, resulting in buildable and well-aligned residential projects. As builders with a strong tech foundation, we bring deep industry expertise to every solution we create.

About Uplers: Our goal is to make hiring and getting hired reliable, simple, and fast. Our role is to help our talent find and apply for relevant product and engineering job opportunities and progress in their careers. (Note: There are many more opportunities on the portal.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
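
Several of the responsibilities above center on pulling data from third-party APIs into a data lake before transformation. A minimal, hedged sketch of that extraction step (the endpoint, bucket, and key layout are placeholders, not details from this posting) could look like this:

```python
import json
from datetime import datetime, timezone

import boto3
import requests

API_URL = "https://api.example.com/v1/listings"   # placeholder endpoint
BUCKET = "example-data-lake"                       # placeholder bucket

def extract_to_s3() -> str:
    """Fetch one page of records from a REST API and land it as JSON in S3."""
    response = requests.get(API_URL, params={"page": 1}, timeout=30)
    response.raise_for_status()
    records = response.json()

    # Partition the raw zone by load date so downstream ELT jobs
    # (Airflow + SQL) can process increments.
    load_date = datetime.now(timezone.utc).strftime("%Y-%m-%d")
    key = f"raw/listings/load_date={load_date}/page_1.json"

    boto3.client("s3").put_object(
        Bucket=BUCKET,
        Key=key,
        Body=json.dumps(records).encode("utf-8"),
    )
    return key

if __name__ == "__main__":
    print(f"Landed raw extract at s3://{BUCKET}/{extract_to_s3()}")
```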

Posted 2 months ago

Apply

4 - 8 years

12 - 18 Lacs

Bengaluru

Work from Office

- Experience in Spark with Scala, Hive, and big data technologies
- Experience in Scala and object-oriented concepts
- Experience in HDFS, Spark, Hive, and Oozie; data models, data mining, and partitioning techniques
- Experience with SQL databases, CI/CD tools (Maven, Git, Jenkins), and SONAR

Posted 2 months ago

Apply

6 - 11 years

11 - 19 Lacs

Pune

Work from Office

Job location: Pune (Hinjewadi). Monday to Friday, work from office.

About the company: A leading provider of data management and analytics solutions, offering an AI-powered platform designed to help organizations efficiently manage, integrate, and analyze their data. Our comprehensive suite of tools includes robust data management capabilities, a powerful business rule engine, and advanced analytics features. By leveraging its platform, organizations can enhance their data-driven decision-making processes, improve operational efficiency, and drive business value through better insights and innovation.
Headquarters: New York, New York
Founded in 201
Specialties: data management, analytics, artificial intelligence, predictive analytics, machine learning, enterprise platform, financial services, banking, and insurance.

Job Title: Senior Data Engineer
Location: Pune
Job Type: Full-Time
Reports To: Data Engineering Manager / Director of Data Engineering

Key Responsibilities:
- Data Architecture & Design: Develop and maintain scalable and efficient data pipelines to process large datasets.
- ETL Processes: Design, implement, and optimize ETL processes using modern tools and frameworks.
- Data Integration: Integrate data from various sources into a centralized data warehouse or data lake, ensuring data quality, security, and governance.
- Cloud Platforms: Design and implement data solutions on cloud platforms such as AWS, Azure, or GCP.
- Optimization: Identify and resolve performance bottlenecks in data processing and queries.
- Collaboration: Work closely with data scientists, analysts, and business teams to understand data requirements and deliver actionable insights.
- Tool Proficiency: Leverage Databricks or Snowflake for data management, transformation, and analytics.
- Mentorship: Provide technical leadership and mentorship to junior data engineers.
- Documentation: Maintain clear documentation for data processes, architectures, and systems.

Required Skills & Qualifications:
- Experience: Minimum of 5 years of experience in data engineering or a related field, with a proven track record of building robust data pipelines.
- Data Warehousing & ETL: Strong knowledge of data warehousing concepts and experience with ETL tools such as Apache Airflow, Talend, or custom Python-based pipelines.
- Cloud Platforms: Hands-on experience with cloud technologies (AWS, Azure, or GCP) for data engineering and management.
- Databricks or Snowflake: Expertise in one of the following:
  - Databricks: Experience with Spark, Delta Lake, and managing big data workloads.
  - Snowflake: Proficiency in using Snowflake for data warehousing, data integration, and analytics.
- Programming Skills: Proficient in Python, SQL, and other data engineering languages (e.g., Java, Scala).
- Data Modeling: Experience in designing and implementing complex data models for large-scale applications.
- Data Governance & Quality: Knowledge of data governance best practices, including ensuring data quality, security, and compliance.
- Collaboration & Communication: Excellent communication skills, both technical and non-technical, to collaborate with cross-functional teams.

Preferred Qualifications:
- Experience with containerization (Docker, Kubernetes).
- Familiarity with machine learning workflows or AI pipelines.
- Strong understanding of DevOps principles for continuous integration and deployment (CI/CD).

Benefits:
- Competitive salary and performance-based bonuses
- Health, dental, and vision insurance
- 401(k) plan with company match
- Flexible work hours and remote work options
- Career growth opportunities and professional development support

Posted 2 months ago

Apply

2 - 6 years

12 - 16 Lacs

Pune

Work from Office

As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support and guidance to project teams on complex coding, issue resolution, and execution. Your primary responsibilities include:
- Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements.
- Strive for continuous improvement by testing the built solution and working under an agile framework.
- Discover and implement the latest technology trends to maximize and build creative solutions.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Design and develop data solutions: design and implement efficient data processing pipelines using AWS services such as AWS Glue, AWS Lambda, Amazon S3, and Amazon Redshift.
- Develop and manage ETL (Extract, Transform, Load) workflows to clean, transform, and load data into structured and unstructured storage systems.
- Build scalable data models and storage solutions in Amazon Redshift, DynamoDB, and other AWS services.
- Data integration: integrate data from multiple sources, including relational databases, third-party APIs, and internal systems, to create a unified data ecosystem.
- Work with data engineers to optimize data workflows and ensure data consistency, reliability, and performance.
- Automation and optimization: automate data pipeline processes to ensure efficiency.

Preferred technical and professional experience:
- Define, drive, and implement an architecture strategy and standards for end-to-end monitoring.
- Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering.
- Good to have: experience with detection and prevention tools for company products, the platform, and customer-facing systems.
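
The AWS-focused expertise above (Glue, Lambda, S3, Redshift) often translates into event-driven pipelines where a Lambda function kicks off a Glue ETL job. A hedged, minimal sketch follows (the Glue job name and argument names are placeholders, not details from this posting):

```python
import boto3

glue = boto3.client("glue")

def lambda_handler(event, context):
    """Start a Glue ETL job when a new object lands in S3."""
    # S3 put-event records carry the bucket and key of the new object.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    response = glue.start_job_run(
        JobName="curate-raw-events",            # placeholder Glue job name
        Arguments={
            "--source_path": f"s3://{bucket}/{key}",
            "--target_path": "s3://example-curated/events/",  # placeholder
        },
    )
    return {"job_run_id": response["JobRunId"]}
```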

Posted 2 months ago

Apply

2 - 7 years

27 - 42 Lacs

Bangalore Rural

Hybrid

Note: We prefer candidates from product organizations and premium engineering institutes.

Data Platform Engineer:
- Assist team members in designing and building data infrastructure at scale. We handle petabytes of data each day through streaming and batch processing. You will help deliver data to our data lake for use by our Data Warehouse team, Analytics teams, and Data Scientists.
- Work on Data Lakehouse system architecture, data ingestion/pipelining, and tools to automate and orchestrate for performance, reliability, and operational efficiency.
- Define both batch and streaming data-parallel processing pipelines and distributed processing back-ends.
- Build CI/CD pipelines and manage configuration management.
- Build tools and services that run on Kubernetes and are part of our data ecosystem.
- Routinely write efficient, legible, and well-commented Python.
- Communicate clearly on complex, technical topics.
- Help scale our data warehouse (we use Snowflake) for clean, data-ready delivery for analysis.
- Work closely with Analytics Engineers and Data Analysts on the collection and analysis of raw data for models that empower end users.
- Build and scale our warehouse platform for data ingest, logging, search, aggregation, viewing, and analysis.

What we're looking for (Data Platform Engineer):
- 5+ years of professional experience developing with Python and/or Java.
- 3+ years of professional scripting experience (Unix, bash, Python).
- AWS certification or equivalent experience.
- Terraform or other IaC tools (Terraform preferred).
- Experience with streaming data: Apache Beam, Flink, Spark, and Kafka.
- Experience with modern data technologies such as Airflow, Snowflake, Redshift, and Spark.
- Knowledge of source control, gitflow, gitlabflow, and CI/CD (GitLab, CircleCI).
- Knowledge of and experience with Kubernetes, Docker, and Helm.
- Experience with automation and orchestration tools.
- Bachelor's degree or equivalent in computer science, information systems, or a combination of education and related experience.

Data Engineer - What you'll do:
- Lead the building of scalable, fault-tolerant pipelines with built-in data quality checks that transform, load, and curate data from various internal and external systems.
- Provide leadership to cross-functional initiatives and projects. Influence architecture design and decisions.
- Build cross-functional relationships with Data Scientists, Product Managers, and Software Engineers to understand data needs and deliver on those needs.
- Improve engineering processes and cross-team collaboration. Ruthlessly prioritize work to align with company priorities.
- Provide thought leadership to grow and evolve the DE function and implement SDLC best practices in building internal-facing data products by staying up to date with industry trends, emerging technologies, and best practices in data engineering.

What we're looking for (Data Engineer):
- 3-12 years of experience in BI and Data Warehousing.
- Minimum 3 years of experience leading data teams in a high-volume environment.
- Minimum 4 years of experience with dbt, Airflow, and Snowflake.
- Experience with Apache Iceberg tables.
- Experience and knowledge of building data lakes in AWS (e.g., Spark/Glue, Athena), including data modeling, data quality best practices, and self-service tooling.
- Experience mentoring data professionals from junior to senior levels.
- Demonstrated success leading cross-functional initiatives.
- Passion for data quality, code quality, SLAs, and continuous improvement.
- Deep understanding of data system architecture and ETL/ELT patterns.
- Development experience in at least one object-oriented language (Python, R, Scala, etc.).
- Comfort with SQL and related tooling.
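
The streaming requirements above (Kafka plus Spark) usually mean a structured-streaming ingest into the lake. A hedged sketch only (broker, topic, and S3 paths are invented; it assumes the spark-sql-kafka connector is on the classpath):

```python
from pyspark.sql import SparkSession, functions as F

# Requires the spark-sql-kafka connector, e.g.
# --packages org.apache.spark:spark-sql-kafka-0-10_2.12:<spark version>.
spark = SparkSession.builder.appName("events_stream_to_lake").getOrCreate()

# Hypothetical broker and topic names; replace with real ones.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")
    .option("subscribe", "events")
    .load()
)

# Kafka values arrive as bytes; parse them and add an ingest date
# so the data lake stays partitioned for downstream batch jobs.
parsed = (
    events.select(F.col("value").cast("string").alias("payload"))
    .withColumn("ingest_date", F.current_date())
)

query = (
    parsed.writeStream.format("parquet")
    .option("path", "s3://example-lake/raw/events/")              # placeholder
    .option("checkpointLocation", "s3://example-lake/checkpoints/events/")
    .partitionBy("ingest_date")
    .start()
)
query.awaitTermination()
```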

Posted 2 months ago

Apply

2 - 7 years

27 - 42 Lacs

Bengaluru

Hybrid

Note: We prefer candidates from product organizations and premium engineering institutes.

Data Platform Engineer:
- Assist team members in designing and building data infrastructure at scale. We handle petabytes of data each day through streaming and batch processing. You will help deliver data to our data lake for use by our Data Warehouse team, Analytics teams, and Data Scientists.
- Work on Data Lakehouse system architecture, data ingestion/pipelining, and tools to automate and orchestrate for performance, reliability, and operational efficiency.
- Define both batch and streaming data-parallel processing pipelines and distributed processing back-ends.
- Build CI/CD pipelines and manage configuration management.
- Build tools and services that run on Kubernetes and are part of our data ecosystem.
- Routinely write efficient, legible, and well-commented Python.
- Communicate clearly on complex, technical topics.
- Help scale our data warehouse (we use Snowflake) for clean, data-ready delivery for analysis.
- Work closely with Analytics Engineers and Data Analysts on the collection and analysis of raw data for models that empower end users.
- Build and scale our warehouse platform for data ingest, logging, search, aggregation, viewing, and analysis.

What we're looking for (Data Platform Engineer):
- 5+ years of professional experience developing with Python and/or Java.
- 3+ years of professional scripting experience (Unix, bash, Python).
- AWS certification or equivalent experience.
- Terraform or other IaC tools (Terraform preferred).
- Experience with streaming data: Apache Beam, Flink, Spark, and Kafka.
- Experience with modern data technologies such as Airflow, Snowflake, Redshift, and Spark.
- Knowledge of source control, gitflow, gitlabflow, and CI/CD (GitLab, CircleCI).
- Knowledge of and experience with Kubernetes, Docker, and Helm.
- Experience with automation and orchestration tools.
- Bachelor's degree or equivalent in computer science, information systems, or a combination of education and related experience.

Data Engineer - What you'll do:
- Lead the building of scalable, fault-tolerant pipelines with built-in data quality checks that transform, load, and curate data from various internal and external systems.
- Provide leadership to cross-functional initiatives and projects. Influence architecture design and decisions.
- Build cross-functional relationships with Data Scientists, Product Managers, and Software Engineers to understand data needs and deliver on those needs.
- Improve engineering processes and cross-team collaboration. Ruthlessly prioritize work to align with company priorities.
- Provide thought leadership to grow and evolve the DE function and implement SDLC best practices in building internal-facing data products by staying up to date with industry trends, emerging technologies, and best practices in data engineering.

What we're looking for (Data Engineer):
- 3-12 years of experience in BI and Data Warehousing.
- Minimum 3 years of experience leading data teams in a high-volume environment.
- Minimum 4 years of experience with dbt, Airflow, and Snowflake.
- Experience with Apache Iceberg tables.
- Experience and knowledge of building data lakes in AWS (e.g., Spark/Glue, Athena), including data modeling, data quality best practices, and self-service tooling.
- Experience mentoring data professionals from junior to senior levels.
- Demonstrated success leading cross-functional initiatives.
- Passion for data quality, code quality, SLAs, and continuous improvement.
- Deep understanding of data system architecture and ETL/ELT patterns.
- Development experience in at least one object-oriented language (Python, R, Scala, etc.).
- Comfort with SQL and related tooling.

Posted 2 months ago

Apply

2 - 7 years

27 - 42 Lacs

Bangalore Rural

Hybrid

We prefer candidates from product organizations and premium engineering institutes. We are hiring for our client, an Indian multinational technology services company based in Pune, primarily engaged in cloud computing, the Internet of Things, endpoint security, big data analytics, and software product engineering services.

Data Platform Engineer:
- Assist team members in designing and building data infrastructure at scale. We handle petabytes of data each day through streaming and batch processing. You will help deliver data to our data lake for use by our Data Warehouse team, Analytics teams, and Data Scientists.
- Work on Data Lakehouse system architecture, data ingestion/pipelining, and tools to automate and orchestrate for performance, reliability, and operational efficiency.
- Define both batch and streaming data-parallel processing pipelines and distributed processing back-ends.
- Build CI/CD pipelines and manage configuration management.
- Build tools and services that run on Kubernetes and are part of our data ecosystem.
- Routinely write efficient, legible, and well-commented Python.
- Communicate clearly on complex, technical topics.
- Help scale our data warehouse (we use Snowflake) for clean, data-ready delivery for analysis.
- Work closely with Analytics Engineers and Data Analysts on the collection and analysis of raw data for models that empower end users.
- Build and scale our warehouse platform for data ingest, logging, search, aggregation, viewing, and analysis.

What we're looking for (Data Platform Engineer):
- 5+ years of professional experience developing with Python and/or Java.
- 3+ years of professional scripting experience (Unix, bash, Python).
- AWS certification or equivalent experience.
- Terraform or other IaC tools (Terraform preferred).
- Experience with streaming data: Apache Beam, Flink, Spark, and Kafka.
- Experience with modern data technologies such as Airflow, Snowflake, Redshift, and Spark.
- Knowledge of source control, gitflow, gitlabflow, and CI/CD (GitLab, CircleCI).
- Knowledge of and experience with Kubernetes, Docker, and Helm.
- Experience with automation and orchestration tools.
- Bachelor's degree or equivalent in computer science, information systems, or a combination of education and related experience.

Data Engineer - What you'll do:
- Lead the building of scalable, fault-tolerant pipelines with built-in data quality checks that transform, load, and curate data from various internal and external systems.
- Provide leadership to cross-functional initiatives and projects. Influence architecture design and decisions.
- Build cross-functional relationships with Data Scientists, Product Managers, and Software Engineers to understand data needs and deliver on those needs.
- Improve engineering processes and cross-team collaboration. Ruthlessly prioritize work to align with company priorities.
- Provide thought leadership to grow and evolve the DE function and implement SDLC best practices in building internal-facing data products by staying up to date with industry trends, emerging technologies, and best practices in data engineering.

What we're looking for (Data Engineer):
- 3-12 years of experience in BI and Data Warehousing.
- Minimum 3 years of experience leading data teams in a high-volume environment.
- Minimum 4 years of experience with dbt, Airflow, and Snowflake.
- Experience with Apache Iceberg tables.
- Experience and knowledge of building data lakes in AWS (e.g., Spark/Glue, Athena), including data modeling, data quality best practices, and self-service tooling.
- Experience mentoring data professionals from junior to senior levels.
- Demonstrated success leading cross-functional initiatives.
- Passion for data quality, code quality, SLAs, and continuous improvement.
- Deep understanding of data system architecture and ETL/ELT patterns.
- Development experience in at least one object-oriented language (Python, R, Scala, etc.).
- Comfort with SQL and related tooling.

Posted 2 months ago

Apply

2 - 7 years

27 - 42 Lacs

Bengaluru

Hybrid

We prefer candidates from product organizations and premium engineering institutes. We are hiring for our client, an Indian multinational technology services company based in Pune, primarily engaged in cloud computing, the Internet of Things, endpoint security, big data analytics, and software product engineering services.

Data Platform Engineer:
- Assist team members in designing and building data infrastructure at scale. We handle petabytes of data each day through streaming and batch processing. You will help deliver data to our data lake for use by our Data Warehouse team, Analytics teams, and Data Scientists.
- Work on Data Lakehouse system architecture, data ingestion/pipelining, and tools to automate and orchestrate for performance, reliability, and operational efficiency.
- Define both batch and streaming data-parallel processing pipelines and distributed processing back-ends.
- Build CI/CD pipelines and manage configuration management.
- Build tools and services that run on Kubernetes and are part of our data ecosystem.
- Routinely write efficient, legible, and well-commented Python.
- Communicate clearly on complex, technical topics.
- Help scale our data warehouse (we use Snowflake) for clean, data-ready delivery for analysis.
- Work closely with Analytics Engineers and Data Analysts on the collection and analysis of raw data for models that empower end users.
- Build and scale our warehouse platform for data ingest, logging, search, aggregation, viewing, and analysis.

What we're looking for (Data Platform Engineer):
- 5+ years of professional experience developing with Python and/or Java.
- 3+ years of professional scripting experience (Unix, bash, Python).
- AWS certification or equivalent experience.
- Terraform or other IaC tools (Terraform preferred).
- Experience with streaming data: Apache Beam, Flink, Spark, and Kafka.
- Experience with modern data technologies such as Airflow, Snowflake, Redshift, and Spark.
- Knowledge of source control, gitflow, gitlabflow, and CI/CD (GitLab, CircleCI).
- Knowledge of and experience with Kubernetes, Docker, and Helm.
- Experience with automation and orchestration tools.
- Bachelor's degree or equivalent in computer science, information systems, or a combination of education and related experience.

Data Engineer - What you'll do:
- Lead the building of scalable, fault-tolerant pipelines with built-in data quality checks that transform, load, and curate data from various internal and external systems.
- Provide leadership to cross-functional initiatives and projects. Influence architecture design and decisions.
- Build cross-functional relationships with Data Scientists, Product Managers, and Software Engineers to understand data needs and deliver on those needs.
- Improve engineering processes and cross-team collaboration. Ruthlessly prioritize work to align with company priorities.
- Provide thought leadership to grow and evolve the DE function and implement SDLC best practices in building internal-facing data products by staying up to date with industry trends, emerging technologies, and best practices in data engineering.

What we're looking for (Data Engineer):
- 3-12 years of experience in BI and Data Warehousing.
- Minimum 3 years of experience leading data teams in a high-volume environment.
- Minimum 4 years of experience with dbt, Airflow, and Snowflake.
- Experience with Apache Iceberg tables.
- Experience and knowledge of building data lakes in AWS (e.g., Spark/Glue, Athena), including data modeling, data quality best practices, and self-service tooling.
- Experience mentoring data professionals from junior to senior levels.
- Demonstrated success leading cross-functional initiatives.
- Passion for data quality, code quality, SLAs, and continuous improvement.
- Deep understanding of data system architecture and ETL/ELT patterns.
- Development experience in at least one object-oriented language (Python, R, Scala, etc.).
- Comfort with SQL and related tooling.

Posted 2 months ago

Apply

3 - 7 years

6 - 16 Lacs

Bengaluru

Work from Office

Job Description: AWS Data Engineer

We are seeking experienced AWS Data Engineers to design, implement, and maintain robust data pipelines and analytics solutions using AWS services. The ideal candidate will have a strong background in AWS data services, big data technologies, and programming languages.

Experience: 3 to 7 years
Location: Bangalore, Pune, Hyderabad, Coimbatore, Delhi NCR, Mumbai

Key Responsibilities:
1. Design and implement scalable, high-performance data pipelines using AWS services
2. Develop and optimize ETL processes using AWS Glue, EMR, and Lambda
3. Build and maintain data lakes using S3 and Delta Lake
4. Create and manage analytics solutions using Amazon Athena and Redshift
5. Design and implement database solutions using Aurora, RDS, and DynamoDB
6. Develop serverless workflows using AWS Step Functions
7. Write efficient and maintainable code using Python/PySpark and SQL/PostgreSQL
8. Ensure data quality, security, and compliance with industry standards
9. Collaborate with data scientists and analysts to support their data needs
10. Optimize data architecture for performance and cost-efficiency
11. Troubleshoot and resolve data pipeline and infrastructure issues

Technical Skills:
- AWS Services: Glue, EMR, Lambda, Athena, Redshift, S3, Aurora, RDS, DynamoDB, Step Functions
- Big Data: Hadoop, Spark, Delta Lake
- Programming: Python, PySpark
- Databases: SQL, PostgreSQL, NoSQL
- Data Warehousing and Analytics
- ETL/ELT processes
- Data Lake architectures
- Version control: Git
- Agile methodologies
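
Responsibility 4 above (analytics with Amazon Athena and Redshift) typically means running SQL against data in S3 and waiting for the result programmatically. A hedged sketch using boto3 (database, output bucket, table, and query are placeholders, not details from this posting):

```python
import time

import boto3

athena = boto3.client("athena")

def run_athena_query(sql: str) -> str:
    """Submit a query to Athena and block until it finishes."""
    start = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": "analytics"},            # placeholder
        ResultConfiguration={
            "OutputLocation": "s3://example-athena-results/adhoc/"  # placeholder
        },
    )
    query_id = start["QueryExecutionId"]

    while True:
        state = athena.get_query_execution(QueryExecutionId=query_id)[
            "QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(2)

    if state != "SUCCEEDED":
        raise RuntimeError(f"Query {query_id} finished in state {state}")
    return query_id

# Example usage: summarize a day of events from a hypothetical table over S3 data.
qid = run_athena_query(
    "SELECT event_type, count(*) FROM events "
    "WHERE event_date = date '2024-01-01' GROUP BY event_type"
)
print(athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"][:5])
```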

Posted 2 months ago

Apply

3 - 8 years

6 - 16 Lacs

Mumbai

Work from Office

Job Description: AWS Data Engineer

We are seeking experienced AWS Data Engineers to design, implement, and maintain robust data pipelines and analytics solutions using AWS services. The ideal candidate will have a strong background in AWS data services, big data technologies, and programming languages.

Experience: 3 to 7 years
Location: Bangalore, Pune, Hyderabad, Coimbatore, Delhi NCR, Mumbai

Key Responsibilities:
1. Design and implement scalable, high-performance data pipelines using AWS services
2. Develop and optimize ETL processes using AWS Glue, EMR, and Lambda
3. Build and maintain data lakes using S3 and Delta Lake
4. Create and manage analytics solutions using Amazon Athena and Redshift
5. Design and implement database solutions using Aurora, RDS, and DynamoDB
6. Develop serverless workflows using AWS Step Functions
7. Write efficient and maintainable code using Python/PySpark and SQL/PostgreSQL
8. Ensure data quality, security, and compliance with industry standards
9. Collaborate with data scientists and analysts to support their data needs
10. Optimize data architecture for performance and cost-efficiency
11. Troubleshoot and resolve data pipeline and infrastructure issues

Technical Skills:
- AWS Services: Glue, EMR, Lambda, Athena, Redshift, S3, Aurora, RDS, DynamoDB, Step Functions
- Big Data: Hadoop, Spark, Delta Lake
- Programming: Python, PySpark
- Databases: SQL, PostgreSQL, NoSQL
- Data Warehousing and Analytics
- ETL/ELT processes
- Data Lake architectures
- Version control: Git
- Agile methodologies

Posted 2 months ago

Apply

3 - 8 years

6 - 16 Lacs

Bengaluru

Work from Office

Job Description: AWS Data Engineer

We are seeking experienced AWS Data Engineers to design, implement, and maintain robust data pipelines and analytics solutions using AWS services. The ideal candidate will have a strong background in AWS data services, big data technologies, and programming languages.

Experience: 3 to 7 years
Location: Bangalore, Pune, Hyderabad, Coimbatore, Delhi NCR, Mumbai

Key Responsibilities:
1. Design and implement scalable, high-performance data pipelines using AWS services
2. Develop and optimize ETL processes using AWS Glue, EMR, and Lambda
3. Build and maintain data lakes using S3 and Delta Lake
4. Create and manage analytics solutions using Amazon Athena and Redshift
5. Design and implement database solutions using Aurora, RDS, and DynamoDB
6. Develop serverless workflows using AWS Step Functions
7. Write efficient and maintainable code using Python/PySpark and SQL/PostgreSQL
8. Ensure data quality, security, and compliance with industry standards
9. Collaborate with data scientists and analysts to support their data needs
10. Optimize data architecture for performance and cost-efficiency
11. Troubleshoot and resolve data pipeline and infrastructure issues

Technical Skills:
- AWS Services: Glue, EMR, Lambda, Athena, Redshift, S3, Aurora, RDS, DynamoDB, Step Functions
- Big Data: Hadoop, Spark, Delta Lake
- Programming: Python, PySpark
- Databases: SQL, PostgreSQL, NoSQL
- Data Warehousing and Analytics
- ETL/ELT processes
- Data Lake architectures
- Version control: Git
- Agile methodologies

Posted 2 months ago

Apply

6 - 11 years

9 - 19 Lacs

Bengaluru

Work from Office

Worked on implementations including base services, enhancements, and break fixes. Worked on various systems such as Athena, Ariba, RightFax, eBuy, Kinaxis, and VMF. Worked on articulation processes and IDocs. Worked with ServiceNow (SNOW) and BMC Remedy ticketing tools.

Posted 2 months ago

Apply

2 - 6 years

12 - 16 Lacs

Kochi

Work from Office

Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements. Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing, data-driven organization. Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Developed PySpark code for AWS Glue jobs and for EMR.
- Worked on scalable distributed data systems using the Hadoop ecosystem in AWS EMR and the MapR distribution.
- Developed Python and PySpark programs for data analysis.
- Good working experience with Python to develop a custom framework for generating rules (much like a rules engine).
- Developed Hadoop streaming jobs using Python for integrating Python API-supported applications.
- Developed Python code to gather data from HBase and designed the solution to implement it using PySpark.
- Used Apache Spark DataFrames/RDDs to apply business transformations and utilized Hive Context objects to perform read/write operations.
- Rewrote some Hive queries in Spark SQL to reduce the overall batch time.

Preferred technical and professional experience:
- Understanding of DevOps.
- Experience building scalable end-to-end data ingestion and processing solutions.
- Experience with object-oriented and/or functional programming languages, such as Python, Java, and Scala.
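
One of the items above is rewriting Hive queries in Spark SQL to cut batch time. A hedged illustration of that pattern (the table and column names are invented): the same aggregation a Hive script might run can be executed through a SparkSession, letting Spark plan, cache, and reuse the work:

```python
from pyspark.sql import SparkSession

# enableHiveSupport lets Spark read tables already defined in the Hive metastore.
spark = (
    SparkSession.builder.appName("hive_to_spark_sql")
    .enableHiveSupport()
    .getOrCreate()
)

# The same HiveQL aggregation, now executed by Spark's engine.
daily_totals = spark.sql("""
    SELECT order_date, sum(amount) AS total_amount   -- hypothetical columns
    FROM sales.orders                                 -- hypothetical table
    WHERE order_date >= date '2024-01-01'
    GROUP BY order_date
""")

# Persist the result for reuse by later steps in the same batch,
# instead of re-running the scan as separate Hive queries would.
daily_totals.cache()
daily_totals.write.mode("overwrite").parquet("s3://example-lake/marts/daily_totals/")
```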

Posted 2 months ago

Apply

2 - 6 years

12 - 16 Lacs

Kochi

Work from Office

As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support and guidance to project teams on complex coding, issue resolution, and execution. Your primary responsibilities include:
- Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements.
- Strive for continuous improvement by testing the built solution and working under an agile framework.
- Discover and implement the latest technology trends to maximize and build creative solutions.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Design and develop data solutions: design and implement efficient data processing pipelines using AWS services such as AWS Glue, AWS Lambda, Amazon S3, and Amazon Redshift.
- Develop and manage ETL (Extract, Transform, Load) workflows to clean, transform, and load data into structured and unstructured storage systems.
- Build scalable data models and storage solutions in Amazon Redshift, DynamoDB, and other AWS services.
- Data integration: integrate data from multiple sources, including relational databases, third-party APIs, and internal systems, to create a unified data ecosystem.
- Work with data engineers to optimize data workflows and ensure data consistency, reliability, and performance.
- Automation and optimization: automate data pipeline processes to ensure efficiency.

Preferred technical and professional experience:
- Define, drive, and implement an architecture strategy and standards for end-to-end monitoring.
- Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering.
- Good to have: experience with detection and prevention tools for company products, the platform, and customer-facing systems.

Posted 2 months ago

Apply

3 - 5 years

8 - 14 Lacs

Delhi NCR, Mumbai, Bengaluru

Hybrid

Responsibilities:
- Collaborate with stakeholders to understand business requirements and data needs, and translate them into scalable and efficient data engineering solutions using AWS data services.
- Design, develop, and maintain data pipelines using AWS serverless technologies such as Glue, S3, Lambda, DynamoDB, Athena, and Redshift.
- Implement data modeling techniques to optimize data storage and retrieval processes.
- Develop and deploy data processing and transformation frameworks to support both real-time and batch processing requirements.
- Ensure data pipelines are scalable, reliable, and performant enough to handle large data volumes.
- Implement data documentation and observability tools and practices for monitoring.
- Hands-on experience with Spark and Scala, and conversant with SQL (Scala + AWS is mandatory).
- Good knowledge of Hadoop (Oozie).
- Reverse-engineer SQL queries and Scala code to understand functionality.
- Capable of identifying, analysing, and interpreting patterns and trends in complex data sets.
- Strong experience on AWS (EMR, S3).
- Has worked on creating database designs, data models, and techniques for data mining.

Locations: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
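
Serverless pipelines built from the services listed above (S3, Lambda, DynamoDB) often start with an event-driven ingest step. As a hedged sketch (the table and attribute names are placeholders, not details from this posting), a Lambda function can record each file that lands in the lake into a DynamoDB tracking table:

```python
from datetime import datetime, timezone

import boto3

# Hypothetical tracking table; assumes "object_key" is the partition key.
table = boto3.resource("dynamodb").Table("ingested-files")

def lambda_handler(event, context):
    """Record every new S3 object in a DynamoDB table for pipeline tracking."""
    for record in event.get("Records", []):
        s3_info = record["s3"]
        table.put_item(
            Item={
                "object_key": s3_info["object"]["key"],
                "bucket": s3_info["bucket"]["name"],
                "size_bytes": s3_info["object"].get("size", 0),
                "ingested_at": datetime.now(timezone.utc).isoformat(),
                "status": "LANDED",
            }
        )
    return {"processed": len(event.get("Records", []))}
```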

Posted 2 months ago

Apply

2 - 5 years

3 - 7 Lacs

Karnataka

Work from Office

Experience: 4 to 6 years
Location: Any PSL location
Rate: below 14$

JD - DBT / AWS Glue / Python / PySpark
- Hands-on experience in data engineering, with expertise in DBT, AWS Glue, Python, and PySpark.
- Strong knowledge of data engineering concepts, data pipelines, ETL/ELT processes, and cloud data environments (AWS).
- Technology: DBT, AWS Glue, Athena, SQL, Spark, PySpark.
- Good understanding of Spark internals and how it works; good skills in PySpark.
- Good understanding of DBT; in particular, should understand DBT's limitations and when it will end up in model explosion.
- Good hands-on experience in AWS Glue.
- AWS expertise: should know the different services, how to configure them, and have infrastructure-as-code experience.
- Basic understanding of different open data formats: Delta, Iceberg, Hudi.
- Ability to engage in technical conversations and suggest enhancements to the current architecture and design.
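
For the hands-on AWS Glue requirement, jobs are typically authored as PySpark scripts that use Glue's job bootstrap libraries. A hedged skeleton only (the job arguments, catalog database, and paths are placeholders; the awsglue modules are available inside the Glue job runtime rather than from PyPI):

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Job parameters passed in from the Glue console or infrastructure-as-code.
args = getResolvedOptions(sys.argv, ["JOB_NAME", "target_path"])

glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a table registered in the Glue Data Catalog (placeholder names).
orders = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders"
).toDF()

# A simple transformation, then a partitioned Parquet write for Athena to query.
(orders.dropDuplicates(["order_id"])
       .write.mode("overwrite")
       .partitionBy("order_date")
       .parquet(args["target_path"]))

job.commit()
```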

Posted 2 months ago

Apply

1 - 2 years

2 - 5 Lacs

Karnataka

Work from Office

Experience: 4 to 6 years
Location: Any PSL location
Rate: below 14$

JD - DBT / AWS Glue / Python / PySpark
- Hands-on experience in data engineering, with expertise in DBT, AWS Glue, Python, and PySpark.
- Strong knowledge of data engineering concepts, data pipelines, ETL/ELT processes, and cloud data environments (AWS).
- Technology: DBT, AWS Glue, Athena, SQL, Spark, PySpark.
- Good understanding of Spark internals and how it works; good skills in PySpark.
- Good understanding of DBT; in particular, should understand DBT's limitations and when it will end up in model explosion.
- Good hands-on experience in AWS Glue.
- AWS expertise: should know the different services, how to configure them, and have infrastructure-as-code experience.
- Basic understanding of different open data formats: Delta, Iceberg, Hudi.
- Ability to engage in technical conversations and suggest enhancements to the current architecture and design.

Posted 2 months ago

Apply

11 - 15 years

40 - 45 Lacs

Hyderabad

Work from Office

Software Engineering Advisor - Data Governance, Data Model, Data Migration, Automation

Position Overview:
The job profile for this position is Software Engineering Advisor, which is a Band 4 Contributor Career Track role. Excited to grow your career? We value our talented employees, and whenever possible we strive to help one of our associates grow professionally before recruiting new talent to our open positions. If you think the open position you see is right for you, we encourage you to apply! Our people make all the difference in our success.

We are looking for exceptional Data Model, Data Governance, and Data Migration experts, including expertise in automation, in our PBM Plus Technology organization. This role requires ensuring data integrity, efficiency, and compliance across the project, as well as designing and implementing robust data governance frameworks, leading data migration projects, and developing automation scripts to enhance data processing and management. The role involves working with critical data across the customer, provider, claims, and benefits domains to ensure comprehensive data solutions and high-quality deliverables, and deploying on-premises and/or on AWS infrastructure using the technologies listed below. You are expected to work closely with subject matter experts, developers, and business stakeholders to ensure that application solutions meet business and customer requirements.

Responsibilities:
- Data Governance: Design and implement comprehensive data governance frameworks and policies. Ensure adherence to data governance standards and best practices across the organization. Collaborate with data stewards and stakeholders to enforce data policies and procedures.
- Data Modeling: Develop and maintain logical and physical data models for enterprise data warehouses, data lakes, and data marts. Ensure data models are optimized for performance and scalability. Document data models and maintain metadata repositories.
- Data Migration: Lead data migration projects, ensuring data accuracy, consistency, and completeness. Develop and execute data migration strategies and plans. Perform data extraction, transformation, and loading (ETL) using industry-standard tools.
- Automation: Develop automation scripts and tools to streamline data processing and management tasks. Implement automated data quality checks and validations. Continuously improve and optimize data automation processes.
- Collaboration: Work closely with data architects, data analysts, and other IT teams to ensure seamless data integration and consistency. Provide technical guidance and support to junior team members. Support other product delivery partners in the successful build, test, and release of solutions. Work with distributed requirements and technical stakeholders to complete shared design and development. Work with both onsite (Scrum Master, Product, QA, and Developers) and offshore team members to properly define testable scenarios based on requirements/acceptance criteria.
- Be part of a fast-moving team, working with the latest tools and open-source technologies, on a development team using agile methodologies.
- Understand the business and the application architecture end to end.
- Solve problems by crafting software solutions using maintainable and modular code.
- Participate in daily team stand-up meetings where you'll give and receive updates on the current backlog and challenges.
- Participate in code reviews; ensure code quality and deliverables.
- Provide impact analysis for new requirements or changes.
- Responsible for low-level design with the team.

Qualifications - Required Skills:
- Extensive experience in data modeling, governance, and migration; proficient in data management tools (e.g., Erwin).
- Technology stack: Python, PySpark, Lambda, AWS Glue, Redshift, Athena, SQL.
- Proficient in working with the SAM (Serverless Application Model) framework, with a strong command of Lambda functions using Java/Python, and programming and scripting skills (Python, SQL, shell scripting).
- Proficient in internal integration within the AWS ecosystem using Lambda functions, leveraging services such as EventBridge, S3, SQS, SNS, and others.
- Experienced in internal integration within AWS using DynamoDB with Lambda functions, demonstrating the ability to architect and implement robust serverless applications.
- CI/CD experience; must have GitHub experience.
- Recognized internally as the go-to person for the most complex software engineering assignments.

Required Experience & Education:
- 11+ years of experience.
- Experience with vendor management in an onshore/offshore model, including managing SLAs and contracts with third-party vendors.
- Proven experience with architecture, design, and development of large-scale enterprise application solutions.
- College degree (Bachelor's) in a related technical/business area, or equivalent work experience.
- Industry certifications (e.g., CDMP, DGSP, etc.).
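
The automation responsibilities above (automated data quality checks within an AWS serverless stack) can be sketched as a Lambda-style function that runs a validation query through Athena and raises an SNS alert when the check fails. Every name below (database, table, topic ARN) is a placeholder invented for illustration, not something taken from the posting:

```python
import time

import boto3

athena = boto3.client("athena")
sns = boto3.client("sns")

ALERT_TOPIC = "arn:aws:sns:us-east-1:123456789012:data-quality-alerts"  # placeholder

def check_null_member_ids(event=None, context=None):
    """Alert if the (hypothetical) claims table contains rows with a NULL member_id."""
    start = athena.start_query_execution(
        QueryString="SELECT count(*) FROM claims WHERE member_id IS NULL",  # placeholder
        QueryExecutionContext={"Database": "pbm_curated"},                  # placeholder
        ResultConfiguration={"OutputLocation": "s3://example-dq-results/"},
    )
    qid = start["QueryExecutionId"]

    while athena.get_query_execution(QueryExecutionId=qid)[
            "QueryExecution"]["Status"]["State"] in ("QUEUED", "RUNNING"):
        time.sleep(2)

    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    null_count = int(rows[1]["Data"][0]["VarCharValue"])  # row 0 is the header row

    if null_count > 0:
        sns.publish(
            TopicArn=ALERT_TOPIC,
            Subject="Data quality check failed",
            Message=f"{null_count} claims rows have a NULL member_id",
        )
    return {"null_member_ids": null_count}
```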

Posted 2 months ago

Apply

2 - 6 years

12 - 16 Lacs

Pune

Work from Office

As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support and guidance to project teams on complex coding, issue resolution, and execution. Your primary responsibilities include:
- Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements.
- Strive for continuous improvement by testing the built solution and working under an agile framework.
- Discover and implement the latest technology trends to maximize and build creative solutions.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Design and develop data solutions: design and implement efficient data processing pipelines using AWS services such as AWS Glue, AWS Lambda, Amazon S3, and Amazon Redshift.
- Develop and manage ETL (Extract, Transform, Load) workflows to clean, transform, and load data into structured and unstructured storage systems.
- Build scalable data models and storage solutions in Amazon Redshift, DynamoDB, and other AWS services.
- Data integration: integrate data from multiple sources, including relational databases, third-party APIs, and internal systems, to create a unified data ecosystem.
- Work with data engineers to optimize data workflows and ensure data consistency, reliability, and performance.
- Automation and optimization: automate data pipeline processes to ensure efficiency.

Preferred technical and professional experience:
- Define, drive, and implement an architecture strategy and standards for end-to-end monitoring.
- Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering.
- Good to have: experience with detection and prevention tools for company products, the platform, and customer-facing systems.

Posted 2 months ago

Apply

5 - 7 years

0 - 0 Lacs

Bengaluru

Work from Office

We are seeking an experienced Data Engineer with expertise in DBT (Data Build Tool) to join our dynamic and fast-growing team. In this role, you will be responsible for transforming critical data, specifically focusing on the creation and management of silver and gold data tiers within our data pipeline. Working closely with data architects and engineers, you will help design, develop, and optimize data transformation processes, ensuring that data is clean, reliable, and ready for business intelligence and analytics teams.

Key Responsibilities:
- DBT-Based Data Transformations: Lead the design, development, and implementation of data transformations using DBT, with a focus on creating and managing silver and gold data tiers within the pipeline.
- Data Workflow Management: Oversee DBT workflows from data ingestion to transformation and final storage in optimized data models. Ensure seamless integration between DBT models and source systems.
- Integration with Data Adapters: Work with data adapters such as AWS Glue, Amazon Athena, and Amazon Redshift to ensure smooth data flow and transformation across platforms.
- Data Quality & Optimization: Implement best practices to ensure data transformations are efficient, scalable, and maintainable. Optimize data models for query performance and reduced processing time.
- Cross-Functional Collaboration: Collaborate with data analysts, business intelligence teams, and data architects to understand data needs and deliver high-quality datasets for analytics and reporting.
- Documentation & Best Practices: Develop and maintain comprehensive documentation for DBT models, workflows, and configurations. Establish and enforce best practices in data engineering.
- Data Warehousing Concepts: Apply core data warehousing principles, including star schema, dimensional modeling, ETL processes, and data governance, to build efficient data pipelines and structures.

Required Skills & Qualifications:
- DBT Expertise: Strong hands-on experience with DBT for data transformations and managing data models, including advanced DBT concepts like incremental models, snapshots, and macros.
- ETL and Cloud Integration: Proven experience with cloud data platforms, particularly AWS, and tools like AWS Glue, Amazon Athena, and Amazon Redshift for data extraction, transformation, and loading (ETL).
- Data Modeling Knowledge: Solid understanding of data warehousing principles, including dimensional modeling, star schemas, fact tables, and data governance.
- SQL Expertise: Proficient in writing and optimizing complex SQL queries for data manipulation, transformation, and reporting.
- Version Control: Experience with Git or similar version control systems for code management and collaboration.
- Data Orchestration: Familiarity with orchestration tools like Apache Airflow for managing ETL workflows.
- Data Pipeline Monitoring: Experience with monitoring and alerting tools for data pipelines.
- Additional Tools: Familiarity with other data transformation tools or languages such as Apache Spark, Python, or Pandas is a plus.

Required Skills: Data Build Tool (dbt), PySpark
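
Since all new code in this page is kept in Python, one hedged way to illustrate the silver/gold tier workflow above is a small wrapper that drives the dbt CLI in sequence. The project path and tag names are assumptions (this sketch presumes models are tagged "silver" and "gold" in the dbt project), not details from the posting:

```python
import subprocess
import sys

# Hypothetical dbt project location.
PROJECT_DIR = "/opt/analytics/dbt_project"

def run_dbt(*args: str) -> None:
    """Run a dbt CLI command and stop the pipeline if it fails."""
    result = subprocess.run(
        ["dbt", *args, "--project-dir", PROJECT_DIR],
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    if result.returncode != 0:
        print(result.stderr, file=sys.stderr)
        raise SystemExit(f"dbt {' '.join(args)} failed")

if __name__ == "__main__":
    # Build and test the silver tier first, then the gold tier on top of it.
    run_dbt("build", "--select", "tag:silver")
    run_dbt("build", "--select", "tag:gold")
```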

Posted 2 months ago

Apply

3 - 5 years

7 - 12 Lacs

Pune

Work from Office

Project Role: Cloud Migration Engineer
Project Role Description: Provides assessment of existing solutions and infrastructure to migrate to the cloud. Plan, deliver, and implement application and data migration with scalable, high-performance solutions using private and public cloud technologies, driving next-generation business outcomes.
Must-have skills: AWS CloudFormation
Good-to-have skills: AWS Athena, AWS Redshift
Minimum 3 year(s) of experience is required
Educational Qualification: BE

Summary: As a Cloud Migration Engineer, you will be responsible for assessing existing solutions and infrastructure to migrate to the cloud. Your typical day will involve planning, delivering, and implementing application and data migration with scalable, high-performance solutions using private and public cloud technologies, driving next-generation business outcomes.

Roles & Responsibilities:
- Lead the assessment of existing solutions and infrastructure to migrate to the cloud.
- Plan, deliver, and implement application and data migration with scalable, high-performance solutions using private and public cloud technologies.
- Collaborate with cross-functional teams to ensure successful cloud migration.
- Provide technical guidance and support to project teams and stakeholders.
- Stay updated with the latest advancements in cloud technologies and integrate innovative approaches for sustained competitive advantage.

Professional & Technical Skills:
- Must-have: proficiency in AWS CloudFormation.
- Good-to-have: experience with AWS Redshift and AWS Athena.
- Strong understanding of cloud migration and implementation.
- Experience with private and public cloud technologies.
- Solid grasp of infrastructure as code and automation.
- Experience with DevOps practices and tools.

Additional Information:
- The candidate should have a minimum of 3 years of experience in AWS CloudFormation.
- The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering impactful cloud migration solutions.
- This position is based at our Pune office.

Qualification: BE
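
Infrastructure-as-code work in a role like this is normally expressed as CloudFormation templates, and deployments can be scripted as well. A hedged sketch using boto3 (the stack name, template file, and parameter are placeholders) that validates a template and then creates the stack:

```python
import boto3

cfn = boto3.client("cloudformation")

def deploy_stack(stack_name: str, template_path: str) -> str:
    """Validate a CloudFormation template and create a stack from it."""
    with open(template_path) as handle:
        template_body = handle.read()

    # Fails fast with a ValidationError if the template is malformed.
    cfn.validate_template(TemplateBody=template_body)

    response = cfn.create_stack(
        StackName=stack_name,
        TemplateBody=template_body,
        Parameters=[
            {"ParameterKey": "Environment", "ParameterValue": "dev"},  # placeholder
        ],
        # Needed only if the template creates named IAM resources.
        Capabilities=["CAPABILITY_NAMED_IAM"],
    )

    # Block until the stack finishes creating (the waiter raises on failure).
    cfn.get_waiter("stack_create_complete").wait(StackName=stack_name)
    return response["StackId"]

if __name__ == "__main__":
    print(deploy_stack("athena-query-results", "templates/athena.yaml"))  # placeholders
```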

Posted 2 months ago

Apply

6 - 11 years

8 - 14 Lacs

Pune

Work from Office

At Capgemini Invent, we believe difference drives change. As inventive transformation consultants, we blend our strategic, creative, and scientific capabilities, collaborating closely with clients to deliver cutting-edge solutions. Join us to drive transformation tailored to our clients' challenges of today and tomorrow, informed and validated by science and data, superpowered by creativity and design, and all underpinned by technology created with purpose.

About The Role
Data engineers are responsible for building reliable and scalable data infrastructure that enables organizations to derive meaningful insights, make data-driven decisions, and unlock the value of their data assets.

About The Role - Grade Specific
The role supports the team in building and maintaining data infrastructure and systems within an organization.

Skills (competencies): Ab Initio, Agile (Software Development Framework), Apache Hadoop, AWS Airflow, AWS Athena, AWS CodePipeline, AWS EFS, AWS EMR, AWS Redshift, AWS S3, Azure ADLS Gen2, Azure Data Factory, Azure Data Lake Storage, Azure Databricks, Azure Event Hub, Azure Stream Analytics, Azure Synapse, Bitbucket, Change Management, Client Centricity, Collaboration, Continuous Integration and Continuous Delivery (CI/CD), Data Architecture Patterns, Data Format Analysis, Data Governance, Data Modeling, Data Validation, Data Vault Modeling, Database Schema Design, Decision-Making, DevOps, Dimensional Modeling, GCP Bigtable, GCP BigQuery, GCP Cloud Storage, GCP Dataflow, GCP Dataproc, Git, Google Bigtable, Google Dataproc, Greenplum, HQL, IBM DataStage, IBM DB2, Industry Standard Data Modeling (FSLDM), Industry Standard Data Modeling (IBM FSDM), Influencing, Informatica IICS, Inmon methodology, JavaScript, Jenkins, Kimball, Linux - Red Hat, Negotiation, Netezza, NewSQL, Oracle Exadata, Performance Tuning, Perl, Platform Update Management, Project Management, PySpark, Python, R, RDD Optimization, CentOS, SAS, Scala, Spark, Shell Script, Snowflake, Spark Code Optimization, SQL, Stakeholder Management, Sun Solaris, Synapse, Talend, Teradata, Time Management, Ubuntu, Vendor Management

Posted 2 months ago

Apply

Exploring Athena Jobs in India

India's job market for Athena professionals is thriving, with numerous opportunities available for individuals skilled in this area. From entry-level positions to senior roles, companies across various industries are actively seeking talent with expertise in Amazon Athena to drive their businesses forward.

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Mumbai
  5. Chennai

Average Salary Range

The average salary range for Athena professionals in India varies based on experience and expertise. Entry-level positions can expect to earn around INR 4-7 lakhs per annum, while experienced professionals can command salaries ranging from INR 10-20 lakhs per annum.

Career Path

In Athena-focused data roles, a typical career progression may include positions such as Junior Developer, Developer, Senior Developer, and Tech Lead, eventually reaching roles like Architect or Manager. Continuous learning and upskilling are essential to advance in this field.

Related Skills

Apart from proficiency in Athena, professionals in this field are often expected to have skills such as SQL, data analysis, data visualization, AWS, and Python. Strong problem-solving abilities and attention to detail are also highly valued in Athena roles.

Interview Questions

  • What is Amazon Athena and how does it differ from traditional databases? (medium)
  • Can you explain how partitioning works in Athena? (advanced; a worked sketch follows this list)
  • How do you optimize queries in Athena for better performance? (medium)
  • What are the best practices for managing data in Athena? (basic)
  • Have you worked with complex joins in Athena? Can you provide an example? (medium)
  • What is the difference between Amazon Redshift and Amazon Athena? (advanced)
  • How do you handle errors and exceptions in Athena queries? (medium)
  • Have you used User Defined Functions (UDFs) in Athena? If yes, explain a scenario where you implemented them. (advanced)
  • How do you schedule queries in Athena for automated execution? (medium)
  • Can you explain the different data types supported by Athena? (basic)
  • What security measures do you implement to protect sensitive data in Athena? (medium)
  • Have you worked with nested data structures in Athena? If yes, share your experience. (advanced)
  • How do you troubleshoot performance issues in Athena queries? (medium)
  • What is the significance of query caching in Athena and how does it work? (medium)
  • Can you explain the concept of query federation in Athena? (advanced)
  • How do you handle large datasets in Athena efficiently? (medium)
  • Have you integrated Athena with other AWS services? If yes, describe the integration process. (advanced)
  • How do you monitor query performance in Athena? (medium)
  • What are the limitations of Amazon Athena? (basic)
  • Have you worked on cost optimization strategies for Athena queries? If yes, share your approach. (advanced)
  • How do you ensure data security and compliance in Athena? (medium)
  • Can you explain the difference between serverless and provisioned query execution in Athena? (medium)
  • How do you handle complex data transformation tasks in Athena? (medium)
  • Have you implemented data lake architecture using Athena? If yes, describe the process. (advanced)
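
To ground the partitioning and query-execution questions above, here is a hedged, minimal sketch using boto3. The bucket, database, table, and column names are invented for illustration: it creates an external table over partitioned S3 data, registers one partition, and runs a query that filters on the partition column so Athena can prune partitions and scan less data.

```python
import boto3

athena = boto3.client("athena")
RESULTS = "s3://example-athena-results/"          # placeholder results bucket

def run(sql: str) -> str:
    """Submit a statement to Athena; returns the query execution id."""
    return athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": "analytics"},   # placeholder database
        ResultConfiguration={"OutputLocation": RESULTS},
    )["QueryExecutionId"]

# 1. External table over data laid out as .../event_date=YYYY-MM-DD/ in S3.
run("""
    CREATE EXTERNAL TABLE IF NOT EXISTS events (
        user_id string,
        event_type string
    )
    PARTITIONED BY (event_date string)
    STORED AS PARQUET
    LOCATION 's3://example-data-lake/events/'
""")

# 2. Register one partition (MSCK REPAIR TABLE events would discover all of them).
run("ALTER TABLE events ADD IF NOT EXISTS PARTITION (event_date='2024-01-01')")

# 3. Filtering on the partition column lets Athena prune partitions,
#    which reduces the data scanned and therefore the query cost.
query_id = run(
    "SELECT event_type, count(*) FROM events "
    "WHERE event_date = '2024-01-01' GROUP BY event_type"
)
print("submitted:", query_id)
```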

Closing Remark

As you explore opportunities in the Athena job market in India, remember to showcase your expertise, skills, and enthusiasm for the field during interviews. With the right preparation and confidence, you can land your dream job in this dynamic and rewarding industry. Good luck!
