
266 Athena Jobs - Page 9

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

1 - 3 years

3 - 8 Lacs

Pune, Surat

Work from Office


About the Role:
We are seeking a skilled QuickSight Developer to design and implement advanced business intelligence (BI) solutions using Amazon QuickSight. The ideal candidate will have a strong background in data visualization, data modeling, and analytics, with the ability to translate complex data into actionable insights.

Key Responsibilities:
- Develop, design, and maintain interactive dashboards and reports using Amazon QuickSight.
- Collaborate with stakeholders to gather requirements and translate them into actionable BI solutions.
- Optimize and enhance existing QuickSight dashboards for performance and usability.
- Connect and integrate QuickSight with data sources such as Redshift, S3, RDS, and other AWS services.
- Build robust data models to support analytical requirements.
- Ensure data quality, consistency, and accuracy in all reporting and visualization efforts.
- Provide training and support to end users for effective use of QuickSight dashboards and reports.
- Stay current on QuickSight features and industry best practices to implement innovative BI solutions.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Proven experience designing and developing dashboards using Amazon QuickSight.
- Strong understanding of data visualization principles and best practices.
- Hands-on experience with AWS services, especially Redshift, S3, RDS, and Athena.
- Proficiency in SQL for data extraction and transformation.
- Experience with ETL processes and data modeling techniques.
- Familiarity with scripting languages like Python or tools like AWS Glue is a plus.
- Strong analytical and problem-solving skills.
- Excellent communication and collaboration abilities.

Preferred Skills:
- Experience with BI tools such as Tableau, Power BI, or Looker.
- Knowledge of data governance and security best practices in AWS.
- Ability to handle large datasets and optimize queries for performance.
- AWS or data analytics certifications are an added advantage.
What We Offer:
- Opportunity to work on cutting-edge BI solutions.
- Competitive salary and benefits package.
- A collaborative and innovative work environment.
- Professional development and training opportunities.
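The "ensure data quality, consistency, and accuracy" responsibility above can be sketched in plain Python. This is a minimal, hypothetical example: the field names (order_id, amount, region) and rules are invented for illustration, not taken from the listing; a real pipeline would validate its own schema before the data reaches a QuickSight dataset.

```python
# Hypothetical row-level quality check for a dataset feeding a BI dashboard.
# Field names and validation rules are invented for illustration.

def validate_rows(rows):
    """Split rows into clean and rejected, with a reason per rejection."""
    clean, rejected = [], []
    for row in rows:
        if not row.get("order_id"):
            rejected.append((row, "missing order_id"))
        elif not isinstance(row.get("amount"), (int, float)) or row["amount"] < 0:
            rejected.append((row, "invalid amount"))
        else:
            clean.append(row)
    return clean, rejected

rows = [
    {"order_id": "A1", "amount": 120.5, "region": "west"},
    {"order_id": "",   "amount": 10.0,  "region": "east"},
    {"order_id": "A3", "amount": -5,    "region": "east"},
]
clean, rejected = validate_rows(rows)
print(len(clean), len(rejected))  # 1 clean row, 2 rejections
```

Rejected rows would typically be routed to a quarantine table so the dashboard only ever reads validated data.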

Posted 3 months ago

Apply

2 - 3 years

0 - 0 Lacs

Mumbai

Work from Office


Job Title: Product Engineer - Big Data
Location: Mumbai
Experience: 3 - 8 years

Job Summary:
As a Product Engineer - Big Data, you will be responsible for designing, building, and optimizing large-scale data processing pipelines using cutting-edge Big Data technologies. Collaborating with cross-functional teams, including data scientists, analysts, and product managers, you will ensure data is easily accessible, secure, and reliable. Your role will focus on delivering high-quality, scalable solutions for data storage, ingestion, and analysis, while driving continuous improvements throughout the data lifecycle.

Key Responsibilities:
- ETL Pipeline Development & Optimization: Design and implement complex end-to-end ETL pipelines to handle large-scale data ingestion and processing. Use AWS services such as EMR, Glue, S3, MSK (Managed Streaming for Kafka), DMS (Database Migration Service), Athena, and EC2 to streamline data workflows, ensuring high availability and reliability.
- Big Data Processing: Develop and optimize real-time and batch data processing systems using Apache Flink, PySpark, and Apache Kafka, with a focus on fault tolerance, scalability, and performance. Work with Apache Hudi for managing datasets and enabling incremental data processing.
- Data Modeling & Warehousing: Design and implement data warehouse solutions that support both analytical and operational use cases. Model complex datasets into optimized structures for high performance, easy access, and query efficiency for internal stakeholders.
- Cloud Infrastructure Development: Build scalable cloud-based data infrastructure on AWS. Ensure data pipelines are resilient and adaptable to changes in data volume and variety, optimizing costs and efficiency using Managed Apache Airflow for orchestration and EC2 for compute resources.
- Data Analysis & Insights: Collaborate with business teams and data scientists to understand data needs and deliver high-quality datasets. Conduct in-depth analysis to derive insights, identifying key trends, patterns, and anomalies to drive business decisions. Present findings in a clear, actionable format.
- Real-time & Batch Data Integration: Enable seamless integration of real-time streaming and batch data from systems like AWS MSK. Ensure consistency in data ingestion and processing across formats and sources, providing a unified view of the data ecosystem.
- CI/CD & Automation: Use Jenkins to establish and maintain continuous integration and delivery pipelines. Implement automated testing and deployment workflows, ensuring smooth integration of new features and updates into production environments.
- Data Security & Compliance: Collaborate with security teams to ensure data pipelines comply with organizational and regulatory standards such as GDPR, HIPAA, or other relevant frameworks. Implement data governance practices to ensure integrity, security, and traceability throughout the data lifecycle.
- Collaboration & Cross-Functional Work: Partner with engineers, data scientists, product managers, and business stakeholders to understand data requirements and deliver scalable solutions. Participate in agile teams, sprint planning, and architectural discussions.
- Troubleshooting & Performance Tuning: Identify and resolve performance bottlenecks in data pipelines. Ensure optimal performance through proactive monitoring, tuning, and best practices for data ingestion and storage.

Skills & Qualifications:

Must-Have Skills:
- AWS Expertise: Hands-on experience with core AWS Big Data services, including EMR, Managed Apache Airflow, Glue, S3, DMS, MSK, Athena, and EC2. Strong understanding of cloud-native data architecture.
- Big Data Technologies: Proficiency in PySpark and SQL for data transformations and analysis. Experience with large-scale processing frameworks such as Apache Flink and Apache Kafka.
- Data Frameworks: Strong knowledge of Apache Hudi for data lake operations, including CDC (Change Data Capture) and incremental data processing.
- Database Modeling & Data Warehousing: Expertise in designing scalable data models for both OLAP and OLTP systems. In-depth understanding of data warehousing best practices.
- ETL Pipeline Development: Proven experience building robust, scalable ETL pipelines for processing real-time and batch data across platforms.
- Data Analysis & Insights: Strong problem-solving skills and a data-driven approach to decision-making. Ability to conduct complex data analysis to extract actionable business insights.
- CI/CD & Automation: Basic to intermediate knowledge of CI/CD pipelines using Jenkins or similar tools to automate deployment and monitoring of data pipelines.

Required Skills: Big Data, ETL, AWS
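The incremental-processing idea behind tools like Apache Hudi can be sketched without any framework: merge a batch of change records into an existing snapshot, letting the latest version of each key win. This is an illustrative simplification, not Hudi's actual implementation; record fields and timestamps are invented.

```python
# Hedged sketch of a CDC-style upsert merge (latest timestamp wins).
# In Apache Hudi this is handled by the engine; this pure-Python version
# only illustrates the merge semantics, with invented field names.

def apply_changes(snapshot, changes, key="id", ts="updated_at"):
    """Upsert change records into snapshot; the newest version of a key wins."""
    merged = {row[key]: row for row in snapshot}
    for change in changes:
        current = merged.get(change[key])
        if current is None or change[ts] >= current[ts]:
            merged[change[key]] = change
    return sorted(merged.values(), key=lambda r: r[key])

snapshot = [{"id": 1, "val": "a", "updated_at": 100},
            {"id": 2, "val": "b", "updated_at": 100}]
changes = [{"id": 2, "val": "b2", "updated_at": 200},   # update
           {"id": 3, "val": "c",  "updated_at": 150}]   # insert
print(apply_changes(snapshot, changes))
```

The same latest-wins rule is what makes incremental batches safe to replay: applying the same change set twice yields the same snapshot.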

Posted 3 months ago

Apply

5 - 10 years

8 - 18 Lacs

Chennai, Bengaluru, Hyderabad

Work from Office


About Client: Hiring for one of our multinational corporations!

Job Title: Data Engineer
Relevant Experience: 5 to 8 years

Must-Have Skills:
- Python (advanced proficiency)
- PySpark
- AWS (Amazon Web Services), including services like S3, Lambda, EC2, and CloudWatch
- SQL (advanced skills in writing complex queries)
- AWS Glue (experience in ETL processing and managing data pipelines)

Good-to-Have Skills:
- Familiarity with Big Data frameworks (e.g., Hadoop, Spark)
- Experience with data warehousing
- Knowledge of containerization and orchestration (Docker, Kubernetes)
- Experience with CI/CD pipelines
- Understanding of machine learning concepts and tools

Roles and Responsibilities:
- Design, develop, and implement data pipelines for processing and transforming large datasets.
- Work closely with data scientists, analysts, and other stakeholders to ensure data availability, integrity, and performance.
- Manage and optimize data workflows on AWS using services like Glue, S3, and Lambda.
- Collaborate in the design of cloud-based architecture and ensure scalability and efficiency of data processing systems.
- Troubleshoot and resolve data-related issues, ensuring high-quality data delivery.
- Maintain data security and governance best practices.

Location: Bangalore, Hyderabad, Chennai
Notice Period: Immediate to 15 days preferred

Nushiba Taniya M, HR Analyst
Black and White Business Solutions Pvt Ltd, Bangalore, Karnataka, India
08067432408 | Nushiba@blackwhite.in | www.blackwhite.in
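Pipelines on S3 of the kind described above usually lay data out in Hive-style partition prefixes so that Glue and Athena can prune scans. A small sketch of that layout convention (bucket and table names are invented for illustration):

```python
# Hypothetical helper: build the Hive-style S3 partition prefix that a
# Glue/Athena table expects. Bucket and table names are invented.

from datetime import date

def partition_prefix(bucket, table, day):
    """Return the s3:// prefix for one daily partition (zero-padded)."""
    return (f"s3://{bucket}/{table}/"
            f"year={day.year}/month={day.month:02d}/day={day.day:02d}/")

print(partition_prefix("analytics-lake", "orders", date(2024, 1, 5)))
# s3://analytics-lake/orders/year=2024/month=01/day=05/
```

Queries filtered on year/month/day then read only the matching prefixes instead of the whole table, which is where most of the cost savings come from.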

Posted 3 months ago

Apply

4 - 9 years

20 - 35 Lacs

Pune

Hybrid


Job Title: AWS Data Engineer

About Us:
Capco, a Wipro company, is a global technology and management consulting firm. Capco was awarded Consultancy of the Year at the British Bank Awards and ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With a presence in 32 cities across the globe, we support 100+ clients in the banking, financial services, and energy sectors, and are recognized for our deep transformation execution and delivery.

WHY JOIN CAPCO? You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers, and other key players in the industry: projects that will transform the financial services industry.

MAKE AN IMPACT: Innovative thinking, delivery excellence, and thought leadership to help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services.

#BEYOURSELFATWORK: Capco has a tolerant, open culture that values diversity, inclusivity, and creativity.

CAREER ADVANCEMENT: With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands.

DIVERSITY & INCLUSION: We believe that diversity of people and perspective gives us a competitive advantage.

Skill Set:
- Design, develop, test, deploy, and maintain large-scale data pipelines using AWS Glue.
- Good understanding of Spark/PySpark.
- Collaborate with cross-functional teams to gather requirements and deliver high-quality ETL solutions.
- Understanding and technical knowledge of AWS services like EC2 and S3, with hands-on use in previously executed projects.
- Strong AWS development experience for data ETL/pipeline/integration/automation work.
- Deep understanding of the BI & analytics solution development lifecycle.
- Good understanding of AWS services such as Redshift, Glue, Lambda, Athena, S3, and EC2.
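The transform step of a Glue-style ETL pipeline can be sketched framework-free. This is a hedged illustration only: a real AWS Glue job would express the same logic with DynamicFrames or Spark DataFrames, and the column names here (customer_id, country) are invented.

```python
# Minimal, framework-free sketch of an ETL transform step: drop records
# that cannot be joined downstream and normalise a column. Column names
# are invented; a real Glue job would use DynamicFrames/DataFrames.

def transform(records):
    """Drop incomplete records and normalise country values."""
    out = []
    for r in records:
        if r.get("customer_id") is None:
            continue  # unjoinable downstream, so drop (or quarantine) it
        out.append({
            "customer_id": r["customer_id"],
            "country": str(r.get("country", "unknown")).strip().lower(),
        })
    return out

raw = [{"customer_id": 7, "country": " IN "},
       {"customer_id": None, "country": "US"}]
print(transform(raw))  # [{'customer_id': 7, 'country': 'in'}]
```

Keeping the transform a pure function of its input, as here, is what makes such steps easy to unit-test before they are wired into Glue.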

Posted 3 months ago

Apply

12 - 18 years

25 - 40 Lacs

Pune

Hybrid


Job Title: Manager - AWS Data Engineer

About Us:
Capco, a Wipro company, is a global technology and management consulting firm. Capco was awarded Consultancy of the Year at the British Bank Awards and ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With a presence in 32 cities across the globe, we support 100+ clients in the banking, financial services, and energy sectors, and are recognized for our deep transformation execution and delivery.

WHY JOIN CAPCO? You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers, and other key players in the industry: projects that will transform the financial services industry.

MAKE AN IMPACT: Innovative thinking, delivery excellence, and thought leadership to help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services.

#BEYOURSELFATWORK: Capco has a tolerant, open culture that values diversity, inclusivity, and creativity.

CAREER ADVANCEMENT: With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands.

DIVERSITY & INCLUSION: We believe that diversity of people and perspective gives us a competitive advantage.

Skill Set:
- Design, develop, test, deploy, and maintain large-scale data pipelines using AWS Glue.
- Good understanding of Spark/PySpark.
- Collaborate with cross-functional teams to gather requirements and deliver high-quality ETL solutions.
- Understanding and technical knowledge of AWS services like EC2 and S3, with hands-on use in previously executed projects.
- Strong AWS development experience for data ETL/pipeline/integration/automation work.
- Deep understanding of the BI & analytics solution development lifecycle.
- Good understanding of AWS services such as Redshift, Glue, Lambda, Athena, S3, and EC2.

Posted 3 months ago

Apply

2 - 6 years

4 - 8 Lacs

Pune

Work from Office


As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support and guidance to project teams on complex coding, issue resolution, and execution. Your primary responsibilities include:
- Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements.
- Strive for continuous improvement by testing the built solution and working within an agile framework.
- Discover and implement the latest technology trends to build creative solutions.

Required education: Bachelor's degree
Preferred education: Master's degree

Required technical and professional expertise:
- Design and develop data solutions: design and implement efficient data processing pipelines using AWS services like AWS Glue, AWS Lambda, Amazon S3, and Amazon Redshift.
- Develop and manage ETL (Extract, Transform, Load) workflows to clean, transform, and load data into structured and unstructured storage systems.
- Build scalable data models and storage solutions in Amazon Redshift, DynamoDB, and other AWS services.
- Data integration: integrate data from multiple sources, including relational databases, third-party APIs, and internal systems, to create a unified data ecosystem. Work with data engineers to optimize data workflows and ensure data consistency, reliability, and performance.
- Automation and optimization: automate data pipeline processes to ensure efficiency.

Preferred technical and professional experience:
- Define, drive, and implement an architecture strategy and standards for end-to-end monitoring.
- Partner with other technology teams, including application development, enterprise architecture, testing services, and network engineering.
- Good to have: experience with detection and prevention tools for company products, platform, and customer-facing systems.
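The extract, transform, load workflow named above can be sketched as a composition of three small functions. Everything here is invented for illustration (stage names, the sample rows, and the in-memory "warehouse"); it only shows the shape of the flow, with bad rows dropped at the transform stage.

```python
# Hedged sketch of an extract -> transform -> load flow. Data and stage
# names are invented; a real pipeline would read from a source system and
# write to Redshift/S3 rather than a Python list.

def extract():
    return [{"sku": "X1", "qty": "3"}, {"sku": "X2", "qty": "bad"}]

def transform(rows):
    out = []
    for r in rows:
        try:
            out.append({"sku": r["sku"], "qty": int(r["qty"])})
        except ValueError:
            pass  # in practice, route unparseable rows to a dead-letter store
    return out

def load(rows, store):
    store.extend(rows)
    return len(rows)  # rows actually loaded

warehouse = []
loaded = load(transform(extract()), warehouse)
print(loaded, warehouse)  # 1 row loaded; the unparseable row was dropped
```

Because each stage is a plain function, the workflow can be scheduled by any orchestrator without changing the stage code.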

Posted 3 months ago

Apply

2 - 6 years

7 - 11 Lacs

Kolkata

Work from Office


Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must-have skills: AWS Glue

Posted 3 months ago

Apply

8 - 13 years

15 - 25 Lacs

Chennai, Pune, Bengaluru

Work from Office


#Hiring #AWSClouddeveloper #DataEngineering
Hello connections! Hope you are doing well!

We are hiring for a Level 5 MNC client.
Role: AWS Cloud Developer (C2H contract)
Experience: 8+ years
Notice Period: Immediate to 20 days
Location: Pune/Trivandrum (WFO)
Mode of Hire: C2H (contract)
Interested candidates can share an updated CV (or any references) to mansoor@burgeonits.com

Job Description:

Key Responsibilities:
- Design, develop, and implement data pipelines and #ETL processes using AWS services.
- Collaborate directly with our #FinOps teams and market units across the organization to understand data requirements and deliver solutions that meet business needs.
- Optimize and manage data storage using AWS services such as S3, RDS, and NoSQL databases.
- Ensure data quality, integrity, and security across all data engineering projects.
- Monitor and troubleshoot data workflows to ensure high availability and performance.
- Design and build advanced, interactive dashboards using tools such as AWS QuickSight and Power BI.
- Create and oversee a cloud billing dashboard to track, manage, and optimize cloud costs and Reserved Instance purchases.
- Build a dashboard that provides secure self-service capabilities to all teams on cloud spend.
- Stay updated with the latest AWS technologies and best practices to continuously improve our data infrastructure.
- Solve technical problems and create viable tooling.
- Design and implement shared services in cloud infrastructure, using appropriate infrastructure automation tools to provision cloud components.
- Attend key Agile events and complete assigned work packages and tasks.
- Ensure smooth handover of project deliverables to internal and external customers.
- Actively contribute to internal projects such as tooling and documentation.

Skills:
- Proven experience as a Cloud Developer or #DataEngineer, with a focus on AWS.
- Strong proficiency in AWS services: #EKS, #EC2, #S3, #Lambda, #Glue.
- Solid understanding of data modeling, #ETL processes, and data warehousing concepts.
- Strong proficiency in #Python.
- Familiarity with infrastructure-as-code tools like #CloudFormation or #Terraform.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills.
- Experience building and consuming #REST APIs.
- Experience building and running #Kubernetes background tasks with batch jobs.
- Hands-on experience with #GitHub Actions.

Preferred Qualifications:
- AWS Certified DevOps Engineer - Professional and/or AWS Certified Data Engineer - Associate.
- Knowledge of DevOps practices and #CICD pipelines.
- Knowledge of the #Flask framework for building dynamic websites, APIs, or microservices is a plus.
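The cloud-billing dashboard mentioned above ultimately rests on a simple aggregation: rolling cost line items up by service. A hedged sketch with invented records (real input would come from AWS Cost Explorer or Cost and Usage Report exports, not hard-coded dicts):

```python
# Hypothetical FinOps aggregation: total daily cost line items by service,
# sorted by spend. The line items are invented for illustration.

from collections import defaultdict

def cost_by_service(line_items):
    """Return {service: total_cost_usd}, largest spend first."""
    totals = defaultdict(float)
    for item in line_items:
        totals[item["service"]] += item["cost_usd"]
    return dict(sorted(totals.items(), key=lambda kv: -kv[1]))

items = [{"service": "EC2", "cost_usd": 120.0},
         {"service": "S3",  "cost_usd": 30.5},
         {"service": "EC2", "cost_usd": 80.0}]
print(cost_by_service(items))  # {'EC2': 200.0, 'S3': 30.5}
```

A dashboard layer (QuickSight, Power BI) would then chart these totals over time and break them down by team or account tag.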

Posted 3 months ago

Apply

2 - 6 years

4 - 8 Lacs

Pune

Work from Office


As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support and guidance to project teams on complex coding, issue resolution, and execution. Your primary responsibilities include:
- Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements.
- Strive for continuous improvement by testing the built solution and working within an agile framework.
- Discover and implement the latest technology trends to build creative solutions.

Required education: Bachelor's degree
Preferred education: Master's degree

Required technical and professional expertise:
- Design and develop data solutions: design and implement efficient data processing pipelines using AWS services like AWS Glue, AWS Lambda, Amazon S3, and Amazon Redshift.
- Develop and manage ETL (Extract, Transform, Load) workflows to clean, transform, and load data into structured and unstructured storage systems.
- Build scalable data models and storage solutions in Amazon Redshift, DynamoDB, and other AWS services.
- Data integration: integrate data from multiple sources, including relational databases, third-party APIs, and internal systems, to create a unified data ecosystem. Work with data engineers to optimize data workflows and ensure data consistency, reliability, and performance.
- Automation and optimization: automate data pipeline processes to ensure efficiency.

Preferred technical and professional experience:
- Define, drive, and implement an architecture strategy and standards for end-to-end monitoring.
- Partner with other technology teams, including application development, enterprise architecture, testing services, and network engineering.
- Good to have: experience with detection and prevention tools for company products, platform, and customer-facing systems.

Posted 3 months ago

Apply

7 - 8 years

18 - 33 Lacs

Pune, Bengaluru, Hyderabad

Work from Office


About KPI Partners:
KPI Partners is a five-time Gartner-recognized data, analytics, and AI consulting company, and one of the top data, analytics, and AI partners for Microsoft, AWS, Google, Snowflake, and Databricks. Founded in 2006, KPI has over 500 consultants and has successfully delivered over 1,000 projects to our clients in the US. We are looking for skilled data engineers who want to work with the best team in data engineering!

About the Role:
We are seeking highly skilled and experienced data engineers to join our dynamic team at KPI's Hyderabad, Pune, and Bangalore offices. You will work on challenging, multi-year data transformation projects for our clients. This is an excellent opportunity for a talented data engineer to play a key role in building innovative data solutions. If you are passionate about working with large-scale data systems and enjoy solving complex engineering problems, this role is for you.

Key Responsibilities:
- Design and build data engineering pipelines using SQL and PySpark.
- Collaboration: work closely with cross-functional teams to understand business requirements and translate them into robust data solutions.
- Data warehousing: design and implement data warehousing solutions, ensuring scalability, performance, and reliability.
- Continuous learning: stay up to date with modern technologies and trends in data engineering and apply them.
- Experience in agile delivery methodology in a leading role as part of a wider team.
- Strong team collaboration and experience working with KPI and client team members in the US, India, and other global locations.
- Mentorship: provide guidance and mentorship to junior data engineers, ensuring best practices in coding, design, and development.

Must-Have Skills & Qualifications:
- 3+ years of PySpark and SQL experience building data engineering pipelines.
- Proven expertise in SQL for querying, manipulating, and analyzing large datasets.
- AWS, Databricks.

Good-to-Have Skills:
- Databricks certification is a plus.
- Azure certification is a plus.
- Snowflake certification is a plus.

Education: BA/BS in Computer Science, Math, Physics, or another technical field is a plus.

Apply now! If you're ready to take on a key role in data engineering and work on transformative projects with a talented team, we encourage you to apply today.

Posted 3 months ago

Apply

5 - 8 years

7 - 10 Lacs

Bengaluru

Work from Office


JPC REQ - Sr. AWS Data Engineer - Bangalore, Chennai, Hyderabad, Pune - 5+ years - 40 LPA - Immediate to 60 days notice - Arun Dharne - Cognizant - Sharvin - Permanent

Posted 3 months ago

Apply

6 - 10 years

10 - 14 Lacs

Pune

Work from Office


As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support and guidance to project teams on complex coding, issue resolution, and execution. Your primary responsibilities include:
- Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements.
- Strive for continuous improvement by testing the built solution and working within an agile framework.
- Discover and implement the latest technology trends to build creative solutions.

Required education: Bachelor's degree
Preferred education: Master's degree

Required technical and professional expertise:
- Design and develop data solutions: design and implement efficient data processing pipelines using AWS services like AWS Glue, AWS Lambda, Amazon S3, and Amazon Redshift.
- Develop and manage ETL (Extract, Transform, Load) workflows to clean, transform, and load data into structured and unstructured storage systems.
- Build scalable data models and storage solutions in Amazon Redshift, DynamoDB, and other AWS services.
- Data integration: integrate data from multiple sources, including relational databases, third-party APIs, and internal systems, to create a unified data ecosystem. Work with data engineers to optimize data workflows and ensure data consistency, reliability, and performance.
- Automation and optimization: automate data pipeline processes to ensure efficiency.

Preferred technical and professional experience:
- Define, drive, and implement an architecture strategy and standards for end-to-end monitoring.
- Partner with other technology teams, including application development, enterprise architecture, testing services, and network engineering.
- Good to have: experience with detection and prevention tools for company products, platform, and customer-facing systems.

Posted 3 months ago

Apply

4 - 9 years

6 - 12 Lacs

Hyderabad

Work from Office


About the Role:

Ab Initio skills: graph development, Ab Initio standard environment parameters, GDE (PDL, MFS concepts), EME basics, SDLC, data analysis.
Database: proficient in SQL; expert in DB load/unload utilities; relevant experience in Oracle, DB2, Teradata (preferred).
UNIX: shell scripting (must); Unix utilities like sed, awk, perl, python; scheduling knowledge (Control-M, Autosys, Maestro, TWS, ESP).
Project profile: at least 2-3 source systems, multiple targets, simple business transformations with daily and monthly schedules; expected to produce LLDs, work with testers, work with the PMO, develop graphs and schedules, and provide third-level support.

- Hands-on development experience with various Ab Initio components such as Rollup, Scan, Join, Partition by Key, Partition by Round-Robin, Gather, Merge, Interleave, Lookup, etc.
- Experience in finance, ideally with capital markets products.
- Experience in development and support of complex frameworks to handle multiple data ingestion patterns (e.g., messaging, files, hierarchical polymorphic XML structures), conformance of data to a canonical model, and curation and distribution of data.
- QA resource experience.
- Data modeling experience creating CDMs, LDMs, and PDMs using tools like Erwin, PowerDesigner, or MagicDraw.
- Detailed knowledge of capital markets, including derivatives products (IRS, CDS, options), structured products, and fixed income products.
- Knowledge of Jenkins and CI/CD concepts.
- Knowledge of scheduling tools like Autosys and Control Center.
- Demonstrated understanding of how Ab Initio applications and systems interact with the underlying hardware ecosystem.
- Experience working in an agile project development lifecycle.
- Strong in-depth knowledge of databases and database concepts; DB2 knowledge is a plus.

Primary Skills: Ab Initio graphs
Secondary Skills: SQL

Skills (competencies): Ab Initio, Agile (Software Development Framework), Apache Hadoop, AWS Airflow, AWS Athena, AWS CodePipeline, AWS EFS, AWS EMR, AWS Redshift, AWS S3, Azure ADLS Gen2, Azure Data Factory, Azure Data Lake Storage, Azure Databricks, Azure Event Hub, Azure Stream Analytics, Azure Synapse, Bitbucket, Change Management, Client Centricity, Collaboration, Continuous Integration and Continuous Delivery (CI/CD), Data Architecture Patterns, Data Format Analysis, Data Governance, Data Modeling, Data Validation, Data Vault Modeling, Database Schema Design, Decision-Making, DevOps, Dimensional Modeling, GCP BigTable, GCP BigQuery, GCP Cloud Storage, GCP Dataflow, GCP Dataproc, Git, Greenplum, HQL, IBM DataStage, IBM DB2, Industry Standard Data Modeling (FSLDM), Industry Standard Data Modeling (IBM FSDM), Influencing, Informatica IICS, Inmon Methodology, JavaScript, Jenkins, Kimball, Linux - Red Hat, Negotiation, Netezza, NewSQL, Oracle Exadata, Performance Tuning, Perl, Platform Update Management, Project Management, PySpark, Python, R, RDD Optimization, CentOS, SAS, Scala, Spark Shell Script, Snowflake, Spark, Spark Code Optimization, SQL, Stakeholder Management, Sun Solaris, Synapse, Talend, Teradata, Time Management, Ubuntu, Vendor Management

Posted 3 months ago

Apply

4 - 9 years

25 - 30 Lacs

Chennai, Delhi NCR, Bengaluru

Work from Office


We're seeking an experienced Data Engineer to join our team on a contract basis. The ideal candidate will design, develop, and maintain data pipelines and architectures using Fivetran, Airflow, AWS, and other technologies.

Responsibilities:
- Design, build, and maintain data pipelines using Fivetran, Airflow, and AWS Glue.
- Develop and optimize data warehousing solutions using Amazon Redshift.
- Implement data transformation and loading processes using AWS Athena and SQL.
- Ensure data quality, security, and compliance.
- Collaborate with cross-functional teams to integrate data solutions.
- Troubleshoot and optimize data pipeline performance.
- Implement data governance and monitoring using AWS services.

Requirements:
- 7-10 years of experience in data engineering.
- Strong expertise in: Fivetran, Airflow, DB2, AWS (Glue, Athena, Redshift, Lambda), Python, SQL.
- Experience with data warehousing, ETL, and data pipeline design.
- Strong problem-solving and analytical skills.
- Excellent communication and collaboration skills.

Location: Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad
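What an orchestrator such as Airflow does for the pipelines described above is, at its core, dependency ordering over tasks. A hedged sketch using the standard library (task names are invented; Airflow itself declares the same graph with operators and `>>` dependencies):

```python
# Hypothetical pipeline dependency graph, ordered topologically. Task names
# are invented; in Airflow these would be operators wired with >>.

from graphlib import TopologicalSorter

# each key lists the tasks it depends on
deps = {
    "load_redshift": {"transform"},
    "transform": {"extract_fivetran"},
    "data_quality": {"transform"},
}
order = list(TopologicalSorter(deps).static_order())
print(order)  # extract_fivetran first; load/data_quality only after transform
```

The same structure is what lets an orchestrator run independent tasks (here, load_redshift and data_quality) in parallel once their shared dependency has finished.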

Posted 3 months ago

Apply

4 - 8 years

5 - 15 Lacs

Chennai, Bengaluru, Hyderabad

Work from Office


Experience:
- Data integration, pipeline development, and data warehousing, with a strong focus on AWS Databricks.
- Proficiency in Databricks platform management and optimization.
- Strong experience in AWS Cloud, particularly in data engineering and administration, with expertise in Apache Spark, S3, Athena, Glue, Kafka, Lambda, Redshift, and RDS.
- Proven experience in data engineering performance tuning and analytical understanding in business and program contexts.
- Solid experience in Python development, specifically PySpark within the AWS Cloud environment, including experience with Terraform.
- Knowledge of databases (Oracle, SQL Server, PostgreSQL, Redshift, MySQL, or similar) and advanced database querying.
- Experience with source control systems (Git, Bitbucket) and Jenkins for build and continuous integration.
- Understanding of continuous deployment (CI/CD) processes.
- Experience with Airflow and additional Apache Spark knowledge is advantageous.
- Exposure to ETL tools, including Informatica.

Job Responsibilities:
- Administer, manage, and optimize the Databricks environment to ensure efficient data processing and pipeline development.
- Perform advanced troubleshooting, query optimization, and performance tuning in a Databricks environment.
- Collaborate with development teams to guide, optimize, and refine data solutions within the Databricks ecosystem.
- Ensure high performance in data handling and processing, including the optimization of Databricks jobs and clusters.
- Engage with and support business teams to deliver data and analytics projects effectively.
- Manage source control systems and utilize Jenkins for continuous integration.
- Actively participate in the entire software development lifecycle, focusing on data integrity and efficiency within Databricks.

Location: PAN India

Posted 3 months ago

Apply

7 - 10 years

0 - 3 Lacs

Mumbai

Hybrid


Role & responsibilities:
1. Senior Data Analyst with 7+ years of experience.
2. AWS data stack design and development experience.
3. Experience in implementing data pipelines and products.

Location: Bangalore, Pune, Chennai

Posted 3 months ago

Apply

7 - 10 years

0 - 3 Lacs

Bangalore Rural

Hybrid


Role & responsibilities:
1. Senior Data Analyst with 7+ years of experience.
2. AWS data stack design and development experience.
3. Experience in implementing data pipelines and products.

Location: Bangalore, Pune, Chennai

Posted 3 months ago

Apply

7 - 10 years

0 - 3 Lacs

Pune

Hybrid


Role & responsibilities:
1. Senior Data Analyst with 7+ years of experience.
2. AWS data stack design and development experience.
3. Experience in implementing data pipelines and products.

Location: Bangalore, Pune, Chennai

Posted 3 months ago

Apply

7 - 10 years

0 - 3 Lacs

Bengaluru

Hybrid


Role & responsibilities: 1. Senior Data Analyst with 7+ years of experience. 2. AWS data stack design and development experience. 3. Experience implementing data pipelines and products. Location: Bangalore, Pune, Chennai

Posted 3 months ago

Apply

7 - 10 years

0 - 3 Lacs

Hyderabad

Hybrid


Role & responsibilities: 1. Senior Data Analyst with 7+ years of experience. 2. AWS data stack design and development experience. 3. Experience implementing data pipelines and products. Location: Bangalore, Pune, Chennai

Posted 3 months ago

Apply

6 - 11 years

10 - 12 Lacs

Pune

Work from Office


We are looking for highly skilled Data Engineers to join our team for a long-term offshore position. The ideal candidates will have 5+ years of experience in data engineering, with a strong focus on Python and programming. The role requires proficiency in leveraging AWS services to build efficient, cost-effective datasets that support business reporting and AI/ML exploration. Candidates must demonstrate the ability to functionally understand client requirements and deliver optimized datasets for multiple downstream applications. The selected individuals will work under the guidance of an onsite lead and closely with client stakeholders to meet business objectives.

Key Responsibilities

Cloud Infrastructure:
• Design and implement scalable, cost-effective data pipelines on the AWS platform using services like S3, Athena, Glue, RDS, etc.
• Manage and optimize data storage strategies for efficient retrieval and integration with other applications.
• Support the ingestion and transformation of large datasets for reporting and analytics.

Tooling and Automation:
• Develop and maintain automation scripts using Python to streamline data processing workflows.
• Integrate tools and frameworks like PySpark to optimize performance and resource utilization.
• Implement monitoring and error-handling mechanisms to ensure reliability and scalability.

Collaboration and Communication:
• Work closely with the onsite lead and client teams to gather and understand functional requirements.
• Collaborate with business stakeholders and the Data Science team to provide datasets suitable for reporting and AI/ML exploration.
• Document processes, provide regular updates, and ensure transparency in deliverables.

Data Analysis and Reporting:
• Optimize AWS service utilization to maintain cost-efficiency while meeting performance requirements.
• Provide insights on data usage trends and support the development of reporting dashboards for cloud costs.

Security and Compliance:
• Ensure secure handling of sensitive data with encryption (e.g., AES-256, TLS) and role-based access control using AWS IAM.
• Maintain compliance with organizational and industry regulations.

Required Skills:
• 5+ years of experience in data engineering with a strong emphasis on AWS platforms.
• Hands-on expertise with AWS services such as S3, Glue, Athena, RDS, etc.
• Proficiency in Python for building data pipelines that ingest data and integrate it across applications.
• Demonstrated ability to design and develop scalable data pipelines and workflows.
• Strong problem-solving skills and the ability to troubleshoot complex data issues.

Preferred Skills:
• Experience with Big Data technologies, including Spark, Kafka, and Scala, for distributed data processing.
• Hands-on expertise with AWS Big Data services such as EMR, DynamoDB, Athena, Glue, and MSK (Managed Streaming for Apache Kafka).
• Familiarity with on-premises Big Data platforms and tools for data processing and streaming.
• Proficiency in scheduling data workflows using Apache Airflow or similar orchestration tools such as One Automation, Control-M, etc.
• Strong understanding of DevOps practices, including CI/CD pipelines and automation tools.
• Prior experience in the telecommunications domain, with a focus on large-scale data systems and workflows.
• AWS certifications (e.g., Solutions Architect, Data Analytics Specialty) are a plus.
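The cost-efficiency responsibilities in this posting usually reduce to one lever: minimizing the bytes a query scans, since Athena bills per data scanned. A back-of-the-envelope helper, assuming the commonly published US$5-per-terabyte list price (rates vary by region and over time; the function name is illustrative, not an AWS API):

```python
def athena_query_cost_usd(bytes_scanned: int, price_per_tb: float = 5.0) -> float:
    """Estimate an Athena query's cost from bytes scanned.

    Assumes the widely cited $5/TB list price and a decimal terabyte;
    check current regional pricing before relying on the figure.
    """
    TB = 10 ** 12  # decimal terabyte, an assumption for this sketch
    return round(bytes_scanned / TB * price_per_tb, 6)

# Scanning 200 GB at $5/TB works out to roughly $1.00
cost = athena_query_cost_usd(200 * 10 ** 9)
```

This is why columnar formats (Parquet/ORC) and partition filters matter: they shrink `bytes_scanned` directly, which shrinks the bill proportionally.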

Posted 3 months ago

Apply

1 - 6 years

3 - 8 Lacs

Hyderabad

Work from Office


Job Area: Engineering Group, Engineering Group > Software Engineering

General Summary: We are looking for a talented, motivated leader with experience in building scalable cloud services, infrastructure, and processes. As part of the IoT (Internet of Things) team, you will be working on the next generation of IoT products. As a Business Intelligence Engineer (BIE) in this role, the ideal candidate will solve unique and complex problems at a rapid pace, utilizing the latest technologies to create solutions that are highly scalable. You will have deep expertise in gathering requirements and insights, mining large and diverse data sets, data visualization, writing complex SQL queries, building rapid prototypes using Python/R, and generating insights that enable senior leaders to make critical business decisions.

Key job responsibilities: You will utilize your deep expertise in business analysis, metrics, reporting, and analytic tools/languages like SQL, Excel, and others, to translate data into meaningful insights through collaboration with scientists, software engineers, data engineers, and business analysts. You will have end-to-end ownership of operational, financial, and technical aspects of the insights you are building for the business, and will play an integral role in strategic decision-making.
• Conduct deep-dive analyses of business problems and formulate conclusions and recommendations to be presented to senior leadership.
• Produce recommendations and insights that will help shape effective metric development and reporting for key stakeholders.
• Simplify and automate reporting, audits, and other data-driven activities.
• Partner with engineering teams to enhance data infrastructure, data availability, and broad access to customer insights.
• Develop and drive best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation.
• Learn new technology and techniques to meaningfully support product and process innovation.

BASIC QUALIFICATIONS
• 1+ years of experience using SQL to query data from databases, data warehouses, or cloud data sources (e.g., Redshift, MySQL, PostgreSQL, MS SQL Server, BigQuery).
• Experience with data visualization using Tableau, Power BI, QuickSight, or similar tools.
• Bachelor's degree in Statistics, Economics, Math, Finance, Engineering, Computer Science, Information Systems, or a related quantitative field.
• Ability to operate successfully and independently in a fast-paced environment.
• Comfort with ambiguity and eagerness to learn new skills.
• Knowledge of cloud services (AWS, GCP, and/or Azure) is a must.

PREFERRED QUALIFICATIONS
• Experience with AWS solutions such as EC2, Athena, Glue, S3, and Redshift.
• Experience in data mining, ETL, etc., and using databases in a business environment with large-scale, complex datasets.
• Experience creating and building predictive/optimization tools that benefit the business and improve customer experience.
• Experience articulating business questions and using quantitative techniques to drive insights for the business.
• Experience dealing with technical and non-technical senior-level managers.

Minimum Qualifications: Bachelor's degree in Engineering, Information Systems, Computer Science, or related field.

Applicants: Qualcomm is an equal opportunity employer.
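Much of the BIE work above is translating a business question into an aggregate SQL query. A self-contained illustration using Python's built-in sqlite3 as a stand-in for Redshift or Athena (the table, columns, and data are invented for the example):

```python
import sqlite3

# In-memory database with a toy orders table standing in for a warehouse
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("south", 120.0), ("south", 80.0), ("north", 50.0)],
)

# Typical BI aggregation: revenue per region, highest first
rows = conn.execute(
    "SELECT region, SUM(amount) AS revenue "
    "FROM orders GROUP BY region ORDER BY revenue DESC"
).fetchall()
# rows -> [("south", 200.0), ("north", 50.0)]
```

The same GROUP BY / ORDER BY shape carries over unchanged to Redshift, Athena, or BigQuery; only the connection layer and dialect details differ.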
If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail disability-accomodations@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able to participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities. (Keep in mind that this email address is used to provide reasonable accommodations for individuals with disabilities. We will not respond here to requests for updates on applications or resume inquiries.) Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law. To all Staffing and Recruiting Agencies: Our Careers Site is only for individuals seeking a job at Qualcomm. Staffing and recruiting agencies and individuals being represented by an agency are not authorized to use this site or to submit profiles, applications or resumes, and any such submissions will be considered unsolicited. Qualcomm does not accept unsolicited resumes or applications from agencies. Please do not forward resumes to our jobs alias, Qualcomm employees or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications. If you would like more information about this role, please contact Qualcomm Careers.

Posted 3 months ago

Apply

5 - 10 years

7 - 12 Lacs

Hyderabad

Work from Office


Job Area: Engineering Group, Engineering Group > Software Engineering

General Summary: As a leading technology innovator, Qualcomm pushes the boundaries of what's possible to enable next-generation experiences and drives digital transformation to help create a smarter, connected future for all. As a Qualcomm Software Engineer, you will design, develop, create, modify, and validate embedded and cloud edge software, applications, and/or specialized utility programs that launch cutting-edge, world class products that meet and exceed customer needs. Qualcomm Software Engineers collaborate with systems, hardware, architecture, test engineers, and other teams to design system-level software solutions and obtain information on performance requirements and interfaces.

Minimum Qualifications: Bachelor's degree in Engineering, Information Systems, Computer Science, or related field.
Preferred Qualifications
• 3+ years of experience as an ML Engineer or in a similar role.
• Experience with data modeling, data warehousing, and building ETL pipelines.
• Solid LLM experience.
• Solid working experience with Python and AWS analytical technologies and related resources (Glue, Athena, QuickSight, SageMaker, etc.).
• Experience with Big Data tools, platforms, and architecture, with solid working experience with SQL.
• Experience working in a very large data warehousing environment.
• Solid understanding of various data exchange formats and complexities.
• Industry experience in software development, data engineering, business intelligence, data science, or a related field, with a track record of manipulating, processing, and extracting value from large datasets.
• Strong data visualization skills.
• Understanding of machine learning; prior experience in ML engineering is a must.
• Ability to manage on-premises data and make it interoperate with AWS-based pipelines.
• Ability to interface with Wireless Systems/SW engineers and understand the Wireless ML domain; prior experience in the Wireless (5G) domain is a plus.

Education: Bachelor's degree in computer science, engineering, mathematics, or a related technical discipline. Preferred: Master's in CS/ECE with a Data Science / ML specialization.

Minimum Qualifications: Bachelor's degree in Engineering, Information Systems, Computer Science, or related field and 6+ years of Software Engineering or related work experience; OR Master's degree in Engineering, Information Systems, Computer Science, or related field and 5+ years of Software Engineering or related work experience; OR PhD in Engineering, Information Systems, Computer Science, or related field. 2+ years of experience with a programming language such as C, C++, Java, Python, etc.

Develops, creates, and modifies general computer applications software or specialized utility programs. Analyzes user needs and develops software solutions.
Designs software or customizes software for client use with the aim of optimizing operational efficiency. May analyze and design databases within an application area, working individually or coordinating database development as part of a team. Modifies existing software to correct errors, adapt it to new hardware, or improve its performance. Analyzes user needs and software requirements to determine the feasibility of a design within time and cost constraints. Confers with systems analysts, engineers, programmers, and others to design systems and to obtain information on project limitations and capabilities, performance requirements, and interfaces. Stores, retrieves, and manipulates data for analysis of system capabilities and requirements. Designs, develops, and modifies software systems, using scientific analysis and mathematical models to predict and measure outcomes and consequences of design.

Principal Duties and Responsibilities:
• Completes assigned coding tasks to specifications on time without significant errors or bugs.
• Adapts to changes and setbacks in order to manage pressure and meet deadlines.
• Collaborates with others inside the project team to accomplish project objectives.
• Communicates with the project lead to provide status and information about impending obstacles.
• Quickly resolves complex software issues and bugs.
• Gathers, integrates, and interprets information specific to a module or sub-block of code from a variety of sources in order to troubleshoot issues and find solutions.
• Seeks others' opinions and shares own opinions about ways in which a problem can be addressed differently.
• Participates in technical conversations with tech leads/managers.
• Anticipates and communicates issues with the project team to maintain open communication.
• Makes decisions based on incomplete or changing specifications and obtains adequate resources needed to complete assigned tasks.
• Prioritizes project deadlines and deliverables with minimal supervision.
• Resolves straightforward technical issues and escalates more complex technical issues to an appropriate party (e.g., project lead, colleagues).
• Writes readable code for large features or significant bug fixes to support collaboration with other engineers.
• Determines which work tasks are most important for self and junior engineers, stays focused, and deals with setbacks in a timely manner.
• Unit tests own code to verify the stability and functionality of a feature.

Posted 3 months ago

Apply

7 - 12 years

10 - 20 Lacs

Pune

Remote


We are looking for a Senior Data Engineer to join our team and help build a robust, scalable data platform that powers real estate solutions. If you have a passion for working with big data, real-time processing, and scalable architectures, we would love to hear from you.

Posted 3 months ago

Apply

12 - 20 years

27 - 42 Lacs

Trivandrum, Bengaluru, Hyderabad

Work from Office


Hiring for an AWS Big Data Architect who can join immediately with one of our clients.

Role: Big Data Architect / AWS Big Data Architect
Experience: 12+ years
Locations: Hyderabad, Bangalore, Gurugram, Kochi, Trivandrum
Shift Timings: overlap with UK timings (2-11 PM IST)
Notice Period: Immediate joiners / serving notice within 30 days

Required Skills & Qualifications:
• 12+ years of experience in Big Data architecture and engineering.
• Strong expertise in AWS (DMS, Kinesis, Athena, Glue, Lambda, S3, EMR, Redshift, etc.).
• Hands-on experience with Debezium and Kafka for real-time data streaming and synchronization.
• Proficiency in Spark optimization for batch processing improvements.
• Strong SQL and Oracle query optimization experience.
• Expertise in Big Data frameworks (Hadoop, Spark, Hive, Presto, Athena, etc.).
• Experience in CI/CD automation and integrating AWS services with DevOps pipelines.
• Strong problem-solving skills and ability to work in an Agile environment.

Preferred Skills (Good to Have):
• Experience with Dremio-to-Athena migrations.
• Exposure to cloud-native DR solutions on AWS.
• Strong analytical skills to document and implement performance improvements.

Contact: 9000336401 | chandana.n@kksoftwareassociates.com
For more job alerts, please follow: https://lnkd.in/gHMuPUXW

Posted 3 months ago

Apply

Exploring Athena Jobs in India

India's job market for Athena professionals is thriving, with numerous opportunities available for individuals skilled in this area. From entry-level positions to senior roles, companies across various industries are actively seeking talent with expertise in Athena to drive their businesses forward.

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Mumbai
  5. Chennai

Average Salary Range

The average salary range for Athena professionals in India varies based on experience and expertise. Entry-level positions can expect to earn around INR 4-7 lakhs per annum, while experienced professionals can command salaries ranging from INR 10-20 lakhs per annum.

Career Path

In the field of Athena, a typical career progression may include roles such as Junior Developer, Developer, Senior Developer, Tech Lead, and eventually positions like Architect or Manager. Continuous learning and upskilling are essential to advance in this field.

Related Skills

Apart from proficiency in Athena, professionals in this field are often expected to have skills such as SQL, data analysis, data visualization, AWS, and Python. Strong problem-solving abilities and attention to detail are also highly valued in Athena roles.

Interview Questions

  • What is Amazon Athena and how does it differ from traditional databases? (medium)
  • Can you explain how partitioning works in Athena? (advanced)
  • How do you optimize queries in Athena for better performance? (medium)
  • What are the best practices for managing data in Athena? (basic)
  • Have you worked with complex joins in Athena? Can you provide an example? (medium)
  • What is the difference between Amazon Redshift and Amazon Athena? (advanced)
  • How do you handle errors and exceptions in Athena queries? (medium)
  • Have you used User Defined Functions (UDFs) in Athena? If yes, explain a scenario where you implemented them. (advanced)
  • How do you schedule queries in Athena for automated execution? (medium)
  • Can you explain the different data types supported by Athena? (basic)
  • What security measures do you implement to protect sensitive data in Athena? (medium)
  • Have you worked with nested data structures in Athena? If yes, share your experience. (advanced)
  • How do you troubleshoot performance issues in Athena queries? (medium)
  • What is the significance of query caching in Athena and how does it work? (medium)
  • Can you explain the concept of query federation in Athena? (advanced)
  • How do you handle large datasets in Athena efficiently? (medium)
  • Have you integrated Athena with other AWS services? If yes, describe the integration process. (advanced)
  • How do you monitor query performance in Athena? (medium)
  • What are the limitations of Amazon Athena? (basic)
  • Have you worked on cost optimization strategies for Athena queries? If yes, share your approach. (advanced)
  • How do you ensure data security and compliance in Athena? (medium)
  • Can you explain the difference between serverless and provisioned query execution in Athena? (medium)
  • How do you handle complex data transformation tasks in Athena? (medium)
  • Have you implemented data lake architecture using Athena? If yes, describe the process. (advanced)
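Several of the questions above (partitioning, cost optimization, handling large datasets) hinge on one idea: with Hive-style partitioned tables, Athena only reads the S3 prefixes that match the partition predicates in the WHERE clause. A small Python sketch of that pruning behavior (bucket name, paths, and dates are made up for illustration):

```python
# Hive-style partition layout: s3://bucket/table/dt=YYYY-MM-DD/file.parquet
paths = [
    f"s3://my-bucket/events/dt=2024-01-{day:02d}/part-0.parquet"
    for day in range(1, 32)
]

def prune(paths, dt_from, dt_to):
    """Keep only partitions whose dt= value falls in [dt_from, dt_to]."""
    selected = []
    for p in paths:
        dt = p.split("dt=")[1].split("/")[0]  # extract the partition value
        if dt_from <= dt <= dt_to:
            selected.append(p)
    return selected

# A query filtered to one week touches 7 of the 31 daily partitions
week = prune(paths, "2024-01-08", "2024-01-14")
```

In real Athena the engine does this against the Glue catalog (or via partition projection) before any data is read, which is why a `WHERE dt BETWEEN ...` filter on a partition column cuts both scan time and per-TB cost.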

Closing Remark

As you explore opportunities in the Athena job market in India, remember to showcase your expertise, skills, and enthusiasm for the field during interviews. With the right preparation and confidence, you can land your dream job in this dynamic and rewarding industry. Good luck!
