
266 Athena Jobs - Page 10

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

13 - 20 years

25 - 40 Lacs

Bengaluru, Hyderabad, Gurgaon

Work from Office

Source: Naukri

Role & responsibilities: We are seeking a highly skilled Big Data Architect with deep expertise in AWS, Kafka, Debezium, and Spark. This role offers an exciting opportunity to be a critical player in optimizing query performance, data synchronization, and disaster recovery (DR) solutions, and in simplifying reporting workflows. The ideal candidate will have hands-on experience with a broad range of AWS-native services, big data processing frameworks, and CI/CD integrations to drive impactful system and performance enhancements.

Required Skills & Qualifications:
• 12+ years of experience in Big Data architecture and engineering, with a proven track record of successful large-scale data solutions.
• Extensive expertise in AWS services such as DMS, Kinesis, Athena, Glue, Lambda, S3, EMR, and Redshift.
• Hands-on experience with Debezium and Kafka for real-time data streaming, change data capture (CDC), and seamless data synchronization across systems (see the sketch below).
• Expertise in Spark optimization, particularly for batch processing improvements, including reducing job execution times and resource utilization.
• Strong SQL and Oracle query optimization skills, with a deep understanding of database performance tuning.
• Experience with Big Data frameworks such as Hadoop, Spark, Hive, Presto, and Athena.
• Proven background in CI/CD automation and integrating AWS services with DevOps pipelines.
• Exceptional problem-solving abilities and the capacity to work effectively in an Agile environment.

Skills: Data Architecture, AWS, Spark, SQL

Interested candidates, please share your updated resume with saideep.p@kksoftwareassociates.com or contact 9390510069.
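The Debezium-plus-Kafka CDC skills above center on one recurring pattern: a consumer reads the row-change events that Debezium publishes to Kafka topics and applies them downstream. As illustration only, here is a minimal Python sketch of that pattern, assuming Debezium's default JSON envelope; the topic name and broker address are hypothetical placeholders, not this employer's actual configuration.

```python
import json

from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "dbserver1.inventory.customers",       # hypothetical Debezium topic
    bootstrap_servers=["localhost:9092"],  # placeholder broker address
    value_deserializer=lambda b: json.loads(b.decode("utf-8")) if b else None,
    auto_offset_reset="earliest",
)

for message in consumer:
    event = message.value
    if event is None:  # tombstone record emitted after a delete
        continue
    payload = event.get("payload", event)  # envelope depends on converter settings
    op = payload.get("op")  # "c"=create, "u"=update, "d"=delete, "r"=snapshot read
    if op in ("c", "u", "r"):
        print("upsert:", payload.get("after"))   # row state after the change
    elif op == "d":
        print("delete:", payload.get("before"))  # last known row state
```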

Posted 3 months ago

Apply

2 - 7 years

3 - 6 Lacs

Mumbai

Work from Office

Source: Naukri

1. Good knowledge of S3.
2. Data lake concepts and performance optimizations in the data lake.
3. Data warehouse concepts and Amazon Redshift.
4. Athena and Redshift Spectrum.
5. Strong understanding of Glue concepts and the Glue Data Catalog; experienced in implementing end-to-end ETL solutions using AWS Glue with a variety of source and target systems.
6. Must be very strong in PySpark: able to implement all standard and complex ETL transformations using PySpark, and to apply performance optimization techniques using Spark and Spark SQL (a minimal sketch follows below).
7. Good knowledge of SQL is a must: able to implement all standard data transformations using SQL, and to analyze data stored in the Redshift data warehouse and data lakes.
8. Must have a good understanding of Athena and Redshift Spectrum.
9. Understanding of RDS.
10. Understanding of Database Migration Service and experience migrating from diverse databases.
11. Understanding of writing Lambda functions and layers for connecting to various services.
12. Understanding of CloudWatch, CloudWatch Events, and EventBridge, as well as orchestration tools in AWS such as Step Functions and Apache Airflow.
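Item 6 above asks for standard ETL transformations in PySpark within AWS Glue. As an illustration of that pattern only, here is a minimal Glue job sketch; the catalog database, table, and S3 path are hypothetical placeholders, not a specific employer pipeline.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a catalogued table into a DynamicFrame, then use a DataFrame
# for Spark SQL-style transformations.
source = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db",      # hypothetical Glue Data Catalog database
    table_name="raw_orders",  # hypothetical catalog table
)
df = source.toDF()

# Standard transformations: filter, derive a column, deduplicate.
cleaned = (
    df.filter(F.col("order_status") == "COMPLETE")
      .withColumn("order_date", F.to_date("order_ts"))
      .dropDuplicates(["order_id"])
)

# Write partitioned Parquet back to the lake, queryable via Athena or
# Redshift Spectrum through the same catalog.
cleaned.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-datalake/curated/orders/"  # placeholder path
)

job.commit()
```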

Posted 3 months ago

Apply

4 - 6 years

6 - 10 Lacs

Pune

Work from Office

Source: Naukri

As a Data Engineer at IBM, you'll play a vital role in development and application design, providing regular support and guidance to project teams on complex coding, issue resolution, and execution. Your primary responsibilities include:
• Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements.
• Strive for continuous improvement by testing the built solution and working under an agile framework.
• Discover and implement the latest technology trends to maximize value and build creative solutions.

Required education: Bachelor's degree. Preferred education: Master's degree.

Required technical and professional expertise:
• Design and develop data solutions: design and implement efficient data processing pipelines using AWS services such as AWS Glue, AWS Lambda, Amazon S3, and Amazon Redshift.
• Develop and manage ETL (Extract, Transform, Load) workflows to clean, transform, and load data into structured and unstructured storage systems (see the load sketch below).
• Build scalable data models and storage solutions in Amazon Redshift, DynamoDB, and other AWS services.
• Data integration: integrate data from multiple sources, including relational databases, third-party APIs, and internal systems, to create a unified data ecosystem; work with data engineers to optimize data workflows and ensure data consistency, reliability, and performance.
• Automation and optimization: automate data pipeline processes to ensure efficiency.

Preferred technical and professional experience:
• Define, drive, and implement an architecture strategy and standards for end-to-end monitoring.
• Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering.
• Good to have: experience with detection and prevention tools for company products, platforms, and customer-facing systems.
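One concrete form of the "load data into Amazon Redshift" step mentioned above is a COPY statement issued through the Redshift Data API. The sketch below is a generic illustration under assumed names; the cluster, database, IAM role, and S3 path are placeholders, not IBM's actual environment.

```python
import time

import boto3

client = boto3.client("redshift-data", region_name="us-east-1")

COPY_SQL = """
    COPY analytics.orders
    FROM 's3://example-datalake/curated/orders/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-load-role'
    FORMAT AS PARQUET;
"""

response = client.execute_statement(
    ClusterIdentifier="example-cluster",  # placeholder cluster
    Database="analytics",                 # placeholder database
    DbUser="etl_user",                    # placeholder database user
    Sql=COPY_SQL,
)

# The Data API is asynchronous, so poll until the statement finishes.
while True:
    status = client.describe_statement(Id=response["Id"])["Status"]
    if status in ("FINISHED", "FAILED", "ABORTED"):
        print("COPY ended with status:", status)
        break
    time.sleep(5)
```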

Posted 3 months ago

Apply

12 - 17 years

35 - 60 Lacs

Chennai, Bengaluru

Hybrid

Source: Naukri

At ZoomInfo, we encourage creativity, value innovation, demand teamwork, expect accountability, and cherish results. We value your take-charge, take-initiative, get-stuff-done attitude and will help you unlock your growth potential. One great choice can change everything. Thrive with us at ZoomInfo.

ZoomInfo is a rapidly growing data-driven company, and as such we understand the importance of a comprehensive and solid data solution to support decision making in our organization. Our vision is to have a consistent, democratized, and accessible single source of truth for all company data analytics and reporting. Our goal is to improve decision-making processes by having the right information available when it is needed. As a Principal Software Engineer in our Data Platform infrastructure team, you'll have a key role in building and designing the strategy of our Enterprise Data Engineering group.

What You'll Do:
• Design and build a highly scalable data platform to support data pipelines for diversified and complex data flows.
• Track and identify relevant new technologies in the market and push their implementation into our pipelines through research and POC activities.
• Deliver scalable, reliable, and reusable data solutions.
• Lead, build, and continuously improve our data gathering, modeling, and reporting capabilities and self-service data platforms.
• Work closely with Data Engineers, Data Analysts, Data Scientists, Product Owners, and Domain Experts to identify data needs.
• Develop processes and tools to monitor, analyze, maintain, and improve data operation, performance, and usability.

What You Bring:
• Relevant Bachelor's degree or equivalent software engineering background.
• 12+ years of experience as an infrastructure / data platform / big data software engineer.
• Experience with AWS/GCP cloud services such as GCS/S3, Lambda/Cloud Functions, EMR/Dataproc, Glue/Dataflow, and Athena.
• IaC design and hands-on experience.
• Familiarity with designing CI/CD pipelines using Jenkins, GitHub Actions, or similar tools.
• Experience designing, building, and maintaining enterprise systems in a big data environment on a public cloud.
• Strong SQL abilities and hands-on experience performing analysis and performance optimizations.
• Hands-on experience in Python or an equivalent programming language.
• Experience administering data warehouse solutions (such as BigQuery, Redshift, or Snowflake).
• Experience with data modeling, data catalog concepts, data formats, and data pipeline/ETL design, implementation, and maintenance.
• Experience with Airflow and dbt - an advantage.
• Experience with Kubernetes using GKE or EKS - an advantage.
• Experience with development practices such as Agile and TDD - an advantage.

Posted 3 months ago

Apply

7 - 11 years

15 - 20 Lacs

Chennai, Bengaluru, Hyderabad

Work from Office

Source: Naukri

JD: Sr. PIM Developer
Mandatory skills: Java, SQL, MongoDB, AWS, RabbitMQ or ActiveMQ or Kafka
Experience: 7-9 years. Location: Chennai/Bangalore/Hyderabad. Shift: 2:30 PM to 11:30 PM (IST), work from office. Notice period: immediate to 15 days.

• Should have 7+ years of experience in development and 5+ years with the Enterworks PIM application / enterprise applications.
• Should have worked in a dynamic environment, accommodating requested changes.
• Should be able to understand the Enterworks Enable tool and its inbound/outbound integrations.
• Should be able to troubleshoot, debug, and upgrade existing applications.
• Should have experience working with data store applications.
• Develop clean, efficient code to all specifications and standards.
• Deploy new changes to non-prod and prod environments.
• Work in collaboration with business analysts and testers to enhance applications based on market requests and backlog stories.
• Collaborate with the vendor on new product features, upgrades, and enhancements.
• Work in collaboration with infrastructure teams on patching activities.
• Develop serverless apps in AWS (Lambda with Node.js, API Gateway, Athena/Glue).
• Prior experience working with the Enterworks Enable application.

Immediate joining or short notice is required.

Posted 3 months ago

Apply

5 - 10 years

18 - 30 Lacs

Hyderabad

Work from Office

Source: Naukri

Job Overview: As a Data Engineer, you will be responsible for designing, developing, and optimizing data pipelines and workflows that handle large volumes of data. You will work closely with our data science and analytics teams to ensure data is readily available for analysis and reporting. This role requires expertise in AWS services, particularly Kinesis, Glue, Lambda, Step Functions, and Redshift. Additionally, the ideal candidate should have strong SQL skills and experience with one of the programming languages below, with a preference for Node.js.

Role & responsibilities:
• Design, develop, and maintain data pipelines: create scalable and efficient data pipelines using AWS services such as Kinesis, Glue, Lambda, Step Functions, and Redshift.
• Data integration: integrate data from various sources into a centralized data warehouse, ensuring data consistency, quality, and security.
• ETL processes: develop and manage ETL processes using AWS Glue and other relevant tools to transform and load data into Redshift or other databases.
• Real-time data processing: implement real-time data processing solutions using AWS Kinesis and Lambda to handle streaming data (a minimal sketch follows below).
• Automation: automate data workflows and processes using AWS Step Functions and other orchestration tools.
• Performance optimization: optimize SQL queries and database performance for efficient data retrieval and reporting.
• Collaboration: work closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver data solutions that meet their needs.
• Data quality and governance: ensure high standards of data quality and implement data governance best practices across all data pipelines.
• Documentation: maintain a clear and comprehensive data dictionary along with documentation of data pipelines, architectures, and processes.

Preferred candidate profile:
• 4+ years of experience in data engineering or a related field.
• Hands-on experience with AWS services, specifically Kinesis, Glue, Lambda, Step Functions, and Redshift.
• Expert-level SQL knowledge, with a proven track record of writing complex queries and optimizing database performance.
• Proficiency in a programming language such as Python, Scala, or Node.js.
• Strong understanding of ETL processes, data warehousing concepts, and real-time data processing.
• Experience with data modelling, schema design, and database optimization.
• Ability to work with large datasets and troubleshoot data issues.
• Familiarity with data governance, data quality, and security best practices.
• Excellent problem-solving skills and attention to detail.
• Strong communication skills, with the ability to explain technical concepts to non-technical stakeholders.
• Ability to work independently and as part of a team in a fast-paced environment.
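For the real-time bullet above, the canonical Kinesis-to-Lambda pattern is a handler that base64-decodes each record in the delivered batch. The JD prefers Node.js; this sketch uses Python for consistency with the rest of the page, and every payload field name in it is a hypothetical example.

```python
import base64
import json


def handler(event, context):
    """Process a batch of Kinesis records delivered to this Lambda."""
    for record in event["Records"]:
        # Kinesis data arrives base64-encoded inside the event envelope.
        raw = base64.b64decode(record["kinesis"]["data"])
        payload = json.loads(raw)

        # Hypothetical downstream step: route events by type.
        if payload.get("event_type") == "order_placed":
            print("order:", payload.get("order_id"))

    # With ReportBatchItemFailures enabled, an empty list signals that
    # the whole batch succeeded.
    return {"batchItemFailures": []}
```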

Posted 3 months ago

Apply

12 - 22 years

30 - 45 Lacs

Pune, Bengaluru, Hyderabad

Work from Office

Source: Naukri

Job opportunity | Big Data Architect | MNC
Required experience: 12+ years
Job locations: Bangalore, Hyderabad, Trivandrum, Gurugram, Kochi, Pune
Shift: overlap with UK timings (2-11 PM IST)

Required Skills & Qualifications:
• 5+ years of experience in Big Data engineering.
• Strong expertise in AWS services such as DMS, Kinesis, Athena, Glue, Lambda, S3, and EMR.
• Hands-on experience with Spark optimizations for performance improvements.
• Proficiency in SQL and Oracle query tuning for high-performance data retrieval.
• Experience with Big Data frameworks (Hadoop, Spark, Hive, Presto, Athena).
• Good understanding of Kafka/Debezium for data streaming.
• Exposure to CI/CD automation and AWS DevOps tools.
• Strong problem-solving and troubleshooting skills.

Posted 3 months ago

Apply

7 - 12 years

10 - 15 Lacs

Hyderabad

Work from Office

Source: Naukri

US shift (until 11 PM IST).

Primary skills:
• Data architecture for all AWS data services
• ETL processes to load data into the data warehouse
• Athena, Python, Glue, Lambda, DMS, RDS, Redshift, CloudFormation
• Writing SQL queries to support data analysis and reporting
• Reports and dashboards to visualize data

Seeking a developer with good experience in Athena, Python, Glue, Lambda, DMS, RDS, Redshift, CloudFormation, and other AWS serverless resources, who can optimize data models for performance and efficiency and write SQL queries to support data analysis and reporting.

Responsibilities:
• Design, implement, and maintain the data architecture for all AWS data services.
• Work with stakeholders to identify business needs and requirements for data-related projects.
• Design and implement ETL processes to load data into the data warehouse.
• Good experience in Athena, Python, Glue, Lambda, DMS, RDS, Redshift, CloudFormation, and other AWS serverless resources.

Posted 3 months ago

Apply

8 - 13 years

4 - 8 Lacs

Bengaluru

Work from Office

Source: Naukri

Seeking a developer with good experience in Athena, Python, Glue, Lambda, DMS, RDS, Redshift, CloudFormation, and other AWS serverless resources, who can optimize data models for performance and efficiency and write SQL queries to support data analysis and reporting.

Responsibilities:
• Design, implement, and maintain the data architecture for all AWS data services.
• Work with stakeholders to identify business needs and requirements for data-related projects.
• Design and implement ETL processes to load data into the data warehouse.

Posted 3 months ago

Apply

8 - 13 years

30 - 45 Lacs

Hyderabad

Work from Office

Source: Naukri

As a Data Engineer in the Data Infrastructure and Strategy group, you will play a key role in transforming the way Operations Finance teams access and analyse data. You will work to advance the 3-Year Data Infrastructure Modernisation strategy and play a key role in adopting and expanding a unified data access platform and scalable governance and observability frameworks that follow modern data architecture and cloud-first designs. Your responsibilities will include supporting and migrating the data analytics use cases of a targeted customer group and implementing new features within the central platform: component design, implementation using NAWS services following best engineering practices, user acceptance testing, launch, adoption, and post-launch support. You will work on system design and integrate new components into the established architecture. You will engage in cross-team collaboration by building reusable design patterns and components and adopting designs created by others. You will contribute to buy-vs-build decisions by evaluating the latest product and feature releases for NAWS and internal products, performing gap analysis, and defining the feasibility of their adoption and the list of blockers.

The ideal candidate possesses a track record of creating efficient AWS-based data solutions; building data models for both relational databases and the Glue/Athena/EMR stack; and developing solution documentation, project plans, user guides, and other project documentation. We are looking for individual contributors inspired to become data systems architects. A track record of production-level deliverables leveraging GenAI is a big plus.

Key job responsibilities:
* Elevate and optimize existing solutions while driving strategic migration. Conduct thorough impact assessments to identify opportunities for transformative re-architecture or migration to central platforms. Your insights will shape the technology roadmap, ensuring we make progress towards deprecation goals while providing the best customer service;
* Design, review, and implement data solutions that support WW Operations Finance standardisation and automation initiatives using AWS technologies and internally built tools, including Spark/EMR, Redshift, Athena, DynamoDB, Lambda, S3, Glue, Lake Formation, etc.;
* Support data solutions adoption by both finance and technical teams; identify and remove adoption blockers;
* Ensure speed of delivery and high quality: iteratively improve the development process and adopt mechanisms for optimisation of development and support;
* Contribute to engineering excellence by reviewing designs and code created by others;
* Contribute to delivery execution, planning, operational excellence, retrospectives, problem identification, and solution proposals;
* Collaborate with finance analysts, engineers, product and program managers, and external teams to influence and optimize the value of delivery in the data platform;
* Create technical and customer-facing documentation on the products within the platform.

A day in the life: You work with the Engineering, Product, BI, and Operations teams to elevate existing data platforms and implement best-of-class data solutions for the Operations Finance organization. You solve unstructured customer pain points with technical solutions, and you focus on users' productivity when working with the data. You participate in discussions with stakeholders to provide updates on project progress, gather feedback, and align on priorities. Utilizing AWS CDK and various AWS services, you design, execute, and deploy solutions; your broader focus is on system architecture rather than individual pipelines (a minimal CDK sketch follows below). You regularly review your designs with a Principal Engineer and incorporate the gathered insights. Conscious of your impact on customers and infrastructure, you establish efficient development and change management processes to guarantee the speed, quality, and scalability of the delivered solution.

About the team: Operations Finance Standardization and Automation improves customer experience and business outcomes across Amazon Operations Finance through innovative technical solutions, standardization and automation of processes, and use of modern data analytics technologies.

Basic qualifications:
* MS or BS in Computer Science, Electrical Engineering, or similar fields;
* Strong AWS engineering background, with a 3+ year demonstrated track record designing and operating data solutions in Native AWS. The right person will be highly technical and analytical, with the ability to drive technical execution towards organization goals;
* Exceptional triaging and bug-fixing skills; ability to assess risks and implement fixes without customer impact;
* Strong data modelling experience; 3+ years of data modeling practice is required. Expertise in designing both analytical and operational data models is a must. The candidate needs to demonstrate working knowledge of trade-offs in data model designs and platform-specific considerations, with concentration in Redshift, MySQL, EMR/Spark, and Athena;
* Excellent knowledge of modern data architecture concepts - data lakes, data lakehouses - as well as governance practices;
* Strong documentation skills and a proven ability to adapt a document to its audience. The ability to communicate information at levels ranging from executive summaries and strategy addendums to detailed design specifications is critical to success;
* Excellent communication skills, both written and oral; ability to communicate technical complexity to a wide range of stakeholders.

Preferred qualifications:
* Data governance frameworks experience;
* Compliance frameworks experience, SOX preferred;
* Familiarity or production-level experience with AI-based AWS offerings (Bedrock) is a plus.
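To make the "AWS CDK and various AWS services" description concrete, here is a minimal CDK (Python) sketch of the kind of infrastructure such a role defines: an S3 results bucket plus a governed Athena workgroup. The stack, bucket, and workgroup names are hypothetical; this is not Amazon's actual internal design.

```python
from aws_cdk import App, RemovalPolicy, Stack
from aws_cdk import aws_athena as athena
from aws_cdk import aws_s3 as s3
from constructs import Construct


class DataPlatformStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Bucket that will hold Athena query results.
        results_bucket = s3.Bucket(
            self,
            "AthenaResults",
            removal_policy=RemovalPolicy.RETAIN,  # keep data if the stack is deleted
            encryption=s3.BucketEncryption.S3_MANAGED,
        )

        # A dedicated workgroup governs the query output location.
        athena.CfnWorkGroup(
            self,
            "FinanceWorkGroup",
            name="finance-analytics",  # hypothetical workgroup name
            work_group_configuration=athena.CfnWorkGroup.WorkGroupConfigurationProperty(
                result_configuration=athena.CfnWorkGroup.ResultConfigurationProperty(
                    output_location=results_bucket.s3_url_for_object("results/")
                )
            ),
        )


app = App()
DataPlatformStack(app, "DataPlatformStack")
app.synth()
```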

Posted 3 months ago

Apply

3 - 8 years

5 - 10 Lacs

Chennai

Work from Office

Source: Naukri

As a Production Support Engineer at Chola MS General Insurance, you will be responsible for supporting, maintaining, documenting, expanding, and optimizing our data lake, data warehouse, data pipelines, and data products.

Required candidate profile: a minimum of 6 years on a Data Engineering / Data Analytics platform. Conduct root-cause analysis as and when needed and propose a corrective action plan. Follow the established set of processes while handling issues.

Posted 3 months ago

Apply

6 - 8 years

15 - 20 Lacs

Chennai, Pune

Work from Office

Source: Naukri

Role: Senior Cloud Data Engineer. Location: Pune/Chennai. Experience: 6 to 8 years.

What awaits you / Job profile:
• You will design, develop, and optimize large-scale data pipelines, taking ownership of critical components of the data architecture and ensuring performance, security, and compliance.
• Design and implement scalable data pipelines for batch and real-time processing.
• Optimize data storage and computing resources to improve cost and performance.
• Ensure data security and compliance with industry regulations.
• Collaborate with data scientists, analysts, and application teams to align data storage strategies.
• Lead technical discussions with stakeholders to deliver the best possible solutions.
• Automate data workflows and develop reusable frameworks.
• Monitor and troubleshoot ETL pipelines, jobs, and cloud services.

What you should bring along:
• 6+ years of experience in AWS cloud services and data engineering.
• Strong expertise in data modeling, ETL pipeline design, and SQL query optimization.
• Prior experience working with streaming solutions like Kafka.
• Excellent knowledge of Terraform and GitHub Actions.
• Experience leading a feature team.

Must-have technical skills:
• Python, PySpark, Hive, Unix
• AWS (S3, Lambda, Glue, Athena, RDS, Step Functions, SNS, SQS, API Gateway)
• MySQL, Oracle, NoSQL, and experience writing complex queries

Good-to-have technical skills:
• Cloud Architect certification
• Terraform
• Git, GitHub Actions, Jenkins
• Delta Lake, Iceberg
• Experience in AI-powered data solutions

Posted 3 months ago

Apply

4 - 9 years

20 - 25 Lacs

Bengaluru

Work from Office

Source: Naukri

Hands-on experience with AWS, PySpark, SQL, and AWS services: Compute (EC2, Lambda), Storage (S3), Database, Orchestration (Apache Airflow, Step Functions), ETL (Glue, EMR, Athena, Redshift), Infrastructure, and Data Migration (AWS DataSync, AWS DMS).

Posted 3 months ago

Apply

5 - 10 years

18 - 30 Lacs

Pune

Hybrid

Source: Naukri

About the company: Headquartered in California, U.S.A., GSPANN provides consulting and IT services to global clients. We help clients transform how they deliver business value by helping them optimize their IT capabilities, practices, and operations with our experience in retail, high-technology, and manufacturing. With five global delivery centers and 1,900+ employees, we provide the intimacy of a boutique consultancy with the capabilities of a large IT services firm.

Job position (title): Big Data Engineers / Leads
Experience required: 4 to 10 years
Location: Bangalore, Hyderabad, Pune
Technical skill requirements: Big Data, any cloud (AWS/Azure/GCP), SQL, PySpark, Spark, Python, Hive, Airflow

Please share the following details along with your profile: full name, email ID, contact number, total years of experience, relevant experience (Big Data, cloud), rating on SQL, any other technology, notice period, CTC, expected CTC, current company, current location, preferred location, any offers in hand (if yes, please mention), and interview availability (face-to-face or virtual).

Posted 3 months ago

Apply

5 - 10 years

18 - 30 Lacs

Bengaluru

Hybrid

Source: Naukri

About the company: Headquartered in California, U.S.A., GSPANN provides consulting and IT services to global clients. We help clients transform how they deliver business value by helping them optimize their IT capabilities, practices, and operations with our experience in retail, high-technology, and manufacturing. With five global delivery centers and 1,900+ employees, we provide the intimacy of a boutique consultancy with the capabilities of a large IT services firm.

Job position (title): Big Data Engineers / Leads
Experience required: 4 to 10 years
Location: Bangalore, Hyderabad, Pune
Technical skill requirements: Big Data, any cloud (AWS/Azure/GCP), SQL, PySpark, Spark, Python, Hive, Airflow

Please share the following details along with your profile: full name, email ID, contact number, total years of experience, relevant experience (Big Data, cloud), rating on SQL, any other technology, notice period, CTC, expected CTC, current company, current location, preferred location, any offers in hand (if yes, please mention), and interview availability (face-to-face or virtual).

Posted 3 months ago

Apply

5 - 10 years

18 - 30 Lacs

Hyderabad

Hybrid

Source: Naukri

About the company: Headquartered in California, U.S.A., GSPANN provides consulting and IT services to global clients. We help clients transform how they deliver business value by helping them optimize their IT capabilities, practices, and operations with our experience in retail, high-technology, and manufacturing. With five global delivery centers and 1,900+ employees, we provide the intimacy of a boutique consultancy with the capabilities of a large IT services firm.

Job position (title): Big Data Engineers / Leads
Experience required: 4 to 10 years
Location: Bangalore, Hyderabad, Pune
Technical skill requirements: Big Data, any cloud (AWS/Azure/GCP), SQL, PySpark, Spark, Python, Hive, Airflow

Please share the following details along with your profile: full name, email ID, contact number, total years of experience, relevant experience (Big Data, cloud), rating on SQL, any other technology, notice period, CTC, expected CTC, current company, current location, preferred location, any offers in hand (if yes, please mention), and interview availability (face-to-face or virtual).

Posted 3 months ago

Apply

4 - 9 years

18 - 30 Lacs

Pune

Hybrid

Source: Naukri

About the company: Headquartered in California, U.S.A., GSPANN provides consulting and IT services to global clients. We help clients transform how they deliver business value by helping them optimize their IT capabilities, practices, and operations with our experience in retail, high-technology, and manufacturing. With five global delivery centers and 1,900+ employees, we provide the intimacy of a boutique consultancy with the capabilities of a large IT services firm.

Job position (title): Big Data Engineers / Leads
Experience required: 4 to 10 years
Location: Bangalore (CV Raman Nagar, Baghmane Road)
Technical skill requirements: Big Data, any cloud (AWS/Azure/GCP), SQL, PySpark, Spark, Python, Hive, Airflow

Please share the following details along with your profile: full name, email ID, contact number, total years of experience, relevant experience (Big Data, cloud), rating on SQL, any other technology, notice period, CTC, expected CTC, current company, current location, preferred location, any offers in hand (if yes, please mention), and interview availability (face-to-face or virtual).

Posted 3 months ago

Apply

4 - 9 years

18 - 30 Lacs

Bengaluru

Hybrid

Source: Naukri

About the company: Headquartered in California, U.S.A., GSPANN provides consulting and IT services to global clients. We help clients transform how they deliver business value by helping them optimize their IT capabilities, practices, and operations with our experience in retail, high-technology, and manufacturing. With five global delivery centers and 1,900+ employees, we provide the intimacy of a boutique consultancy with the capabilities of a large IT services firm.

Job position (title): Big Data Engineers / Leads
Experience required: 4 to 10 years
Location: Bangalore (CV Raman Nagar, Baghmane Road)
Technical skill requirements: Big Data, any cloud (AWS/Azure/GCP), SQL, PySpark, Spark, Python, Hive, Airflow

Please share the following details along with your profile: full name, email ID, contact number, total years of experience, relevant experience (Big Data, cloud), rating on SQL, any other technology, notice period, CTC, expected CTC, current company, current location, preferred location, any offers in hand (if yes, please mention), and interview availability (face-to-face or virtual).

Posted 3 months ago

Apply

4 - 9 years

18 - 30 Lacs

Hyderabad

Hybrid

Source: Naukri

About the company: Headquartered in California, U.S.A., GSPANN provides consulting and IT services to global clients. We help clients transform how they deliver business value by helping them optimize their IT capabilities, practices, and operations with our experience in retail, high-technology, and manufacturing. With five global delivery centers and 1,900+ employees, we provide the intimacy of a boutique consultancy with the capabilities of a large IT services firm.

Job position (title): Big Data Engineers / Leads
Experience required: 4 to 10 years
Location: Bangalore (CV Raman Nagar, Baghmane Road)
Technical skill requirements: Big Data, any cloud (AWS/Azure/GCP), SQL, PySpark, Spark, Python, Hive, Airflow

Please share the following details along with your profile: full name, email ID, contact number, total years of experience, relevant experience (Big Data, cloud), rating on SQL, any other technology, notice period, CTC, expected CTC, current company, current location, preferred location, any offers in hand (if yes, please mention), and interview availability (face-to-face or virtual).

Posted 3 months ago

Apply

5 - 10 years

11 - 21 Lacs

Pune

Work from Office

Source: Naukri

Exciting Career Opportunities at GSPANN Technologies! Are you a seasoned professional with expertise in Big Data, Spark, SQL, Redshift, Python/PySpark, Hive, and AWS (S3, EMR)? Experience required: 4+ years. Send your resume to heena.ruchwani@gspann.com and join our dynamic team!

Posted 3 months ago

Apply

5 - 10 years

11 - 21 Lacs

Gurgaon

Work from Office

Source: Naukri

Exciting Career Opportunities at GSPANN Technologies! Are you a seasoned professional with expertise in Big Data, Spark, SQL, Redshift, Python/PySpark, Hive, and AWS (S3, EMR)? Experience required: 4+ years. Send your resume to heena.ruchwani@gspann.com and join our dynamic team!

Posted 3 months ago

Apply

6 - 11 years

15 - 30 Lacs

Bengaluru

Work from Office

Source: Naukri

Join Our Team at GSPANN! We are looking for a talented Big Data Engineer to join our dynamic team in Bangalore. If you have a passion for data solutions and a strong problem-solving mindset, we want to hear from you.

Role: Big Data Engineer. Location: Bangalore. Experience: 7+ years. Skills: Hadoop, Hive, Spark, SQL, Python, AWS.

Key responsibilities:
• Participate in all phases of the software development lifecycle.
• Solve complex business problems with scalable solutions.
• Design and implement product features in collaboration with stakeholders.
• Optimize data at scale for ingestion and consumption.
• Support new data management projects and restructure the current data architecture.

Required skills:
• 4+ years of experience developing data and analytics solutions.
• Experience with AWS, EMR, S3, Hive, and PySpark.
• Proficiency in SQL and Python.
• Knowledge of source control tools such as GitHub.
• Experience with workflow scheduling tools such as Airflow.
• Strong communication and analytical skills.

Interested? Send your resume to heena.ruchwani@gspann.com and be a part of our innovative team!

Posted 3 months ago

Apply

3 - 4 years

4 - 4 Lacs

Chennai

Work from Office

Source: Naukri

Greetings from AnnexMed! We have openings for AR Caller / Senior AR Caller.

Mode of interview: virtual. Domain: US healthcare - medical billing. Shift timing: night shift, 6:00 PM - 3:00 AM (Saturday & Sunday fixed off). Job location: Perungudi, Chennai. Notice period: immediate joiners.

Job description (AR Caller / Senior AR Caller):
• Calling insurance companies on behalf of doctors/physicians for claim status.
• Following up with insurance companies to check the status of outstanding claims.
• Minimum 1 year of experience in AR calling and denial management.
• Experience in physician & hospital billing and in Athena.

Benefits:
1. Salary & appraisal - best in industry
2. Excellent learning platform with a great opportunity to build a career in medical billing
3. Upfront leave credit
4. 5-day work week
5. Medical insurance
6. Pick-up & drop facility for both male and female employees

Interested candidates can reach out to 9600316324 - Indumathi (HR).

Posted 3 months ago

Apply

5 - 10 years

7 - 11 Lacs

Bengaluru

Work from Office

Source: Naukri

Project role: Data Platform Engineer
Project role description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must-have skills: AWS Glue
Good-to-have skills: NA

Posted 3 months ago

Apply

5 - 7 years

5 - 9 Lacs

Nagpur

Work from Office

Source: Naukri

Project role: Application Developer
Project role description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: AWS Architecture
Good-to-have skills: Python (programming language)
Minimum 5 years of experience is required.

Summary: As an Application Developer, you will be responsible for designing, building, and configuring applications to meet business process and application requirements using AWS architecture. Your typical day will involve working with AWS services, developing and testing code, and collaborating with cross-functional teams to deliver high-quality solutions.

Roles & responsibilities:
• Drive automation and integrate with CI/CD tools for continuous validation.
• Drive a mentality of building well-architected applications for the AWS cloud.
• Drive a mentality of quality being owned by the entire team.
• Identify code defects and work with other developers to address quality issues in product code.
• Find bottlenecks and thresholds in existing code through the use of automation tools.
• Articulate clear business objectives aligned to technical specifications, and work in an iterative, agile pattern daily.
• Take ownership of your work tasks, interact comfortably with all levels of the team, and raise challenges when necessary.

Professional & technical skills:
• Core code production for back-end, middle-tier, and front-end applications.
• Deploying and developing AWS cloud applications and services end to end.
• Operational triage of bugs, failed test cases, and system failures.
• Creating and optimizing infrastructure performance metrics.
• Mapping user stories to detailed technical specifications.
• Completing detailed peer code reviews.
• Architecting pilots and proofs of concept to spur innovation.
• Working in all stages of the development lifecycle.
• Automation of manual data object creation and test cases.
• Asking smart questions, collaborating, taking risks, and championing new ideas.
• Extensive experience with AWS or other cloud technologies, including Glue, Lambda, S3, IAM, VPC, EC2, Athena, CloudWatch, DynamoDB, and RDS.
• Understanding of the serverless mindset in architectural solutioning.
• Strong Terraform IaC experience.
• Experience with DevOps and CI/CD tools: Jenkins, CloudBees, Please Build, etc.
• Proficiency with OOP languages such as Python, Java, or Scala (Python preferred).
• Proficiency working with large data stores and data sets.
• Deep understanding of database concepts and design for SQL (primarily) and NoSQL (secondarily): schema design, optimization, scalability, etc.
• Solid experience with Git version control and a good understanding of code branching strategies and organization for code reuse.

Qualification: NA

Posted 3 months ago

Apply

Exploring Athena Jobs in India

India's job market for Athena professionals is thriving, with numerous opportunities available for individuals skilled in this area. From entry-level positions to senior roles, companies across various industries are actively seeking talent with Athena expertise to drive their businesses forward.

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Mumbai
  5. Chennai

Average Salary Range

The average salary range for Athena professionals in India varies based on experience and expertise. Entry-level positions can expect to earn around INR 4-7 lakhs per annum, while experienced professionals can command salaries of INR 10-20 lakhs per annum.

Career Path

In the Athena field, a typical career progression may include roles such as Junior Developer, Developer, Senior Developer, Tech Lead, and eventually Architect or Manager. Continuous learning and upskilling are essential to advance in this field.

Related Skills

Apart from proficiency in Athena, professionals in this field are often expected to have skills such as SQL, data analysis, data visualization, AWS, and Python. Strong problem-solving abilities and attention to detail are also highly valued in Athena roles.

Interview Questions

  • What is Amazon Athena and how does it differ from traditional databases? (medium)
  • Can you explain how partitioning works in Athena? (advanced; see the sketch after this list)
  • How do you optimize queries in Athena for better performance? (medium)
  • What are the best practices for managing data in Athena? (basic)
  • Have you worked with complex joins in Athena? Can you provide an example? (medium)
  • What is the difference between Amazon Redshift and Amazon Athena? (advanced)
  • How do you handle errors and exceptions in Athena queries? (medium)
  • Have you used User Defined Functions (UDFs) in Athena? If yes, explain a scenario where you implemented them. (advanced)
  • How do you schedule queries in Athena for automated execution? (medium)
  • Can you explain the different data types supported by Athena? (basic)
  • What security measures do you implement to protect sensitive data in Athena? (medium)
  • Have you worked with nested data structures in Athena? If yes, share your experience. (advanced)
  • How do you troubleshoot performance issues in Athena queries? (medium)
  • What is the significance of query caching in Athena and how does it work? (medium)
  • Can you explain the concept of query federation in Athena? (advanced)
  • How do you handle large datasets in Athena efficiently? (medium)
  • Have you integrated Athena with other AWS services? If yes, describe the integration process. (advanced)
  • How do you monitor query performance in Athena? (medium)
  • What are the limitations of Amazon Athena? (basic)
  • Have you worked on cost optimization strategies for Athena queries? If yes, share your approach. (advanced)
  • How do you ensure data security and compliance in Athena? (medium)
  • Can you explain the difference between serverless and provisioned query execution in Athena? (medium)
  • How do you handle complex data transformation tasks in Athena? (medium)
  • Have you implemented data lake architecture using Athena? If yes, describe the process. (advanced)
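As referenced in the partitioning question above, here is a minimal sketch of how partitioning and query execution fit together in practice: a partitioned external table plus a partition-pruned query submitted through boto3. All database, table, and S3 names are hypothetical placeholders.

```python
import time

import boto3

athena = boto3.client("athena", region_name="us-east-1")

DDL = """
CREATE EXTERNAL TABLE IF NOT EXISTS logs.events (
    event_id string,
    event_type string
)
PARTITIONED BY (dt string)  -- partition column limits which S3 prefixes are scanned
STORED AS PARQUET
LOCATION 's3://example-datalake/events/';
"""

# Filtering on the partition key means Athena reads only that day's data,
# which is the core of query and cost optimization in Athena.
QUERY = (
    "SELECT event_type, count(*) AS n "
    "FROM logs.events WHERE dt = '2024-01-01' GROUP BY event_type;"
)


def run(sql: str) -> str:
    """Submit one statement and block until it reaches a terminal state."""
    qid = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": "logs"},  # placeholder database
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )["QueryExecutionId"]
    while True:
        state = athena.get_query_execution(QueryExecutionId=qid)[
            "QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            return state
        time.sleep(2)


print(run(DDL))
print(run("MSCK REPAIR TABLE logs.events;"))  # register partitions already in S3
print(run(QUERY))
```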

Closing Remark

As you explore opportunities in the Athena job market in India, remember to showcase your expertise, skills, and enthusiasm for the field during interviews. With the right preparation and confidence, you can land your dream job in this dynamic and rewarding industry. Good luck!
