
390 Glue Jobs - Page 13

JobPe aggregates listings for easy access to applications, but you apply directly on the original job portal.

4 - 9 years

25 - 30 Lacs

Chennai, Delhi NCR, Bengaluru

Work from Office

Naukri logo

We're seeking an experienced Data Engineer to join our team on a contract basis. The ideal candidate will design, develop, and maintain data pipelines and architectures using Fivetran, Airflow, AWS, and other technologies.

Responsibilities:
- Design, build, and maintain data pipelines using Fivetran, Airflow, and AWS Glue
- Develop and optimize data warehousing solutions using Amazon Redshift
- Implement data transformation and loading processes using AWS Athena and SQL
- Ensure data quality, security, and compliance
- Collaborate with cross-functional teams to integrate data solutions
- Troubleshoot and optimize data pipeline performance
- Implement data governance and monitoring using AWS services

Requirements:
- 7-10 years of experience in data engineering
- Strong expertise in Fivetran, Airflow, DB2, AWS (Glue, Athena, Redshift, Lambda), Python, and SQL
- Experience with data warehousing, ETL, and data pipeline design
- Strong problem-solving and analytical skills
- Excellent communication and collaboration skills

Location: Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad
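Listings like this one lean heavily on an orchestrator (Airflow) to sequence extract, transform, and load steps. As a dependency-free illustration of the ordering such an orchestrator resolves, here is a sketch using Python's standard-library graphlib; the task names are invented for the example:

```python
# A dependency-free sketch of the task ordering an orchestrator such as
# Airflow resolves before running a pipeline. The task names (extract,
# transform, load_redshift, quality_check) are invented for illustration.
from graphlib import TopologicalSorter

# Each task maps to the set of tasks that must finish before it runs.
pipeline = {
    "extract": set(),
    "transform": {"extract"},
    "load_redshift": {"transform"},
    "quality_check": {"load_redshift"},
}

def run_order(dag):
    """Return one valid execution order for the pipeline DAG."""
    return list(TopologicalSorter(dag).static_order())

print(run_order(pipeline))  # extract, transform, load_redshift, quality_check
```

In a real Airflow DAG the same dependencies would be declared with operators and `>>` chaining; the ordering logic being resolved is the same.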

Posted 3 months ago

Apply

10 - 14 years

12 - 16 Lacs

Pune

Work from Office


Client expectations beyond the JD: longer AWS data engineering experience (Glue, Spark, ECR/ECS, Docker), Python, PySpark, Hudi/Iceberg, Terraform, Kafka. Java early in the career would be a great addition, but it is not a priority (for the OOP side and Java connectors).

Posted 3 months ago

Apply

4 - 9 years

6 - 16 Lacs

Bengaluru

Work from Office


Job Description

Responsibilities:
- Maintain ETL pipelines using SQL, Spark, AWS Glue, and Redshift.
- Optimize existing pipelines for performance and reliability.
- Troubleshoot and resolve UAT/PROD pipeline issues, ensuring minimal downtime.
- Implement data quality checks and monitoring to ensure data accuracy.
- Collaborate with other teams and stakeholders to understand data requirements.

Required Skills and Experience:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Extensive experience (e.g., 6+ years) in designing, developing, and maintaining ETL pipelines.
- Strong proficiency in SQL and experience with relational databases (e.g., Redshift, PostgreSQL).
- Hands-on experience with Apache Spark and distributed computing frameworks.
- Solid understanding of AWS Glue and other AWS data services.
- Experience with data warehousing concepts and best practices.
- Excellent problem-solving and troubleshooting skills.
- Strong communication and collaboration skills.
- Experience with version control systems (e.g., Git).
- Experience with workflow orchestration tools (e.g., Airflow).

Preferred Skills and Experience:
- Experience with other cloud platforms (e.g., Azure, GCP).
- Knowledge of data modeling and data architecture principles.
- Experience with data visualization tools (e.g., Tableau, Power BI).
- Familiarity with Agile development methodologies.

Posted 3 months ago

Apply

4 - 8 years

5 - 15 Lacs

Chennai, Bengaluru, Hyderabad

Work from Office


Experience:
- Data integration, pipeline development, and data warehousing, with a strong focus on AWS Databricks.
- Proficiency in Databricks platform management and optimization.
- Strong experience in AWS Cloud, particularly in data engineering and administration, with expertise in Apache Spark, S3, Athena, Glue, Kafka, Lambda, Redshift, and RDS.
- Proven experience in data engineering performance tuning and analytical understanding in business and program contexts.
- Solid experience in Python development, specifically PySpark within the AWS Cloud environment, including experience with Terraform.
- Knowledge of databases (Oracle, SQL Server, PostgreSQL, Redshift, MySQL, or similar) and advanced database querying.
- Experience with source control systems (Git, Bitbucket) and Jenkins for build and continuous integration.
- Understanding of continuous deployment (CI/CD) processes.
- Experience with Airflow and additional Apache Spark knowledge is advantageous.
- Exposure to ETL tools, including Informatica.

Job Responsibilities:
- Administer, manage, and optimize the Databricks environment to ensure efficient data processing and pipeline development.
- Perform advanced troubleshooting, query optimization, and performance tuning in a Databricks environment.
- Collaborate with development teams to guide, optimize, and refine data solutions within the Databricks ecosystem.
- Ensure high performance in data handling and processing, including the optimization of Databricks jobs and clusters.
- Engage with and support business teams to deliver data and analytics projects effectively.
- Manage source control systems and utilize Jenkins for continuous integration.
- Actively participate in the entire software development lifecycle, focusing on data integrity and efficiency within Databricks.

Location: PAN India
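Several of these roles stress performance tuning through partitioning. Below is a dependency-free sketch of why a date-partitioned layout (the pattern behind Spark, Athena, and Databricks tables) keeps scans cheap: a filter on the partition column maps to a handful of storage prefixes. The bucket and table names are invented for the example.

```python
# Illustrative sketch: a dt= partitioned S3 layout lets a date-range
# filter touch only the matching prefixes instead of the whole table.
# Bucket and table names are made up for the example.
from datetime import date, timedelta

def partition_paths(bucket, table, start, end):
    """S3-style dt= partition prefixes a date-range query would scan."""
    days = (end - start).days + 1
    return [
        f"s3://{bucket}/{table}/dt={start + timedelta(d):%Y-%m-%d}/"
        for d in range(days)
    ]

# A 3-day filter touches 3 daily partitions, not the whole table.
paths = partition_paths("demo-bucket", "events", date(2024, 1, 1), date(2024, 1, 3))
```

The same idea is why engines document "partition pruning": the fewer prefixes a predicate selects, the less data is read and billed.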

Posted 3 months ago

Apply

7 - 10 years

0 - 3 Lacs

Mumbai

Hybrid


Role & responsibilities:
1. Senior Data Analyst with 7+ years of experience.
2. AWS data stack design and development experience.
3. Experience in implementing data pipelines and products.

Location: Bangalore, Pune, Chennai

Posted 3 months ago

Apply

5 - 10 years

7 - 12 Lacs

Mumbai

Hybrid


Role & responsibilities:
- Proven experience as a Python Developer.
- Experience with AWS services.
- Design and implement robust and scalable applications using Python.
- Develop, test, and debug Python applications and scripts.
- Integrate user-facing elements with server-side logic.
- Implement security and data protection measures.

Posted 3 months ago

Apply

7 - 10 years

0 - 3 Lacs

Bangalore Rural

Hybrid


Role & responsibilities:
1. Senior Data Analyst with 7+ years of experience.
2. AWS data stack design and development experience.
3. Experience in implementing data pipelines and products.

Location: Bangalore, Pune, Chennai

Posted 3 months ago

Apply

7 - 10 years

0 - 3 Lacs

Pune

Hybrid


Role & responsibilities:
1. Senior Data Analyst with 7+ years of experience.
2. AWS data stack design and development experience.
3. Experience in implementing data pipelines and products.

Location: Bangalore, Pune, Chennai

Posted 3 months ago

Apply

7 - 10 years

0 - 3 Lacs

Bengaluru

Hybrid


Role & responsibilities:
1. Senior Data Analyst with 7+ years of experience.
2. AWS data stack design and development experience.
3. Experience in implementing data pipelines and products.

Location: Bangalore, Pune, Chennai

Posted 3 months ago

Apply

7 - 10 years

0 - 3 Lacs

Hyderabad

Hybrid


Role & responsibilities:
1. Senior Data Analyst with 7+ years of experience.
2. AWS data stack design and development experience.
3. Experience in implementing data pipelines and products.

Location: Bangalore, Pune, Chennai

Posted 3 months ago

Apply

5 - 10 years

7 - 12 Lacs

Bengaluru

Work from Office


Title: Principal Data Engineer (Associate Director)
Department: ISS
Reports To: Head of Data Platform - ISS
Grade: 7

We're proud to have been helping our clients build better financial futures for over 50 years. How have we achieved this? By working together, and supporting each other, all over the world. So, join our team and feel like you're part of something bigger.

Department Description: The ISS Data Engineering Chapter is an engineering group comprised of three sub-chapters (Data Engineers, Data Platform, and Data Visualisation) that supports the ISS Department. Fidelity is embarking on several strategic programmes of work that will create a data platform to support the next evolutionary stage of our investment process. These programmes span asset classes and include Portfolio and Risk Management, Fundamental and Quantitative Research, and Trading.

Purpose of your role: This role sits within the ISS Data Platform Team, which is responsible for building and maintaining the platform that enables the ISS business to operate. The role is appropriate for a Lead Data Engineer capable of taking ownership of, and delivering, a subsection of the wider data platform.

Key Responsibilities:
- Design, develop and maintain scalable data pipelines and architectures to support data ingestion, integration and analytics.
- Be accountable for technical delivery and take ownership of solutions.
- Lead a team of senior and junior developers, providing mentorship and guidance.
- Collaborate with enterprise architects, business analysts and stakeholders to understand data requirements, validate designs and communicate progress.
- Drive technical innovation within the department to increase code reusability, code quality and developer productivity.
- Challenge the status quo by bringing the very latest data engineering practices and techniques.

Essential Skills and Experience

Core Technical Skills:
- Expert in leveraging cloud-based data platform (Snowflake, Databricks) capabilities to create an enterprise lakehouse.
- Advanced expertise with the AWS ecosystem and experience using a variety of core AWS data services, such as Lambda, EMR, MSK, Glue, and S3.
- Experience designing event-based or streaming data architectures using Kafka.
- Advanced expertise in Python and SQL. Open to expertise in Java/Scala, but enterprise experience of Python is required.
- Expert in designing, building and using CI/CD pipelines to deploy infrastructure (Terraform) and pipelines with test automation.

Data Security & Performance Optimization:
- Experience implementing data access controls to meet regulatory requirements.
- Experience using both RDBMS (Oracle, Postgres, MSSQL) and NoSQL (Dynamo, OpenSearch, Redis) offerings.
- Experience implementing CDC ingestion.
- Experience using orchestration tools (Airflow, Control-M, etc.).

Bonus Technical Skills:
- Strong experience in containerisation and deploying applications to Kubernetes.
- Strong experience in API development using Python-based frameworks like FastAPI.

Key Soft Skills:
- Problem-Solving: Leadership experience in problem-solving and technical decision-making.
- Communication: Strong in strategic communication and stakeholder engagement.
- Project Management: Experienced in overseeing project lifecycles, working with Project Managers to manage resources.

Posted 3 months ago

Apply

5 - 10 years

7 - 12 Lacs

Bengaluru

Hybrid


Role & responsibilities:
- Proven experience as a Python Developer.
- Experience with AWS services.
- Design and implement robust and scalable applications using Python.
- Develop, test, and debug Python applications and scripts.
- Integrate user-facing elements with server-side logic.
- Implement security and data protection measures.

Posted 3 months ago

Apply

5 - 10 years

7 - 12 Lacs

Hyderabad

Hybrid


Role & responsibilities:
- Proven experience as a Python Developer.
- Experience with AWS services.
- Design and implement robust and scalable applications using Python.
- Develop, test, and debug Python applications and scripts.
- Integrate user-facing elements with server-side logic.
- Implement security and data protection measures.

Posted 3 months ago

Apply

4 - 8 years

6 - 15 Lacs

Bangalore Rural

Hybrid


Immediate opening!
Skills: AWS, Lambda, Spark, PySpark, Hive, Python
Experience: 4+ years
Location: Bangalore, Hyderabad, Pune
Notice period: Immediate to 30 days
Mode: Contract

Posted 3 months ago

Apply

4 - 8 years

6 - 15 Lacs

Pune

Hybrid


Immediate opening!
Skills: AWS, Lambda, Spark, PySpark, Hive, Python
Experience: 4+ years
Location: Bangalore, Hyderabad, Pune
Notice period: Immediate to 30 days
Mode: Contract

Posted 3 months ago

Apply

4 - 8 years

6 - 15 Lacs

Bengaluru

Hybrid


Immediate opening!
Skills: AWS, Lambda, Spark, PySpark, Hive, Python
Experience: 4+ years
Location: Bangalore, Hyderabad, Pune
Notice period: Immediate to 30 days
Mode: Contract

Posted 3 months ago

Apply

4 - 8 years

6 - 15 Lacs

Hyderabad

Hybrid


Immediate opening!
Skills: AWS, Lambda, Spark, PySpark, Hive, Python
Experience: 4+ years
Location: Bangalore, Hyderabad, Pune
Notice period: Immediate to 30 days
Mode: Contract

Posted 3 months ago

Apply

6 - 11 years

10 - 12 Lacs

Pune

Work from Office


We are looking for highly skilled Data Engineers to join our team for a long-term offshore position. The ideal candidates will have 5+ years of experience in data engineering, with a strong focus on Python and programming. The role requires proficiency in leveraging AWS services to build efficient, cost-effective datasets that support business reporting and AI/ML exploration. Candidates must demonstrate the ability to functionally understand client requirements and deliver optimized datasets for multiple downstream applications. The selected individuals will work under the guidance of an onsite lead and closely with client stakeholders to meet business objectives.

Key Responsibilities:
- Cloud Infrastructure: Design and implement scalable, cost-effective data pipelines on the AWS platform using services like S3, Athena, Glue, RDS, etc. Manage and optimize data storage strategies for efficient retrieval and integration with other applications. Support the ingestion and transformation of large datasets for reporting and analytics.
- Tooling and Automation: Develop and maintain automation scripts using Python to streamline data processing workflows. Integrate tools and frameworks like PySpark to optimize performance and resource utilization. Implement monitoring and error-handling mechanisms to ensure reliability and scalability.
- Collaboration and Communication: Work closely with the onsite lead and client teams to gather and understand functional requirements. Collaborate with business stakeholders and the Data Science team to provide datasets suitable for reporting and AI/ML exploration. Document processes, provide regular updates, and ensure transparency in deliverables.
- Data Analysis and Reporting: Optimize AWS service utilization to maintain cost-efficiency while meeting performance requirements. Provide insights on data usage trends and support the development of reporting dashboards for cloud costs.
- Security and Compliance: Ensure secure handling of sensitive data with encryption (e.g., AES-256, TLS) and role-based access control using AWS IAM. Maintain compliance with organizational and industry regulations.

Required Skills:
- 5+ years of experience in data engineering with a strong emphasis on AWS platforms.
- Hands-on expertise with AWS services such as S3, Glue, Athena, RDS, etc.
- Proficiency in Python for building data pipelines that ingest data and integrate it across applications.
- Demonstrated ability to design and develop scalable data pipelines and workflows.
- Strong problem-solving skills and the ability to troubleshoot complex data issues.

Preferred Skills:
- Experience with Big Data technologies, including Spark, Kafka, and Scala, for distributed data processing.
- Hands-on expertise with AWS Big Data services such as EMR, DynamoDB, Athena, Glue, and MSK (Managed Streaming for Kafka).
- Familiarity with on-premises Big Data platforms and tools for data processing and streaming.
- Proficiency in scheduling data workflows using Apache Airflow or similar orchestration tools like One Automation, Control-M, etc.
- Strong understanding of DevOps practices, including CI/CD pipelines and automation tools.
- Prior experience in the telecommunications domain, with a focus on large-scale data systems and workflows.
- AWS certifications (e.g., Solutions Architect, Data Analytics Specialty) are a plus.
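For pipelines that query S3 data through Athena, the request shape is small. Here is a hedged, dependency-free sketch of the argument dict that boto3's `start_query_execution` call for Athena expects; the database name, SQL, and output location are invented for the example.

```python
# Hedged sketch of the kwargs boto3's Athena client takes for
# start_query_execution, built as a plain dict so the example needs no
# AWS dependency. Database, query and output location are invented.
def athena_request(sql, database, output_s3):
    """Build the kwargs for athena.start_query_execution (boto3)."""
    return {
        "QueryString": sql,
        "QueryExecutionContext": {"Database": database},
        "ResultConfiguration": {"OutputLocation": output_s3},
    }

req = athena_request(
    "SELECT dt, count(*) AS n FROM events GROUP BY dt",
    database="analytics",
    output_s3="s3://demo-bucket/athena-results/",
)
```

With boto3 installed, `boto3.client("athena").start_query_execution(**req)` would submit the query and return an execution id to poll for results.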

Posted 3 months ago

Apply

8 - 13 years

12 - 18 Lacs

Hyderabad

Work from Office


- Strong AWS (Glue, Lambda, SQS, SNS) and ETL development experience of more than 5 years.
- Good experience in end-to-end implementation of AWS development projects, especially Glue and Lambda services.
- Good experience with an ETL tool, preferably Matillion.
- Knowledge of the dbt tool and Snowflake is a must.
- Very strong English communication, written and verbal.
- Strong SQL skills.
- Good understanding of ServiceNow and JIRA ticketing tools.

Work experience: 8+ years

Posted 3 months ago

Apply

8 - 13 years

12 - 18 Lacs

Bengaluru

Work from Office


- Strong AWS (Glue, Lambda, SQS, SNS) and ETL development experience of more than 5 years.
- Good experience in end-to-end implementation of AWS development projects, especially Glue and Lambda services.
- Good experience with an ETL tool, preferably Matillion.
- Knowledge of the dbt tool and Snowflake is a must.
- Very strong English communication, written and verbal.
- Strong SQL skills.
- Good understanding of ServiceNow and JIRA ticketing tools.

Work experience: 8+ years

Posted 3 months ago

Apply

1 - 6 years

3 - 8 Lacs

Hyderabad

Work from Office


Job Area: Engineering Group > Software Engineering

General Summary: We are looking for a talented, motivated leader with experience in building scalable cloud services, infrastructure, and processes. As part of the IoT (Internet of Things) team, you will be working on the next generation of IoT products. As a Business Intelligence Engineer (BIE), the ideal candidate will solve unique and complex problems at a rapid pace, utilizing the latest technologies to create solutions that are highly scalable. You will have deep expertise in gathering requirements and insights, mining large and diverse data sets, data visualization, writing complex SQL queries, building rapid prototypes using Python/R, and generating insights that enable senior leaders to make critical business decisions.

Key job responsibilities: You will utilize your deep expertise in business analysis, metrics, reporting, and analytic tools/languages like SQL, Excel, and others to translate data into meaningful insights through collaboration with scientists, software engineers, data engineers, and business analysts. You will have end-to-end ownership of the operational, financial, and technical aspects of the insights you are building for the business, and will play an integral role in strategic decision-making.

- Conduct deep-dive analyses of business problems and formulate conclusions and recommendations to be presented to senior leadership
- Produce recommendations and insights that will help shape effective metric development and reporting for key stakeholders
- Simplify and automate reporting, audits, and other data-driven activities
- Partner with engineering teams to enhance data infrastructure, data availability, and broad access to customer insights
- Develop and drive best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation
- Learn new technology and techniques to meaningfully support product and process innovation

BASIC QUALIFICATIONS
- 1+ years of experience using SQL to query data from databases, data warehouses, or cloud data sources (e.g., Redshift, MySQL, PostgreSQL, MS SQL Server, BigQuery).
- Experience with data visualization using Tableau, Power BI, QuickSight, or similar tools.
- Bachelor's degree in Statistics, Economics, Math, Finance, Engineering, Computer Science, Information Systems, or a related quantitative field.
- Ability to operate successfully and independently in a fast-paced environment.
- Comfort with ambiguity and eagerness to learn new skills.
- Knowledge of cloud services (AWS, GCP, and/or Azure) is a must.

PREFERRED QUALIFICATIONS
- Experience with AWS solutions such as EC2, Athena, Glue, S3, and Redshift.
- Experience in data mining, ETL, etc., and using databases in a business environment with large-scale, complex datasets.
- Experience creating and building predictive/optimization tools that benefit the business and improve customer experience.
- Experience articulating business questions and using quantitative techniques to drive insights for the business.
- Experience dealing with technical and non-technical senior-level managers.

Minimum Qualifications: Bachelor's degree in Engineering, Information Systems, Computer Science, or a related field.

Applicants: Qualcomm is an equal opportunity employer.
If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail disability-accomodations@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able to participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities. (Keep in mind that this email address is used to provide reasonable accommodations for individuals with disabilities. We will not respond here to requests for updates on applications or resume inquiries.) Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law. To all Staffing and Recruiting Agencies: Our Careers Site is only for individuals seeking a job at Qualcomm. Staffing and recruiting agencies and individuals being represented by an agency are not authorized to use this site or to submit profiles, applications or resumes, and any such submissions will be considered unsolicited. Qualcomm does not accept unsolicited resumes or applications from agencies. Please do not forward resumes to our jobs alias, Qualcomm employees or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications. If you would like more information about this role, please contact Qualcomm Careers.

Posted 3 months ago

Apply

5 - 10 years

7 - 12 Lacs

Hyderabad

Work from Office


Job Area: Engineering Group > Software Engineering

General Summary: As a leading technology innovator, Qualcomm pushes the boundaries of what's possible to enable next-generation experiences and drives digital transformation to help create a smarter, connected future for all. As a Qualcomm Software Engineer, you will design, develop, create, modify, and validate embedded and cloud edge software, applications, and/or specialized utility programs that launch cutting-edge, world-class products that meet and exceed customer needs. Qualcomm Software Engineers collaborate with systems, hardware, architecture, test engineers, and other teams to design system-level software solutions and obtain information on performance requirements and interfaces.

Minimum Qualifications: Bachelor's degree in Engineering, Information Systems, Computer Science, or a related field.
Preferred Qualifications:
- 3+ years of experience as an ML Engineer or in a similar role
- Experience with data modeling, data warehousing, and building ETL pipelines
- Solid LLM experience
- Solid working experience with Python and AWS analytical technologies and related resources (Glue, Athena, QuickSight, SageMaker, etc.)
- Experience with Big Data tools, platforms, and architecture, with solid working experience with SQL
- Experience working in a very large data warehousing environment
- Solid understanding of various data exchange formats and complexities
- Industry experience in software development, data engineering, business intelligence, data science, or a related field, with a track record of manipulating, processing, and extracting value from large datasets
- Strong data visualization skills
- Understanding of machine learning; prior experience in ML engineering is a must
- Ability to manage on-premises data and make it inter-operate with AWS-based pipelines
- Ability to interface with wireless systems/SW engineers and understand the wireless ML domain; prior experience in the wireless (5G) domain is a plus

Education: Bachelor's degree in computer science, engineering, mathematics, or a related technical discipline. Preferred: Master's in CS/ECE with a Data Science/ML specialization.

Minimum Qualifications: Bachelor's degree in Engineering, Information Systems, Computer Science, or a related field and 6+ years of software engineering or related work experience; OR Master's degree in a related field and 5+ years of such experience; OR PhD in a related field. 2+ years of experience with a programming language such as C, C++, Java, Python, etc.

Develops, creates, and modifies general computer applications software or specialized utility programs. Analyzes user needs and develops software solutions.
Designs software or customizes software for client use with the aim of optimizing operational efficiency. May analyze and design databases within an application area, working individually or coordinating database development as part of a team. Modifies existing software to correct errors, allow it to adapt to new hardware, or to improve its performance. Analyzes user needs and software requirements to determine feasibility of design within time and cost constraints. Confers with systems analysts, engineers, programmers and others to design systems and to obtain information on project limitations and capabilities, performance requirements and interfaces. Stores, retrieves, and manipulates data for analysis of system capabilities and requirements. Designs, develops, and modifies software systems, using scientific analysis and mathematical models to predict and measure outcomes and consequences of design.

Principal Duties and Responsibilities: Completes assigned coding tasks to specifications on time without significant errors or bugs. Adapts to changes and setbacks in order to manage pressure and meet deadlines. Collaborates with others inside the project team to accomplish project objectives. Communicates with the project lead to provide status and information about impending obstacles. Quickly resolves complex software issues and bugs. Gathers, integrates, and interprets information specific to a module or sub-block of code from a variety of sources in order to troubleshoot issues and find solutions. Seeks others' opinions and shares own opinions with others about ways in which a problem can be addressed differently. Participates in technical conversations with tech leads/managers. Anticipates and communicates issues with the project team to maintain open communication. Makes decisions based on incomplete or changing specifications and obtains adequate resources needed to complete assigned tasks. Prioritizes project deadlines and deliverables with minimal supervision.
Resolves straightforward technical issues and escalates more complex technical issues to an appropriate party (e.g., project lead, colleagues). Writes readable code for large features or significant bug fixes to support collaboration with other engineers. Determines which work tasks are most important for self and junior engineers, stays focused, and deals with setbacks in a timely manner. Unit tests own code to verify the stability and functionality of a feature.

Posted 3 months ago

Apply

2 - 7 years

4 - 9 Lacs

Mumbai

Work from Office


- Project hands-on experience in AWS cloud services
- Good knowledge of SQL and experience working with databases like Oracle, MS SQL, etc.
- Experience with AWS services such as S3, RDS, EMR, Redshift, Glue, SageMaker, DynamoDB, and Lambda

Posted 3 months ago

Apply

6 - 10 years

30 - 35 Lacs

Bengaluru

Work from Office


We are seeking an experienced PySpark Developer / Data Engineer to design, develop, and optimize big data processing pipelines using Apache Spark and Python (PySpark). The ideal candidate should have expertise in distributed computing, ETL workflows, data lake architectures, and cloud-based big data solutions.

Key Responsibilities:
- Develop and optimize ETL/ELT data pipelines using PySpark on distributed computing platforms (Hadoop, Databricks, EMR, HDInsight).
- Work with structured and unstructured data to perform data transformation, cleansing, and aggregation.
- Implement data lake and data warehouse solutions on AWS (S3, Glue, Redshift), Azure (ADLS, Synapse), or GCP (BigQuery, Dataflow).
- Optimize PySpark jobs with performance tuning, partitioning, and caching strategies.
- Design and implement real-time and batch data processing solutions.
- Integrate data pipelines with Kafka, Delta Lake, Iceberg, or Hudi for streaming and incremental updates.
- Ensure data security, governance, and compliance with industry best practices.
- Work with data scientists and analysts to prepare and process large-scale datasets for machine learning models.
- Collaborate with DevOps teams to deploy, monitor, and scale PySpark jobs using CI/CD pipelines, Kubernetes, and containerization.
- Perform unit testing and validation to ensure data integrity and reliability.

Required Skills & Qualifications:
- 6+ years of experience in big data processing, ETL, and data engineering.
- Strong hands-on experience with PySpark (Apache Spark with Python).
- Expertise in SQL, the DataFrame API, and RDD transformations.
- Experience with big data platforms (Hadoop, Hive, HDFS, Spark SQL).
- Knowledge of cloud data processing services (AWS Glue, EMR, Databricks, Azure Synapse, GCP Dataflow).
- Proficiency in writing optimized queries and in partitioning and indexing for performance tuning.
- Experience with workflow orchestration tools like Airflow, Oozie, or Prefect.
- Familiarity with containerization and deployment using Docker, Kubernetes, and CI/CD pipelines.
- Strong understanding of data governance, security, and compliance (GDPR, HIPAA, CCPA, etc.).
- Excellent problem-solving, debugging, and performance optimization skills.
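The ETL pattern this role describes, extracting raw records, cleansing and transforming them, then writing partitioned output, can be sketched in plain Python. The data, function names, and file layout below are invented for illustration; a real job would use PySpark's DataFrame API (e.g. `df.write.partitionBy(...)`) rather than the standard library.

```python
import csv
import io
import os
import tempfile
from collections import defaultdict

# Toy ETL sketch: extract rows, cleanse/transform them, then write one
# output directory per event date, mirroring the Hive-style layout a
# PySpark job produces with df.write.partitionBy("event_date").
RAW = """event_date,user,amount
2024-01-01,alice,10.5
2024-01-01,bob,not_a_number
2024-01-02,carol,7.25
"""

def extract(text):
    """Read CSV text into a list of dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Cleanse: coerce amount to float, dropping malformed records."""
    clean = []
    for row in rows:
        try:
            row = dict(row, amount=float(row["amount"]))
        except ValueError:
            continue  # skip bad records instead of failing the job
        clean.append(row)
    return clean

def partition_by(rows, key):
    """Group rows by a partition column."""
    parts = defaultdict(list)
    for row in rows:
        parts[row[key]].append(row)
    return dict(parts)

def load(rows, out_dir):
    """Write one partition directory per event_date."""
    written = []
    for date, part in sorted(partition_by(rows, "event_date").items()):
        path = os.path.join(out_dir, f"event_date={date}")
        os.makedirs(path, exist_ok=True)
        with open(os.path.join(path, "part-00000.csv"), "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=["event_date", "user", "amount"])
            writer.writeheader()
            writer.writerows(part)
        written.append(f"event_date={date}")
    return written

out_dir = tempfile.mkdtemp()
print(load(transform(extract(RAW)), out_dir))
# ['event_date=2024-01-01', 'event_date=2024-01-02']
```

Note how the malformed row is dropped during transform rather than aborting the run, the same data-quality trade-off the responsibilities above describe for cleansing steps.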

Posted 3 months ago

Apply

12 - 20 years

27 - 42 Lacs

Trivandrum, Bengaluru, Hyderabad

Work from Office


Hiring for an AWS Big Data Architect who can join immediately with one of our clients.

Role: Big Data Architect / AWS Big Data Architect
Experience: 12+ years
Locations: Hyderabad, Bangalore, Gurugram, Kochi, Trivandrum
Shift Timings: overlap with UK timings (2-11 PM IST)
Notice Period: Immediate joiners / serving notice within 30 days

Required Skills & Qualifications:
- 12+ years of experience in Big Data architecture and engineering.
- Strong expertise in AWS (DMS, Kinesis, Athena, Glue, Lambda, S3, EMR, Redshift, etc.).
- Hands-on experience with Debezium and Kafka for real-time data streaming and synchronization.
- Proficiency in Spark optimization for batch processing improvements.
- Strong SQL and Oracle query optimization experience.
- Expertise in Big Data frameworks (Hadoop, Spark, Hive, Presto, Athena, etc.).
- Experience in CI/CD automation and integrating AWS services with DevOps pipelines.
- Strong problem-solving skills and ability to work in an Agile environment.

Preferred Skills (Good to Have):
- Experience with Dremio-to-Athena migrations.
- Exposure to cloud-native DR solutions on AWS.
- Strong analytical skills to document and implement performance improvements.

For more details, contact: 9000336401, Mail ID: chandana.n@kksoftwareassociates.com
For more job alerts, please follow: https://lnkd.in/gHMuPUXW

Posted 3 months ago

Apply

Exploring Glue Jobs in India

In recent years, demand for professionals with expertise in glue technologies, most prominently AWS Glue, has been rising in India. Glue jobs involve working with tools and platforms that connect systems and applications and move data between them. This article provides an overview of the glue job market in India, including top hiring locations, average salary ranges, career progression, related skills, and interview questions for aspiring job seekers.

Top Hiring Locations in India

Here are 5 major cities in India actively hiring for glue roles:
1. Bangalore
2. Pune
3. Hyderabad
4. Chennai
5. Mumbai

Average Salary Range

The estimated salary range for glue professionals in India varies with experience. Entry-level professionals can expect to earn around INR 4-6 lakhs per annum, while experienced professionals can earn INR 12-18 lakhs per annum.

Career Path

In the field of glue technologies, a typical career progression may include roles such as:
- Junior Developer
- Senior Developer
- Tech Lead
- Architect

Related Skills

Apart from expertise in glue technologies, professionals in this field are often expected to have or develop skills in:
- Data integration
- ETL (Extract, Transform, Load) processes
- Database management
- Programming languages (e.g., Python, Java)

Interview Questions

Here are 25 interview questions for glue roles:
- What is Glue in the context of data integration? (basic)
- Explain the difference between ETL and ELT. (basic)
- How would you handle data quality issues in a glue job? (medium)
- Can you explain how Glue works with Apache Spark? (medium)
- What is the significance of schema evolution in Glue? (medium)
- How do you optimize Glue jobs for performance? (medium)
- Describe a scenario where you had to troubleshoot a failed Glue job. (medium)
- What is a bookmark in Glue and how is it used? (medium)
- How does Glue handle schema inference? (medium)
- Have you worked with AWS Glue DataBrew? If so, explain your experience. (medium)
- Explain how Glue handles schema evolution. (advanced)
- How does Glue support job bookmarks for incremental processing? (advanced)
- What are the differences between Glue ETL and Glue DataBrew? (advanced)
- How do you handle nested JSON structures in Glue transformations? (advanced)
- Explain a complex Glue job you have designed and implemented. (advanced)
- How does Glue handle dynamic frame operations? (advanced)
- What is the role of a Glue DynamicFrame in data transformation? (advanced)
- How do you handle schema changes in Glue jobs? (advanced)
- Explain how Glue can be integrated with other AWS services. (advanced)
- What are the limitations of Glue that you have encountered in your projects? (advanced)
- How do you monitor and debug Glue jobs in production environments? (advanced)
- Describe your experience with Glue job scheduling and orchestration. (advanced)
- How do you ensure security in Glue jobs that handle sensitive data? (advanced)
- Explain the concept of lazy evaluation in Glue. (advanced)
- How do you handle dependencies between Glue jobs in a workflow? (advanced)
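Several of the questions above concern Glue job bookmarks, which persist state between runs so each run processes only data it has not seen before. A minimal sketch of the idea in plain Python, assuming a simple timestamp high-water mark; the state shape and names here are invented for illustration, since Glue itself stores and commits bookmark state for you:

```python
# Hypothetical sketch of Glue-style job bookmarks: persist a high-water
# mark between runs so each run handles only records newer than the
# last committed position. Not the actual Glue API.
bookmark = {"last_ts": 0}  # Glue would persist this between job runs

def run_job(records, bookmark):
    """Process only records past the bookmark, then advance it."""
    new = [r for r in records if r["ts"] > bookmark["last_ts"]]
    if new:
        bookmark["last_ts"] = max(r["ts"] for r in new)  # commit bookmark
    return new

events = [{"ts": 1, "v": "a"}, {"ts": 2, "v": "b"}]
first = run_job(events, bookmark)   # first run processes both records
events.append({"ts": 3, "v": "c"})
second = run_job(events, bookmark)  # second run sees only the new record
print(len(first), len(second))  # 2 1
```

The same high-water-mark pattern underlies most incremental-processing answers: the job's correctness depends on committing the bookmark only after the new records have been durably written.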

Closing Remark

As you prepare for interviews and explore opportunities in the glue job market in India, remember to showcase your expertise in glue technologies, related skills, and problem-solving abilities. With the right preparation and confidence, you can land a rewarding career in this dynamic and growing field. Good luck!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies