
23 Bigdata Frameworks Jobs

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 - 8.0 years

20 - 35 Lacs

Pune, Chennai, Bengaluru

Hybrid

Greetings from LTIMindtree!

About the job: Are you looking for a new career challenge? With LTIMindtree, are you ready to embark on a data-driven career? Working for a leading global manufacturing client, providing an engaging product experience through best-in-class PIM implementation and building rich, relevant, and trusted product information across channels and digital touchpoints so end customers can make informed purchase decisions, will surely be a fulfilling experience.

Location: Pan India
Key Skill: Hadoop, Spark, SparkSQL, Java

Interested candidates, kindly apply via the link below and share an updated CV to Hemalatha1@ltimindtree.com
https://forms.office.com/r/zQucNTxa2U

Skills needed:
1. Hands-on experience with Java and Big Data technologies including Spark, Hive, and Impala
2. Experience with a streaming framework such as Kafka
3. Hands-on experience with object storage; should be able to develop data archival and retrieval patterns
4. Good to have: experience with any public cloud platform such as AWS, Azure, or GCP
5. Ready to upskill as needed on project technologies, e.g., Ab Initio

Why join us?
- Work on industry-leading implementations for Tier-1 clients
- Accelerated career growth and global exposure
- Collaborative, inclusive work environment rooted in innovation
- Exposure to a best-in-class automation framework
- Innovation-first culture: we embrace automation, AI insights, and clean data

Know someone who fits this perfectly? Tag them – let's connect the right talent with the right opportunity. DM or email to know more. Let's build something great together.
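For candidates brushing up on the Kafka streaming skill named above, here is a minimal, illustrative sketch of consuming a Kafka topic with Spark Structured Streaming and archiving it to storage (shown in PySpark for brevity, though this posting targets Java; the broker, topic, schema, and paths are all hypothetical):

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, from_json
    from pyspark.sql.types import StructType, StructField, StringType, DoubleType

    # Requires the spark-sql-kafka package on the classpath.
    spark = SparkSession.builder.appName("kafka-ingest-sketch").getOrCreate()

    schema = StructType([
        StructField("product_id", StringType()),
        StructField("price", DoubleType()),
    ])

    events = (spark.readStream
              .format("kafka")
              .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
              .option("subscribe", "product-events")             # hypothetical topic
              .load()
              .select(from_json(col("value").cast("string"), schema).alias("e"))
              .select("e.*"))

    # Archival pattern: append micro-batches as Parquet (object storage in practice).
    query = (events.writeStream
             .format("parquet")
             .option("path", "/tmp/archive/product_events")
             .option("checkpointLocation", "/tmp/chk/product_events")
             .start())
    query.awaitTermination()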

Posted 4 weeks ago

Apply

4.0 - 8.0 years

0 - 1 Lacs

Chennai

Work from Office

Role & responsibilities: Key Skills: Big Data frameworks such as Hadoop, Spark, PySpark, and Hive - Mandatory
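For reference, the mandatory Hadoop/Spark/PySpark/Hive combination typically comes together in a Hive-enabled Spark session; a minimal illustrative sketch (the table name is hypothetical):

    from pyspark.sql import SparkSession

    # Minimal sketch: a Hive-enabled Spark session querying a warehouse table.
    spark = (SparkSession.builder
             .appName("hive-query-sketch")
             .enableHiveSupport()
             .getOrCreate())

    # 'sales.orders' is a hypothetical Hive table.
    df = spark.sql("SELECT order_id, amount FROM sales.orders WHERE amount > 0")
    df.show(10)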

Posted 1 month ago

Apply

5.0 - 10.0 years

16 - 20 Lacs

Pune

Work from Office

Job Title: Senior / Lead Data Engineer
Company: Synechron Technologies
Locations: Pune or Chennai
Experience: 5 to 12 years

Synechron Technologies is seeking an accomplished Senior or Lead Data Engineer with expertise in Java and Big Data technologies. The ideal candidate will have a strong background in Java Spark, with extensive experience working with big data frameworks such as Spark, Hadoop, HBase, Couchbase, and Phoenix. You will lead the design and development of scalable data solutions, ensuring efficient data processing and deployment in a modern technology environment.

Key Responsibilities:
- Lead the development and optimization of large-scale data pipelines using Java and Spark.
- Design, implement, and maintain data infrastructure leveraging Spark, Hadoop, HBase, Couchbase, and Phoenix.
- Collaborate with cross-functional teams to gather requirements and develop robust data solutions.
- Lead deployment automation and management using CI/CD tools including Jenkins, Bitbucket, Git, Docker, and OpenShift.
- Ensure the performance, security, and reliability of data processing systems.
- Provide technical guidance to team members and participate in code reviews.
- Stay updated on emerging technologies and leverage best practices in data engineering.

Qualifications & Skills:
- 5 to 14 years of experience as a Data Engineer or in a similar role.
- Strong expertise in Java programming and Apache Spark.
- Proven experience with Big Data technologies: Spark, Hadoop, HBase, Couchbase, and Phoenix.
- Hands-on experience with CI/CD tools: Jenkins, Bitbucket, Git, Docker, OpenShift.
- Solid understanding of data modeling, ETL workflows, and data architecture.
- Excellent problem-solving, communication, and leadership skills.

SYNECHRON'S DIVERSITY & INCLUSION STATEMENT
Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative 'Same Difference' is committed to fostering an inclusive culture promoting equality, diversity, and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, more successful businesses as a global company. We encourage applicants from across diverse backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements, and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.

Candidate Application Notice

Posted 1 month ago

Apply

5.0 - 10.0 years

8 - 15 Lacs

Kochi

Remote

We are seeking a highly skilled ETL/Data Engineer with expertise in Informatica DEI BDM to design and implement robust data pipelines handling medium to large-scale datasets. The role involves building efficient ETL frameworks that support batch …

Posted 1 month ago

Apply

7.0 - 10.0 years

10 - 15 Lacs

Bengaluru

Work from Office

Job Title: Senior Engineer | Java and Big Data
Company Name: Impetus Technologies

Job Description: Impetus Technologies is seeking a skilled Senior Engineer with expertise in Java and Big Data technologies. As a Senior Engineer, you will be responsible for designing, developing, and deploying scalable data processing applications using Java and Big Data frameworks. Your role will involve collaborating with cross-functional teams to gather requirements, developing high-quality code, and optimizing data processing workflows. You will also mentor junior engineers and contribute to architectural decisions to enhance the performance and scalability of our systems.

Key Responsibilities:
- Design, develop, and maintain high-performance applications using Java and Big Data technologies.
- Implement data ingestion and processing workflows utilizing frameworks like Hadoop and Spark.
- Collaborate with the data architecture team to define data models and ensure efficient data storage and retrieval.
- Optimize existing applications for performance, scalability, and reliability.
- Mentor and guide junior engineers, providing technical leadership and fostering a culture of continuous improvement.
- Participate in code reviews and ensure best practices for coding, testing, and documentation are followed.
- Stay current with technology trends in Java and Big Data, and evaluate new tools and methodologies to enhance system capabilities.

Skills and Tools Required:
- Strong proficiency in the Java programming language, with experience building complex applications.
- Hands-on experience with Big Data technologies such as Apache Hadoop, Apache Spark, and Apache Kafka.
- Understanding of distributed computing concepts and technologies.
- Experience with data processing frameworks and libraries, including MapReduce and Spark SQL.
- Familiarity with storage and database systems such as HDFS, NoSQL databases (like Cassandra or MongoDB), and SQL databases.
- Strong problem-solving skills and the ability to troubleshoot complex issues.
- Knowledge of version control systems like Git, and familiarity with CI/CD pipelines.
- Excellent communication and teamwork skills to collaborate effectively with peers and stakeholders.
- A bachelor's or master's degree in Computer Science, Engineering, or a related field is preferred.

Roles and Responsibilities

About the Role:
- You will be responsible for designing and developing scalable Java applications to handle Big Data processing.
- Your role will involve collaborating with cross-functional teams to implement innovative solutions that align with business objectives.
- You will also play a key role in ensuring code quality and performance through best practices and testing methodologies.

About the Team:
- You will work with a diverse team of skilled engineers, data scientists, and product managers who are passionate about technology and innovation.
- The team fosters a collaborative environment where knowledge sharing and continuous learning are encouraged.
- Regular brainstorming sessions and technical workshops will provide opportunities to enhance your skills and stay updated with industry trends.

You are responsible for:
- Developing and maintaining high-performance Java applications that process large volumes of data efficiently.
- Implementing data integration and processing frameworks using Big Data technologies such as Hadoop and Spark.
- Troubleshooting and optimizing existing systems to improve performance and scalability.

To succeed in this role, you should have the following:
- Strong proficiency in Java and experience with Big Data technologies and frameworks.
- Solid understanding of data structures, algorithms, and software design principles.
- Excellent problem-solving skills and the ability to work independently as well as part of a team.
- Familiarity with cloud platforms and distributed computing concepts is a plus.
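As a quick illustration of the Spark SQL skill listed above, a minimal aggregation sketch in PySpark (the input path and column names are hypothetical):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("spark-sql-sketch").getOrCreate()

    # Hypothetical input: a Parquet dataset of click events.
    clicks = spark.read.parquet("/data/clicks")
    clicks.createOrReplaceTempView("clicks")

    # Spark SQL: daily view counts per page, the kind of workload MapReduce once served.
    daily = spark.sql("""
        SELECT page, to_date(ts) AS day, COUNT(*) AS views
        FROM clicks
        GROUP BY page, to_date(ts)
        ORDER BY views DESC
    """)
    daily.show(20)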

Posted 1 month ago

Apply

5.0 - 8.0 years

8 - 16 Lacs

Hyderabad, Bengaluru

Hybrid

Mandatory skillset:
- 5+ years of QA experience with a strong focus on Big Data testing, particularly in Data Lake environments on the Azure cloud platform.
- Proficient in Azure Data Factory, Azure Synapse Analytics, and Databricks for big data processing and scaled data-quality checks.
- Proficiency in SQL, capable of writing and optimizing both simple and complex queries for data validation and testing purposes.
- Proficient in Python/PySpark, with experience in data manipulation and transformation, and a demonstrated ability to write and execute test scripts for data processing and validation.
- Hands-on experience with functional and system integration testing in big data environments, ensuring seamless data flow and accuracy across multiple systems.
- Knowledge of and ability to design and execute test cases in a behaviour-driven development environment.
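Since the role centres on writing test scripts for data validation, here is a minimal illustrative PySpark sketch of such checks (the paths and key column are hypothetical):

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    spark = SparkSession.builder.appName("dq-checks-sketch").getOrCreate()

    # Hypothetical source and target of a data-lake load to validate.
    src = spark.read.parquet("/lake/raw/customers")
    tgt = spark.read.parquet("/lake/curated/customers")

    # Check 1: row counts must match after the load.
    assert src.count() == tgt.count(), "row count mismatch"

    # Check 2: the key column must be non-null and unique in the target.
    assert tgt.filter(col("customer_id").isNull()).count() == 0, "null keys"
    assert tgt.select("customer_id").distinct().count() == tgt.count(), "duplicate keys"

    print("all data-quality checks passed")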

Posted 1 month ago

Apply

3.0 - 8.0 years

4 - 8 Lacs

Mumbai, New Delhi, Bengaluru

Work from Office

Years: 3+ | Notice Period: Immediate joiners

Job description: We are seeking a highly skilled and experienced Informatica Developer to join our team. The ideal candidate will have a strong background in data integration, ETL processes, and data warehousing, with at least 3 years of hands-on experience in Informatica development.

Key Responsibilities:
- Design and Development: develop, implement, and maintain ETL processes using Informatica PowerCenter and other Informatica tools.
- Data Integration: integrate data from various sources, ensuring data quality and consistency.
- Performance Tuning: optimize ETL processes for performance and scalability.
- Collaboration: work closely with business analysts, data architects, and other stakeholders to understand data requirements and deliver solutions.
- Documentation: create and maintain technical documentation for ETL processes and data flows.
- Support and Maintenance: provide ongoing support and maintenance for ETL processes, including troubleshooting and resolving issues.
- Mentorship: mentor junior developers and provide technical guidance to the team.

Technical Skills:
- Proficiency in Informatica PowerCenter, Informatica Cloud, and other Informatica/ETL tools.
- Strong SQL, Impala, Hive, and PL/SQL skills.
- Experience with data warehousing concepts and BI tools.
- Knowledge of Unix/Linux.
- Knowledge of Python.
- Big Data frameworks: proficiency in Sqoop, Spark, Hadoop, Hive, and Impala.
- Programming: strong coding skills in Python (including PySpark) and Airflow.

Location: Remote
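Since the posting pairs Airflow with PySpark, here is a minimal, illustrative DAG sketch (the DAG id, schedule, and script path are hypothetical):

    from datetime import datetime
    from airflow import DAG
    from airflow.operators.bash import BashOperator

    # Minimal sketch: a daily DAG that submits a PySpark ETL job.
    with DAG(
        dag_id="daily_etl_sketch",          # hypothetical DAG id
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        run_etl = BashOperator(
            task_id="run_pyspark_etl",
            bash_command="spark-submit /opt/jobs/etl.py",  # hypothetical script path
        )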

Posted 1 month ago

Apply

5.0 - 8.0 years

20 - 35 Lacs

Pune, Chennai, Bengaluru

Hybrid

Greetings from LTIMindtree!

About the job: Are you looking for a new career challenge? With LTIMindtree, are you ready to embark on a data-driven career? Working for a leading global manufacturing client, providing an engaging product experience through best-in-class PIM implementation and building rich, relevant, and trusted product information across channels and digital touchpoints so end customers can make informed purchase decisions, will surely be a fulfilling experience.

F2F Drive on 28-Jun-25 at Pune & Mumbai!!
Key Skill: Hadoop, Spark, SparkSQL, Scala

Interested candidates, kindly apply via the link below and share an updated CV to Hemalatha1@ltimindtree.com
https://forms.office.com/r/zQucNTxa2U

Job Description: We are looking for professionals who have:
- Experience in the Scala programming language
- Experience in Big Data technologies including Spark, Scala, and Kafka
- A good understanding of organizational strategy, architecture patterns (Microservices, Event Driven), and technology choices, coaching the team in execution in alignment with these guidelines
- The ability to apply organizational technology patterns effectively in projects and make recommendations on alternate options
- Hands-on experience working with large volumes of data, including different patterns of data ingestion, processing (batch and real-time), movement, storage, and access, both internal and external to the BU, and the ability to make independent decisions within the scope of the project
- A good understanding of data structures and algorithms
- The ability to test, debug, and fix issues within established SLAs
- The ability to design software that is easily testable and observable
- An understanding of how team goals fit a business need
- The ability to identify business problems at the project level and provide solutions
- An understanding of data access patterns, streaming technology, data validation, data performance, and cost optimization
- Strong SQL skills

Why join us?
- Work on industry-leading implementations for Tier-1 clients
- Accelerated career growth and global exposure
- Collaborative, inclusive work environment rooted in innovation
- Exposure to a best-in-class automation framework
- Innovation-first culture: we embrace automation, AI insights, and clean data

Know someone who fits this perfectly? Tag them – let's connect the right talent with the right opportunity. DM or email to know more. Let's build something great together.

Posted 1 month ago

Apply

2.0 - 6.0 years

6 - 8 Lacs

Kolkata

Work from Office

Location: Taratola, Kolkata. Experience in Mash-Up & Services Development, Java Web Services, SQL, MySQL, Oracle, React, UI & Backend, Big Data storage, Spring Boot, and JavaScript; Python optional. Required candidate profile: experience in Mash-Up & Services Development, React, MySQL, Oracle, Java Web Services, SQL, UI & Backend, Big Data storage, Spring Boot, and JavaScript.

Posted 1 month ago

Apply

5.0 - 8.0 years

20 - 35 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

Greetings from LTIMindtree!

About the job: Are you looking for a new career challenge? With LTIMindtree, are you ready to embark on a data-driven career? Working for a leading global manufacturing client, providing an engaging product experience through best-in-class PIM implementation and building rich, relevant, and trusted product information across channels and digital touchpoints so end customers can make informed purchase decisions, will surely be a fulfilling experience.

Location: Pan India

Interested candidates, kindly apply via the link below and share an updated CV to Hemalatha1@ltimindtree.com
https://forms.office.com/r/JhYtz7Vzbn

Job Description: Key Skills: Cloudera, Spark, Hive, Sqoop jobs
Mandatory Skills: Cloudera administration - Hadoop, Hive, Impala, Spark, Sqoop. Maintaining/creating jobs and migration, CI/CD pipelines, monitoring, and performance tuning.

Why join us?
- Work on industry-leading implementations for Tier-1 clients
- Accelerated career growth and global exposure
- Collaborative, inclusive work environment rooted in innovation
- Exposure to a best-in-class automation framework
- Innovation-first culture: we embrace automation, AI insights, and clean data

Know someone who fits this perfectly? Tag them – let's connect the right talent with the right opportunity. DM or email to know more. Let's build something great together.

Posted 1 month ago

Apply

4.0 - 7.0 years

10 - 20 Lacs

Hyderabad

Work from Office

We are seeking a skilled Data Engineer with extensive experience in the Cloudera Data Platform (CDP) to join our dynamic team. The ideal candidate will have over four years of experience in designing, developing, and managing data pipelines, and will be proficient in big data technologies. This role requires a deep understanding of data engineering best practices and a passion for optimizing data flow and collection across a diverse range of sources.

Required Skills and Qualifications:
- Experience: 4+ years of experience in data engineering, with a strong focus on big data technologies.
- Cloudera Expertise: proficient in the Cloudera Data Platform (CDP) and its ecosystem, including Hadoop, Spark, HDFS, Hive, Impala, and other relevant tools.
- Programming Languages: strong programming skills in Python, Scala, or Java.
- ETL Tools: experience with ETL tools and processes.
- Data Warehousing: knowledge of data warehousing concepts and experience with data modeling.
- SQL: advanced SQL skills for querying and manipulating large datasets.
- Linux/Unix: proficiency in Linux/Unix shell scripting.
- Version Control: familiarity with version control systems like Git.
- Problem-Solving: strong analytical and problem-solving skills.
- Communication: excellent verbal and written communication skills, with the ability to explain complex technical concepts to non-technical stakeholders.

Preferred Qualifications:
- Cloud Experience: experience with cloud platforms such as AWS, Azure, or Google Cloud.
- Data Streaming: experience with real-time data streaming technologies like Kafka.
- DevOps: familiarity with DevOps practices and tools such as Docker, Kubernetes, and CI/CD pipelines.
- Education: Bachelor's degree in Computer Science, Information Technology, or a related field.

Main Skills: Hadoop, Spark, Hive, Impala, Scala, Python, Java, Linux

Roles and Responsibilities:
- Develop and maintain scalable data pipelines using Cloudera Data Platform (CDP) components.
- Design and implement ETL processes to extract, transform, and load data from various data sources into the data lake or data warehouse.
- Optimize and troubleshoot data workflows for performance and efficiency.
- Manage and administer Hadoop clusters within the Cloudera environment.
- Monitor and ensure the health and performance of the Cloudera platform.
- Implement data security best practices, including encryption, data masking, and user access control.
- Work closely with data scientists, analysts, and other stakeholders to understand data requirements and provide the necessary support.
- Collaborate with cross-functional teams to design and deploy big data solutions that meet business needs.
- Participate in code reviews, provide feedback, and contribute to team knowledge sharing.
- Create and maintain comprehensive documentation of data engineering processes, data architecture, and system configurations.
- Provide support for production data pipelines, including troubleshooting and resolving issues as they arise.
- Train and mentor junior data engineers, fostering a culture of continuous learning and improvement.
- Stay up to date with the latest industry trends and technologies related to data engineering and big data.
- Propose and implement improvements to existing data pipelines and architectures.
- Explore and integrate new tools and technologies to enhance the capabilities of the data engineering team.
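For the pipeline-optimization responsibilities above, a small illustrative PySpark sketch of a partitioned load (paths and columns are hypothetical):

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import to_date, col

    spark = SparkSession.builder.appName("partitioned-load-sketch").getOrCreate()

    # Hypothetical raw extract to be landed in the curated zone.
    raw = spark.read.option("header", True).csv("/landing/transactions.csv")

    curated = (raw
               .withColumn("txn_date", to_date(col("txn_ts")))
               .repartition("txn_date"))  # co-locate each day's rows before writing

    # Partitioned Parquet keeps scans pruned to the dates a query touches.
    (curated.write
     .mode("overwrite")
     .partitionBy("txn_date")
     .parquet("/curated/transactions"))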

Posted 1 month ago

Apply

4.0 - 7.0 years

12 - 16 Lacs

Bengaluru

Work from Office

Reference: 25000AXZ

Responsibilities:
- Understand user expectations, develop technical solutions, and raise clarifications with stakeholders.
- Respect all development practices and guidelines.
- Ensure principles of security and architecture are maintained.
- Work effectively with other team members by sharing best practices.

Required Profile - Technical Skills (6 to 8 years of experience):
- Scala - Expert level
- Big Data frameworks - Advanced
- REST services - Advanced
- Material Design - Expert
- SQL - Advanced
- UX - Added advantage
- Experience in Agile methodology, JIRA - Added advantage
- ReactJS - Added advantage
- PL/SQL - Added advantage

Why join us? We are committed to creating a diverse environment and are proud to be an equal opportunity employer. All qualified applicants receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.

Business insight: At Société Générale, we are convinced that people are drivers of change, and that the world of tomorrow will be shaped by all their initiatives, from the smallest to the most ambitious. Whether you're joining us for a period of months, years, or your entire career, together we can have a positive impact on the future. Creating, daring, innovating, and taking action are part of our DNA. If you too want to be directly involved, grow in a stimulating and caring environment, feel useful on a daily basis, and develop or strengthen your expertise, you will feel right at home with us!

Still hesitating? You should know that our employees can dedicate several days per year to solidarity actions during their working hours, including sponsoring people struggling with their orientation or professional integration, participating in the financial education of young apprentices, and sharing their skills with charities. There are many ways to get involved. We are committed to supporting the acceleration of our Group's ESG strategy by implementing ESG principles in all our activities and policies. These are translated into our business activity (ESG assessment, reporting, project management, or IT activities), our work environment, and our responsible practices for environmental protection.

Diversity and Inclusion: We are an equal opportunities employer and we are proud to make diversity a strength for our company. Societe Generale is committed to recognizing and promoting all talents, regardless of their beliefs, age, disability, parental status, ethnic origin, nationality, gender identity, sexual orientation, membership of a political, religious, trade union or minority organisation, or any other characteristic that could be subject to discrimination.

Posted 1 month ago

Apply

5.0 - 8.0 years

20 - 35 Lacs

Pune, Chennai, Bengaluru

Hybrid

Greetings from LTIMindtree!

About the job: Are you looking for a new career challenge? With LTIMindtree, are you ready to embark on a data-driven career? Working for a leading global manufacturing client, providing an engaging product experience through best-in-class PIM implementation and building rich, relevant, and trusted product information across channels and digital touchpoints so end customers can make informed purchase decisions, will surely be a fulfilling experience.

Location: Pan India
Key Skill: Spark + Python

Interested candidates, kindly apply via the link below and share an updated CV to Hemalatha1@ltimindtree.com
https://forms.office.com/r/zQucNTxa2U

Job Description: Key Skills: Hadoop, Spark, SparkSQL, Python
Mandatory Skills:
- Relevant experience in ETL and data engineering
- Strong knowledge of Spark and Python
- Strong experience in Hive/SQL and PL/SQL
- Good understanding of ETL & DW concepts and Unix scripting
- Design, implement, and maintain data pipelines to meet business requirements
- Convert business needs into complex technical PySpark code
- Ability to write complex SQL queries for reporting purposes
- Monitor PySpark code performance and troubleshoot issues

Why join us?
- Work on industry-leading implementations for Tier-1 clients
- Accelerated career growth and global exposure
- Collaborative, inclusive work environment rooted in innovation
- Exposure to a best-in-class automation framework
- Innovation-first culture: we embrace automation, AI insights, and clean data

Know someone who fits this perfectly? Tag them – let's connect the right talent with the right opportunity. DM or email to know more. Let's build something great together.
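A compact, illustrative sketch of the kind of PySpark pipeline and reporting SQL this role describes (the schema and table names are hypothetical):

    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .appName("etl-reporting-sketch")
             .enableHiveSupport()
             .getOrCreate())

    # Hypothetical staging table loaded by an upstream ingest job.
    orders = spark.table("staging.orders")

    # Business rule as PySpark: keep completed orders only, derive net amount.
    completed = (orders
                 .filter(orders.status == "COMPLETED")
                 .withColumn("net_amount", orders.gross_amount - orders.discount))
    completed.createOrReplaceTempView("completed_orders")

    # Reporting query: top customers by net revenue, expressed as plain SQL.
    report = spark.sql("""
        SELECT customer_id, SUM(net_amount) AS net_revenue
        FROM completed_orders
        GROUP BY customer_id
        ORDER BY net_revenue DESC
        LIMIT 100
    """)
    report.write.mode("overwrite").saveAsTable("reporting.top_customers")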

Posted 1 month ago

Apply

5.0 - 8.0 years

20 - 35 Lacs

Pune, Chennai, Bengaluru

Hybrid

Greetings from LTIMindtree!

About the job: Are you looking for a new career challenge? With LTIMindtree, are you ready to embark on a data-driven career? Working for a leading global manufacturing client, providing an engaging product experience through best-in-class PIM implementation and building rich, relevant, and trusted product information across channels and digital touchpoints so end customers can make informed purchase decisions, will surely be a fulfilling experience.

Location: Pan India
Key Skill: Hadoop, Spark, SparkSQL, Scala

Interested candidates, kindly apply via the link below and share an updated CV to Hemalatha1@ltimindtree.com
https://forms.office.com/r/zQucNTxa2U

Job Description: We are looking for professionals who have:
- Experience in the Scala programming language
- Experience in Big Data technologies including Spark, Scala, and Kafka
- A good understanding of organizational strategy, architecture patterns (Microservices, Event Driven), and technology choices, coaching the team in execution in alignment with these guidelines
- The ability to apply organizational technology patterns effectively in projects and make recommendations on alternate options
- Hands-on experience working with large volumes of data, including different patterns of data ingestion, processing (batch and real-time), movement, storage, and access, both internal and external to the BU, and the ability to make independent decisions within the scope of the project
- A good understanding of data structures and algorithms
- The ability to test, debug, and fix issues within established SLAs
- The ability to design software that is easily testable and observable
- An understanding of how team goals fit a business need
- The ability to identify business problems at the project level and provide solutions
- An understanding of data access patterns, streaming technology, data validation, data performance, and cost optimization
- Strong SQL skills

Why join us?
- Work on industry-leading implementations for Tier-1 clients
- Accelerated career growth and global exposure
- Collaborative, inclusive work environment rooted in innovation
- Exposure to a best-in-class automation framework
- Innovation-first culture: we embrace automation, AI insights, and clean data

Know someone who fits this perfectly? Tag them – let's connect the right talent with the right opportunity. DM or email to know more. Let's build something great together.

Posted 1 month ago

Apply

3.0 - 8.0 years

7 - 17 Lacs

Bengaluru

Work from Office

Primary Skills: #Snowflake, #Cloud (#AWS, #GCP), #Scala, #Python, #Spark, #BigData, and #SQL.

Responsibilities:
- Strong development experience in Snowflake, Cloud (AWS, GCP), Scala, Python, Spark, Big Data, and SQL.
- Work closely with stakeholders, including product managers and designers, to align technical solutions with business goals.
- Maintain code quality through reviews and make architectural decisions that impact scalability and performance.
- Perform root cause analysis for any critical defects; address technical challenges, optimize workflows, and resolve issues efficiently.
- Expert in Agile and Waterfall program/project implementation.
- Manage strategic and tactical relationships with program stakeholders.
- Successfully execute projects within strict deadlines while managing intense pressure.
- Good understanding of the SDLC (Software Development Life Cycle).
- Identify potential technical risks and implement mitigation strategies.
- Excellent verbal, written, and interpersonal communication abilities, coupled with strong problem-solving, facilitation, and analytical skills.
- Cloud management activities: a good understanding of cloud architecture/containerization and application management on AWS and Kubernetes, with an in-depth understanding of CI/CD pipelines and review tools like Jenkins and Bamboo/DevOps.
- Skilled at adapting to evolving work conditions and fast-paced challenges.

Posted 1 month ago

Apply

4.0 - 6.0 years

12 - 16 Lacs

Chennai

Work from Office

We are seeking a skilled Data Engineer who can function as a Data Architect, designing scalable data pipelines, table structures, and ETL workflows. The ideal candidate will be responsible for recommending cost-effective and high-performance data architecture solutions, collaborating with cross-functional teams to enable efficient analytics and data science initiatives.

Key Responsibilities:
- Design and implement ETL workflows, data pipelines, and table structures to support business analytics and data science.
- Optimize data storage, retrieval, and processing for cost-efficiency and high performance.
- Collaborate with Analytics and Data Science teams for feature engineering and KPI computations.
- Develop and maintain data models for structured and unstructured data.
- Ensure data quality, integrity, and security across systems.
- Work with cloud platforms (AWS/Azure/GCP) to design and manage scalable data architectures.

Technical Skills Required:
- SQL & Python: strong proficiency in writing optimized queries and scripts.
- PySpark: hands-on experience with distributed data processing.
- Cloud technologies (AWS/Azure/GCP): experience with cloud-based data solutions.
- Spark & Airflow: experience with big data frameworks and workflow orchestration.
- Gen AI (preferred): exposure to generative AI applications is a plus.

Preferred Qualifications:
- Experience in data modeling, ETL optimization, and performance tuning.
- Strong problem-solving skills and ability to work in a fast-paced environment.
- Prior experience working with large-scale data processing.
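For the feature-engineering and KPI-computation work mentioned above, a brief illustrative PySpark window-function sketch (the metric, paths, and columns are hypothetical):

    from pyspark.sql import SparkSession, Window
    from pyspark.sql.functions import avg, col

    spark = SparkSession.builder.appName("kpi-sketch").getOrCreate()

    # Hypothetical daily revenue per store.
    daily = spark.read.parquet("/curated/daily_store_revenue")

    # KPI: 7-day moving average of revenue per store.
    w = Window.partitionBy("store_id").orderBy("day").rowsBetween(-6, 0)
    kpis = daily.withColumn("revenue_7d_avg", avg(col("revenue")).over(w))

    kpis.write.mode("overwrite").parquet("/analytics/store_kpis")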

Posted 1 month ago

Apply

4.0 - 6.0 years

12 - 16 Lacs

Chennai

Work from Office

Job Information:
Job Opening ID: ZR_2441_JOB
Date Opened: 21/03/2025
Industry: IT Services
Work Experience: 4-6 years
Job Title: Data Engineer with Gen AI
City: Chennai
Province: Tamil Nadu
Country: India
Postal Code: 600001
Number of Positions: 1

We are seeking a skilled Data Engineer who can function as a Data Architect, designing scalable data pipelines, table structures, and ETL workflows. The ideal candidate will be responsible for recommending cost-effective and high-performance data architecture solutions, collaborating with cross-functional teams to enable efficient analytics and data science initiatives.

Key Responsibilities:
- Design and implement ETL workflows, data pipelines, and table structures to support business analytics and data science.
- Optimize data storage, retrieval, and processing for cost-efficiency and high performance.
- Collaborate with Analytics and Data Science teams for feature engineering and KPI computations.
- Develop and maintain data models for structured and unstructured data.
- Ensure data quality, integrity, and security across systems.
- Work with cloud platforms (AWS/Azure/GCP) to design and manage scalable data architectures.

Technical Skills Required:
- SQL & Python: strong proficiency in writing optimized queries and scripts.
- PySpark: hands-on experience with distributed data processing.
- Cloud technologies (AWS/Azure/GCP): experience with cloud-based data solutions.
- Spark & Airflow: experience with big data frameworks and workflow orchestration.
- Gen AI (preferred): exposure to generative AI applications is a plus.

Preferred Qualifications:
- Experience in data modeling, ETL optimization, and performance tuning.
- Strong problem-solving skills and ability to work in a fast-paced environment.
- Prior experience working with large-scale data processing.

Posted 1 month ago

Apply

5.0 - 8.0 years

20 - 35 Lacs

Pune, Chennai

Work from Office

Greetings from LTIMindtree! We are hiring Big Data professionals!

Interested candidates, kindly apply via the link below and share an updated CV to Hemalatha1@ltimindtree.com
https://forms.office.com/r/zQucNTxa2U

Experience: 3 to 8 years
Key Skills: Spark + Python, Spark + Java, and Spark + Scala
Face-to-face location: Pune, Chennai

JD 1: Mandatory Skills: Hadoop, Spark, SparkSQL, Java
1. Hands-on experience with Java and Big Data technologies including Spark, Hive, and Impala
2. Experience with a streaming framework such as Kafka
3. Hands-on experience with object storage; should be able to develop data archival and retrieval patterns
4. Good to have: experience with any public cloud platform such as AWS, Azure, or GCP
5. Ready to upskill as needed on project technologies, e.g., Ab Initio

JD 2: Mandatory Skills: Hadoop, Spark, SparkSQL, Python
- Relevant experience in ETL and data engineering
- Strong knowledge of Spark and Python
- Strong experience in Hive/SQL and PL/SQL
- Good understanding of ETL & DW concepts and Unix scripting
- Design, implement, and maintain data pipelines to meet business requirements
- Convert business needs into complex technical PySpark code
- Ability to write complex SQL queries for reporting purposes
- Monitor PySpark code performance and troubleshoot issues

JD 3: Mandatory Skills: Hadoop, Spark, SparkSQL, Scala
- Experience in the Scala programming language
- Experience in Big Data technologies including Spark, Scala, and Kafka
- A good understanding of organizational strategy, architecture patterns (Microservices, Event Driven), and technology choices, coaching the team in execution in alignment with these guidelines
- The ability to apply organizational technology patterns effectively in projects and make recommendations on alternate options
- Hands-on experience working with large volumes of data, including different patterns of data ingestion, processing (batch and real-time), movement, storage, and access, both internal and external to the BU, and the ability to make independent decisions within the scope of the project
- A good understanding of data structures and algorithms
- The ability to test, debug, and fix issues within established SLAs
- The ability to design software that is easily testable and observable
- An understanding of how team goals fit a business need
- The ability to identify business problems at the project level and provide solutions
- An understanding of data access patterns, streaming technology, data validation, data performance, and cost optimization
- Strong SQL skills

Posted 1 month ago

Apply

5.0 - 8.0 years

20 - 35 Lacs

Pune, Chennai, Bengaluru

Hybrid

Greetings from LTIMindtree! We are hiring Big Data professionals!

Experience: 3 to 8 years
Key Skills: Spark + Python, Spark + Java, and Spark + Scala
Face-to-face location: Pune, Chennai

Interested candidates, kindly share your resume and apply via the link below
https://forms.office.com/r/zQucNTxa2U

JD 1: Hadoop, Spark, SparkSQL, Java. Skills needed:
1. Hands-on experience with Java and Big Data technologies including Spark, Hive, and Impala
2. Experience with a streaming framework such as Kafka
3. Hands-on experience with object storage; should be able to develop data archival and retrieval patterns
4. Good to have: experience with any public cloud platform such as AWS, Azure, or GCP
5. Ready to upskill as needed on project technologies, e.g., Ab Initio

JD 2: Hadoop, Spark, SparkSQL, Python. Mandatory Skills:
- Relevant experience in ETL and data engineering
- Strong knowledge of Spark and Python
- Strong experience in Hive/SQL and PL/SQL
- Good understanding of ETL & DW concepts and Unix scripting
- Design, implement, and maintain data pipelines to meet business requirements
- Convert business needs into complex technical PySpark code
- Ability to write complex SQL queries for reporting purposes
- Monitor PySpark code performance and troubleshoot issues

JD 3: Hadoop, Spark, SparkSQL, Scala
- Experience in the Scala programming language
- Experience in Big Data technologies including Spark, Scala, and Kafka
- A good understanding of organizational strategy, architecture patterns (Microservices, Event Driven), and technology choices, coaching the team in execution in alignment with these guidelines
- The ability to apply organizational technology patterns effectively in projects and make recommendations on alternate options
- Hands-on experience working with large volumes of data, including different patterns of data ingestion, processing (batch and real-time), movement, storage, and access, both internal and external to the BU, and the ability to make independent decisions within the scope of the project
- A good understanding of data structures and algorithms
- The ability to test, debug, and fix issues within established SLAs
- The ability to design software that is easily testable and observable
- An understanding of how team goals fit a business need
- The ability to identify business problems at the project level and provide solutions
- An understanding of data access patterns, streaming technology, data validation, data performance, and cost optimization
- Strong SQL skills

Posted 1 month ago

Apply

5.0 - 8.0 years

10 - 19 Lacs

Pune, Chennai, Bengaluru

Hybrid

Greetings from LTIMindtree! We are hiring Big Data professionals!

Interested candidates, kindly share your resume and apply via the link below
https://forms.office.com/r/zQucNTxa2U

Experience: 5 to 8 years
Key Skills: Spark + Python, Spark + Java, and Spark + Scala
Location: Pune, Chennai

JD 1: Hadoop, Spark, SparkSQL, Java. Skills needed:
1. Hands-on experience with Java and Big Data technologies including Spark, Hive, and Impala
2. Experience with a streaming framework such as Kafka
3. Hands-on experience with object storage; should be able to develop data archival and retrieval patterns
4. Good to have: experience with any public cloud platform such as AWS, Azure, or GCP
5. Ready to upskill as needed on project technologies, e.g., Ab Initio

JD 2: Hadoop, Spark, SparkSQL, Python. Mandatory Skills:
- Relevant experience in ETL and data engineering
- Strong knowledge of Spark and Python
- Strong experience in Hive/SQL and PL/SQL
- Good understanding of ETL & DW concepts and Unix scripting
- Design, implement, and maintain data pipelines to meet business requirements
- Convert business needs into complex technical PySpark code
- Ability to write complex SQL queries for reporting purposes
- Monitor PySpark code performance and troubleshoot issues

JD 3: Hadoop, Spark, SparkSQL, Scala
- Experience in the Scala programming language
- Experience in Big Data technologies including Spark, Scala, and Kafka
- A good understanding of organizational strategy, architecture patterns (Microservices, Event Driven), and technology choices, coaching the team in execution in alignment with these guidelines
- The ability to apply organizational technology patterns effectively in projects and make recommendations on alternate options
- Hands-on experience working with large volumes of data, including different patterns of data ingestion, processing (batch and real-time), movement, storage, and access, both internal and external to the BU, and the ability to make independent decisions within the scope of the project
- A good understanding of data structures and algorithms
- The ability to test, debug, and fix issues within established SLAs
- The ability to design software that is easily testable and observable
- An understanding of how team goals fit a business need
- The ability to identify business problems at the project level and provide solutions
- An understanding of data access patterns, streaming technology, data validation, data performance, and cost optimization
- Strong SQL skills

Posted 2 months ago

Apply

1.0 - 4.0 years

15 - 20 Lacs

Bengaluru

Work from Office

Job Area: Information Technology Group > IT Data Engineer

General Summary: We are looking for a savvy Data Engineer to join our analytics team. The candidate will be responsible for expanding and optimizing our data and data pipelines, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate has Python development experience and is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. We believe a candidate with a solid software engineering/development background is a great fit; however, we also recognize that each candidate has a unique blend of skills. The Data Engineer will work with database architects, data analysts, and data scientists on data initiatives and will ensure optimal, consistent data delivery throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams. The right candidate will be excited by the prospect of optimizing data to support our next generation of products and data initiatives.

Responsibilities for Data Engineer:
- Create and maintain optimal data pipelines; assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing for greater scalability, etc.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Work with stakeholders, including the Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
- Work with data and analytics experts to strive for greater functionality in our data systems.
- Perform ad hoc analysis and report QA testing.
- Follow Agile/Scrum development methodologies within analytics projects.
- Working SQL knowledge and experience with relational databases, query authoring (SQL), and familiarity with a variety of databases.
- Experience building and optimizing big data pipelines and data sets.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytic skills for working with unstructured datasets.
- Good communication skills; a great team player with the hunger to learn newer ways of problem solving.
- Build processes supporting data transformation, data structures, metadata, dependency, and workload management.
- A successful history of manipulating, processing, and extracting value from large, disconnected datasets.
- Working knowledge of Unix or shell scripting.
- Construct methods to test user acceptance and usage of data.
- Knowledge of predictive analytics tools and problem solving using statistical methods is a plus.
- Experience supporting and working with cross-functional teams in a dynamic environment.
- Demonstrated understanding of the Software Development Life Cycle.
- Ability to work independently and with a team in a diverse, fast-paced, and collaborative environment.
- Excellent written and verbal communication skills.
- A quick learner with the ability to handle development tasks with minimal or no supervision.
- Ability to multitask.

We are looking for a candidate with 7+ years of experience in a Data Engineering role who also has experience with the following software/tools:
- Python, Java, etc.
- Google Cloud Platform.
- Big data frameworks and tools: Apache Hadoop/Beam/Spark/Kafka.
- Workflow management and scheduling using Airflow/Prefect/Dagster.
- Databases such as BigQuery and ClickHouse.
- Container orchestration (Kubernetes).
- Optional: one or more BI tools (Tableau, Splunk, or equivalent).

Minimum Qualifications:
- 4+ years of IT-related work experience with a Bachelor's degree in Computer Engineering, Computer Science, Information Systems, or a related field, OR 6+ years of IT-related work experience without a Bachelor's degree.
- 2+ years of work experience with programming (e.g., Java, Python).
- 1+ year of work experience with SQL or NoSQL databases.
- 1+ year of work experience with data structures and algorithms.
- Bachelor's or Master's (or equivalent) degree in computer engineering or an equivalent stream.

Applicants: Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail disability-accomodations@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able to participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities. (Keep in mind that this email address is used to provide reasonable accommodations for individuals with disabilities. We will not respond here to requests for updates on applications or resume inquiries.) Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law. To all Staffing and Recruiting Agencies: please do not forward resumes to our jobs alias, Qualcomm employees, or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications. If you would like more information about this role, please contact Qualcomm Careers.

Posted 2 months ago

Apply

2 - 5 years

3 - 5 Lacs

Hyderabad

Work from Office

ABOUT AMGEN
Amgen harnesses the best of biology and technology to fight the world's toughest diseases and make people's lives easier, fuller, and longer. We discover, develop, manufacture, and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what's known today.

ABOUT THE ROLE
Role Description: We are looking for an Associate Data Engineer with deep expertise in writing data pipelines to build scalable, high-performance data solutions. The ideal candidate will be responsible for developing, optimizing, and maintaining complex data pipelines, integration frameworks, and metadata-driven architectures that enable seamless access and analytics. This role calls for a deep understanding of big data processing, distributed computing, data modeling, and governance frameworks to support self-service analytics, AI-driven insights, and enterprise-wide data management.

Roles & Responsibilities:
- Own development of complex ETL/ELT data pipelines to process large-scale datasets
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions
- Ensure data integrity, accuracy, and consistency through rigorous quality checks and monitoring
- Explore and implement new tools and technologies to enhance the ETL platform and pipeline performance
- Proactively identify and implement opportunities to automate tasks and develop reusable frameworks
- Be eager to understand the biotech/pharma domain and build highly efficient data pipelines to migrate and deploy complex data across systems
- Work in an Agile and Scaled Agile (SAFe) environment, collaborating with cross-functional teams, product owners, and Scrum Masters to deliver incremental value
- Use JIRA, Confluence, and Agile DevOps tools to manage sprints, backlogs, and user stories
- Support continuous improvement, test automation, and DevOps practices in the data engineering lifecycle
- Collaborate and communicate effectively with product teams and cross-functional teams to understand business requirements and translate them into technical solutions

Must-Have Skills:
- Experience in data engineering with a focus on Databricks, AWS, Python, SQL, and Scaled Agile methodologies
- Strong understanding of data processing and transformation with big data frameworks (Databricks, Apache Spark, Delta Lake, and distributed computing concepts)
- Strong understanding of AWS services, with the ability to demonstrate it
- Ability to quickly learn, adapt, and apply new technologies
- Strong problem-solving and analytical skills
- Excellent communication and teamwork skills
- Experience with the Scaled Agile Framework (SAFe), Agile delivery, and DevOps practices

Good-to-Have Skills:
- Data engineering experience in the biotechnology or pharma industry
- Exposure to APIs and full-stack development
- Experience with SQL/NoSQL databases and vector databases for large language models
- Experience with data modeling and performance tuning for both OLAP and OLTP databases
- Experience with software engineering best practices, including version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps

Education and Professional Certifications:
- Any degree and 2-5 years of experience
- AWS Certified Data Engineer preferred
- Databricks certification preferred
- Scaled Agile SAFe certification preferred

Soft Skills:
- Excellent analytical and troubleshooting skills
- Strong verbal and written communication skills
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully
- Team-oriented, with a focus on achieving team goals
- Ability to learn quickly, be organized, and be detail-oriented
- Strong presentation and public speaking skills

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
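For the Databricks/Delta Lake skills listed above, a minimal illustrative upsert sketch using the Delta Lake Python API (the paths and key column are hypothetical; assumes a Delta-enabled Spark session such as Databricks provides):

    from delta.tables import DeltaTable
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("delta-upsert-sketch").getOrCreate()

    # Hypothetical incremental batch of records to merge into a Delta table.
    updates = spark.read.parquet("/landing/patients_delta")

    target = DeltaTable.forPath(spark, "/lake/silver/patients")

    # Idempotent ELT pattern: update matching rows, insert new ones.
    (target.alias("t")
     .merge(updates.alias("u"), "t.patient_id = u.patient_id")
     .whenMatchedUpdateAll()
     .whenNotMatchedInsertAll()
     .execute())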

Posted 2 months ago

Apply

4 - 9 years

12 - 22 Lacs

Kochi, Bengaluru

Hybrid

Job Title / Primary Skill: Sr. Big Data Developer
Years of Experience: 4 to 8 years
Job Location: Bangalore/Kochi (Hybrid)
Must-Have Skills: BDF with Hadoop, Spark, Scala, and SQL
Educational Qualification: BE/BTech/MTech/MCA

Experience:
• Minimum 2+ years of experience in Big Data development.
• Good understanding of the SDLC.
• Experience with Agile or iterative development methodologies is a plus.
• Prior experience in the Healthcare Analytics domain is a plus.

Required Skills:
• Strong experience with big data technologies and associated tools such as Hadoop, Unix, HDFS, Hive, Impala, etc.
• Proficient in using Spark/Scala.
• Experience with data import/export using Sqoop or similar tools.
• Experience using Airflow, Jenkins, or similar automation tools.
• Excellent knowledge of SQL Server and database structures.
• Demonstrated ability to write and optimize T-SQL queries and stored procedures.
• Experience working with Jira/Confluence/GitLab.
• Excellent organizational skills and ability to handle multiple activities with changing priorities simultaneously.

Professional Attributes:
• Good communication skills.
• Team player willing to collaborate throughout all phases of development, testing, and deployment.
• Ability to solve problems and meet deadlines with minimal supervision.

Posted 2 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
