
344 HDFS Jobs - Page 2

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 - 9.0 years

0 Lacs

hyderabad, telangana

On-site

As a Data Engineer at our company, you will play a crucial role in designing and implementing large-scale systems, particularly complex data pipelines. You will drive projects from the initial stages to production, collaborating with stakeholders, analysts, and scientists to gather requirements and transform them into a data engineering roadmap. Effective communication, teamwork, and strong technical skills will be key in this role.

Your primary responsibilities will include collaborating with teams across different tech sites to achieve Objectives and Key Results (OKRs) that propel our company forward. You will enhance data layers to support the development of next-generation products resulting from our strategic initiatives, and you will design and build data pipelines that handle data extraction, cleansing, transformation, enrichment, and loading to meet specific business needs.

To excel in this role, you should possess strong SQL proficiency, a solid understanding of Data Warehousing and Data Modelling concepts, and hands-on experience with the Hadoop tech stack, including HDFS, Hive, Oozie, Airflow, MapReduce, and Spark. Proficiency in programming languages such as Python, Java, and Scala is essential, along with experience in building ETL data pipelines and in performance troubleshooting and tuning.

Preferred qualifications include familiarity with Data Warehouse (DW) or Business Intelligence (BI) tools like Anaplan, TM1, and Hyperion, as well as a track record of delivering high-quality end-to-end data solutions in an agile environment. You should be driven to optimize systems for efficiency, consistently propose and implement innovative ideas, mentor junior team members, and lead collaborative efforts with other engineers when necessary.

If you are looking for a challenging role where you can leverage your data engineering skills to drive impactful projects and contribute to the growth of our organization, we encourage you to apply for this position.
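By way of illustration, a minimal PySpark sketch of the extract-cleanse-enrich-load pattern this role describes; the paths, table names, and columns are hypothetical, not taken from the posting.

```python
from pyspark.sql import SparkSession, functions as F

# Hypothetical daily pipeline: extract a raw extract landed on HDFS,
# cleanse it, enrich with a Hive dimension, and load a warehouse table.
spark = (SparkSession.builder
         .appName("orders_etl")
         .enableHiveSupport()
         .getOrCreate())

# Extract: raw CSV on HDFS (path and schema are illustrative)
raw = spark.read.option("header", True).csv("hdfs:///landing/orders/2024-01-01/")

# Cleanse: drop rows missing keys and normalize types
clean = (raw.dropna(subset=["order_id", "customer_id"])
            .withColumn("amount", F.col("amount").cast("double"))
            .filter(F.col("amount") > 0))

# Enrich: join against a Hive dimension table
enriched = clean.join(spark.table("dw.dim_customer"), "customer_id", "left")

# Load: append the day's data into a partitioned warehouse table
(enriched.withColumn("ds", F.lit("2024-01-01"))
         .write.mode("append")
         .partitionBy("ds")
         .saveAsTable("dw.fct_orders"))
```

In a real pipeline the date literal would come from the scheduler (e.g., an Airflow execution date) rather than being hard-coded.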

Posted 1 week ago

Apply

6.0 - 8.0 years

0 Lacs

india

On-site

Job Description

As a Senior Software Engineer in our team, you will work with large-scale manufacturing data coming from our globally distributed plants. You will focus on building efficient, scalable, data-driven applications that, among other use cases, connect IoT devices, pre-process, standardize, or enrich data, feed ML models, or generate alerts for shopfloor operators. The data sets produced by these applications, whether data streams or data at rest, need to be highly available, reliable, consistent, and quality-assured so that they can serve as input to a wide range of other use cases and downstream applications. We run these applications on a hybrid data platform: Azure Databricks plus a Kubernetes-based edge data platform in our plants. The platform is currently in its ramp-up phase, so apart from building applications, you will also contribute to scaling the platform, including topics such as automation and observability. Finally, you are expected to interact with customers and other technical teams, e.g., for requirements clarification and the definition of data models.

Qualifications

Bachelor's degree in computer science, computer engineering, or a relevant technical field, or equivalent; Master's degree preferred.

Skills
- 6+ years of experience in professional software engineering, with a significant portion focused on building backend and/or data-intensive applications
- Proficiency in Scala or another JVM-based language (and the willingness to pick up Scala quickly)
- Deep understanding of distributed systems for data storage and processing (e.g., Kafka ecosystem, Spark, Flink, HDFS, S3); experience with Azure Databricks is a plus
- Prior experience with stream processing libraries such as Kafka Streams, fs2, zio-streams, or Akka/Pekko Streams is a plus
- Hands-on experience with Docker and Kubernetes for application deployment, scaling, and management
- Excellent software engineering skills (i.e., data structures & algorithms, software design) and robust knowledge of object-oriented and functional programming principles
- Experience with CI/CD tools such as Jenkins or GitHub Actions
- Experience with RDBMS (e.g., Postgres)
- Excellent problem-solving skills and a pragmatic approach to engineering
- Strong communication and collaboration skills, with the ability to articulate complex technical concepts to diverse audiences
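As a rough illustration of the streaming side of this work, here is a minimal Spark Structured Streaming sketch that reads IoT telemetry from Kafka, standardizes it, and writes a curated stream. It is in PySpark for brevity, while the role itself centers on Scala, and every broker, topic, path, and field name below is a placeholder.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("iot_ingest").getOrCreate()

# Assumed schema for device telemetry; real payloads would differ
schema = StructType([
    StructField("device_id", StringType()),
    StructField("metric", StringType()),
    StructField("value", DoubleType()),
])

# Read the raw device stream from Kafka (broker and topic are placeholders)
events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "plant.telemetry")
          .load()
          .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# Standardize/enrich, then publish the curated stream for downstream use
curated = events.withColumn("ingested_at", F.current_timestamp())

query = (curated.writeStream
         .format("delta")  # Databricks-style sink; swap for parquet elsewhere
         .option("checkpointLocation", "/chk/telemetry")
         .outputMode("append")
         .start("/curated/telemetry"))
query.awaitTermination()
```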

Posted 1 week ago

Apply

4.0 - 8.0 years

14 - 16 Lacs

gurugram, bengaluru

Work from Office

Role - Big Data Tester (Immediate Joiner)
Location - Gurugram/Bengaluru
Duration - Full Time

Roles and Responsibilities
- Design, develop, and execute comprehensive test plans for big data solutions using Hadoop Distributed File System (HDFS), Hive, and SQL.
- Collaborate with cross-functional teams to identify requirements and ensure seamless integration of ETL processes.
- Develop automated testing scripts using Python or other programming languages to improve efficiency and reduce manual errors.
- Analyze complex data sets to identify trends, patterns, and anomalies, providing insights that inform business decisions.
- Participate in Agile development methodologies such as Scrum to deliver high-quality products on time.

Desired Candidate Profile
- 4-8 years of experience in Big Data testing.
- Healthcare domain knowledge is an added advantage.
- Proficiency in writing and optimizing complex SQL queries (joins, subqueries, aggregations, etc.).
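A minimal sketch of the kind of automated Python test script these responsibilities mention: a source-to-target row-count reconciliation run with PySpark against Hive. The paths, table, and partition are hypothetical.

```python
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("etl_recon")
         .enableHiveSupport()
         .getOrCreate())

# Hypothetical check: the row count loaded into Hive must match the source extract
source_count = spark.read.parquet("hdfs:///landing/claims/2024-01-01/").count()
target_count = spark.sql(
    "SELECT COUNT(*) AS n FROM dw.claims WHERE ds = '2024-01-01'"
).first()["n"]

assert source_count == target_count, (
    f"Row count mismatch: source={source_count}, target={target_count}"
)
print(f"Reconciliation passed: {target_count} rows")
```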

Posted 1 week ago

Apply

4.0 - 8.0 years

8 - 17 Lacs

gurugram, bengaluru

Work from Office

**ONLY FOR IMMEDIATE JOINERS**

Position: Big Data Tester
Experience: 4+ Years
Location: Gurugram/Bangalore
Joining: Immediate Joiner

Job Summary: We are seeking an experienced Big Data Tester with a strong background in HDFS, Hive, ETL testing, and SQL. The ideal candidate will have experience in Big Data ecosystems, ETL/SQL testing, and the healthcare domain. You will be responsible for validating data pipelines, ensuring data quality, and verifying ETL processes in a big data environment.

Key Responsibilities:
- Design and execute comprehensive test plans and test cases for Big Data applications and ETL processes.
- Validate large-scale datasets stored in HDFS using HiveQL and other query tools.
- Perform data validation, data comparison, and quality assurance on datasets coming from multiple sources.
- Conduct thorough ETL testing, including data extraction, transformation, and loading into data lakes/warehouses.
- Write and execute complex SQL queries to validate data consistency and integrity.
- Collaborate with data engineers, business analysts, and other stakeholders to understand data requirements and business rules.
- Perform regression, integration, and system testing on big data solutions.
- Document defects clearly and follow through for timely resolution.
- Ensure compliance with data privacy and security regulations, especially those relevant to healthcare data.

Required Skills:
- Big Data testing: 4+ years of experience working with HDFS, Hive, and other Hadoop ecosystem tools.
- ETL testing: 3+ years of hands-on experience validating ETL pipelines.
- SQL: 3+ years of experience writing complex SQL queries for data verification and analysis.
- Healthcare domain: 2+ years of experience on healthcare projects, with familiarity with HIPAA compliance and healthcare data formats (e.g., HL7, EDI X12).
- Strong analytical and problem-solving skills.
- Excellent communication and documentation skills.

Preferred Qualifications:
- Experience with tools like Apache Spark, Sqoop, and Kafka is a plus.
- Familiarity with automation tools (e.g., Selenium, Python scripting for data testing).
- Knowledge of data visualization and reporting tools (Tableau, Power BI) is a bonus.
- Experience with cloud platforms (AWS, Azure, GCP) for Big Data is desirable.

Education: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
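To make the SQL-validation responsibility concrete, a hedged sketch of two common integrity checks, duplicate business keys and NOT NULL rules, executed as HiveQL through PySpark; the table, columns, and partition are invented for illustration.

```python
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("dq_checks")
         .enableHiveSupport()
         .getOrCreate())

# Duplicate-key check: a business key should appear exactly once per load
dupes = spark.sql("""
    SELECT claim_id, COUNT(*) AS cnt
    FROM dw.claims
    WHERE ds = '2024-01-01'
    GROUP BY claim_id
    HAVING COUNT(*) > 1
""")

# Mandatory-field check: NOT NULL business rules on key columns
null_rows = spark.sql("""
    SELECT COUNT(*) AS n
    FROM dw.claims
    WHERE ds = '2024-01-01'
      AND (member_id IS NULL OR service_date IS NULL)
""").first()["n"]

assert dupes.count() == 0, "Duplicate claim_id values found"
assert null_rows == 0, f"{null_rows} rows violate NOT NULL business rules"
print("Data quality checks passed")
```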

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

chennai, tamil nadu

On-site

Join us as a Senior Developer at Barclays, where you will play a crucial role in supporting the successful delivery of Location Strategy projects while adhering to plan, budget, quality, and governance standards. Your primary responsibility will be to drive the evolution of our digital landscape, fostering innovation and excellence. By leveraging cutting-edge technology, you will lead the transformation of our digital offerings, ensuring unparalleled customer experiences.

To excel in this role as a Senior Developer, you should possess the following experience and skills:
- Solid hands-on development experience with Scala, Spark, Python, and Java.
- Excellent working knowledge of Hadoop components such as HDFS, Hive, Impala, HBase, and DataFrames.
- Proficiency in Jenkins build pipelines or other CI/CD tools.
- Sound understanding of Data Warehousing principles and Data Modeling.

Additionally, highly valued skills may include:
- Experience with AWS services like S3, Athena, DynamoDB, Lambda, and Databricks.
- Working knowledge of Jenkins, Git, and Unix.

Your performance may be assessed on critical skills essential for success in this role, including risk and controls management, change and transformation capabilities, business acumen, strategic thinking, and proficiency in digital and technology aspects. This position is based in Pune.

**Purpose of the Role:** The purpose of this role is to design, develop, and enhance software solutions using various engineering methodologies to deliver business, platform, and technology capabilities for our customers and colleagues.

**Accountabilities:**
- Develop and deliver high-quality software solutions using industry-aligned programming languages, frameworks, and tools. Ensure that the code is scalable, maintainable, and optimized for performance.
- Collaborate cross-functionally with product managers, designers, and other engineers to define software requirements, devise solution strategies, and ensure seamless integration with business objectives.
- Engage in peer collaboration, participate in code reviews, and promote a culture of code quality and knowledge sharing.
- Stay updated on industry technology trends, contribute to the organization's technology communities, and foster a culture of technical excellence and growth.
- Adhere to secure coding practices to mitigate vulnerabilities, protect sensitive data, and deliver secure software solutions.
- Implement effective unit testing practices to ensure proper code design, readability, and reliability.

**Assistant Vice President Expectations:** As an Assistant Vice President, you are expected to:
- Provide consultation on complex issues, offering advice to People Leaders to resolve escalated matters.
- Identify and mitigate risks, and develop new policies/procedures to support the control and governance agenda.
- Take ownership of risk management and control strengthening related to the work undertaken.
- Engage in complex data analysis from various internal and external sources to creatively solve problems.
- Communicate complex information effectively to stakeholders.
- Influence or convince stakeholders to achieve desired outcomes.

All colleagues are expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence, and Stewardship, as well as embrace the Barclays Mindset to Empower, Challenge, and Drive as guiding principles for our behaviour.

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

maharashtra

On-site

As a Data Engineer II at Media.net, you will be responsible for designing, executing, and managing large and complex distributed data systems. Your role will involve monitoring performance, optimizing existing projects, and researching and integrating Big Data tools and frameworks as required to meet business and data requirements. You will play a key part in implementing scalable solutions, creating reusable components and data tools, and collaborating with teams across the company to integrate with the data platform efficiently.

The team you will join ensures that every web page view is seamlessly processed through high-scale services, handling a large volume of requests across 5 million unique topics. Leveraging cutting-edge Machine Learning and AI technologies on a large Hadoop cluster, you will work with a tech stack that includes Java, Elasticsearch/Solr, Kafka, Spark, Machine Learning, NLP, Deep Learning, Redis, and Big Data technologies such as Hadoop, HBase, and YARN.

To excel in this role, you should have 2 to 4 years of experience with big data technologies like Apache Hadoop and relational databases (MS SQL Server/Oracle/MySQL/Postgres). Proficiency in programming languages such as Java, Python, or Scala is required, along with expertise in SQL (T-SQL/PL-SQL/Spark SQL/HiveQL) and Apache Spark. Hands-on knowledge of DataFrames, Datasets, RDDs, and the Spark SQL/PySpark/Scala APIs, together with a deep understanding of performance optimizations, is essential. Additionally, you should have a good grasp of distributed storage (HDFS/S3), strong analytical and quantitative skills, and experience with data integration across multiple sources.

Experience with message queues like Apache Kafka, MPP systems such as Redshift/Snowflake, and NoSQL storage like MongoDB would be considered advantageous. If you are passionate about working with cutting-edge technologies, collaborating with global teams, and contributing to the growth of a leading ad tech company, we encourage you to apply for this challenging and rewarding opportunity.
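As a small illustration of the DataFrame/RDD fluency the posting asks for, here is the same aggregation expressed with both APIs; the data is a toy example invented for the sketch.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("api_comparison").getOrCreate()
sc = spark.sparkContext

pairs = [("sports", 3), ("news", 5), ("sports", 7)]

# RDD API: explicit key-value transformations, no query optimizer involved
rdd_totals = (sc.parallelize(pairs)
                .reduceByKey(lambda a, b: a + b)
                .collect())

# DataFrame API: declarative, so Catalyst can optimize the physical plan
df_totals = (spark.createDataFrame(pairs, ["topic", "views"])
                  .groupBy("topic")
                  .agg(F.sum("views").alias("views"))
                  .collect())

print(sorted(rdd_totals))  # [('news', 5), ('sports', 10)]
print(sorted(df_totals))   # [Row(topic='news', views=5), Row(topic='sports', views=10)]
```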

Posted 1 week ago

Apply

2.0 - 7.0 years

12 - 14 Lacs

hyderabad

Work from Office

Common skills - SQL, GCP BigQuery, ETL pipelines using Python/Airflow, experience with Spark/Hive/HDFS, and data modeling for data conversion. Resources: 4. Prior experience working on a conversion/migration HR project is an additional required skill, along with those above. Data Engineer with HR knowledge; all other requirements for the functional area are given by UBER.
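A minimal sketch of what "ETL pipelines using Python/Airflow" can look like: a two-task Airflow DAG with stubbed task bodies. The DAG id, schedule, and callables are assumptions for illustration, not details from the posting (the `schedule` argument assumes Airflow 2.4+).

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_hr_records():
    # Placeholder: pull the day's HR extract from the legacy source system
    print("extracting HR records...")


def load_to_bigquery():
    # Placeholder: load the transformed extract into a BigQuery staging table
    print("loading to BigQuery...")


with DAG(
    dag_id="hr_conversion_pipeline",  # hypothetical DAG for an HR migration
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_hr_records)
    load = PythonOperator(task_id="load", python_callable=load_to_bigquery)

    extract >> load  # run the load only after the extract succeeds
```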

Posted 1 week ago

Apply

5.0 - 9.0 years

7 - 11 Lacs

noida

Work from Office

Experience in automation with Java, Selenium, and Rest Assured is a must. Experience with HDFS and Linux commands. Experience in testing large data sets, including large extracts, Hive-to-Hive comparisons, etc. Good knowledge of different testing methodologies and concepts. Experience in debugging application log files. Familiarity with DevOps (CI/CD).

Mandatory Competencies:
- QA/QE - QA Automation - Automation of REST, Web/SOAP UI services
- QA/QE - QA Automation - Selenium
- Beh - Communication and collaboration
- Big Data - Big Data - HDFS

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

chennai, tamil nadu

On-site

The Applications Development Intermediate Programmer Analyst position at our organization involves working at an intermediate level to assist in the development and implementation of new or updated application systems and programs in collaboration with the Technology team. Your main responsibility will be to contribute to application systems analysis and programming activities.

You will be expected to utilize your knowledge of applications development procedures and concepts, as well as basic knowledge of other technical areas, to identify and define necessary system enhancements. This includes using script tools, analyzing code, and consulting with users, clients, and other technology groups to recommend programming solutions. Additionally, you will be involved in installing and supporting customer exposure systems and applying programming languages to design specifications.

As an Applications Development Intermediate Programmer Analyst, you will also be responsible for analyzing applications to identify vulnerabilities and security issues, conducting testing and debugging, and serving as an advisor or coach to new or lower-level analysts. You should be able to identify problems, analyze information, and make evaluative judgments to recommend and implement solutions with a limited level of direct supervision. Furthermore, you will play a key role in resolving issues by selecting solutions based on your technical experience and guided by precedents. You will have the opportunity to exercise independence of judgment and autonomy, act as a subject matter expert to senior stakeholders and team members, and appropriately assess risk when making business decisions.

To qualify for this role, you should have 4-8 years of relevant experience in the financial services industry, intermediate-level experience in an Applications Development role, clear and concise written and verbal communication skills, problem-solving and decision-making abilities, and the capacity to work under pressure and manage deadlines or unexpected changes in expectations or requirements. A Bachelor's degree or equivalent experience is required for this position.

In addition to the responsibilities outlined above, the ideal candidate should possess expertise in various technical areas, including strong Java programming skills, object-oriented programming, data structures, design patterns, Python web frameworks such as Flask and Django, Big Data technologies such as PySpark and Hadoop ecosystem components, and REST web services. Experience in Spark performance tuning, PL/SQL, SQL, Transact-SQL, data processing across different file types, UI frameworks, source code management tools like Git, Agile methodology, and issue trackers like Jira is highly desirable.

This job description offers a comprehensive overview of the role's responsibilities and qualifications. Please note that other job-related duties may be assigned as necessary. If you require a reasonable accommodation due to a disability to use our search tools or apply for a career opportunity, please review Accessibility at Citi. For additional information, you can view Citi's EEO Policy Statement and the Know Your Rights poster.

Posted 1 week ago

Apply

6.0 - 11.0 years

10 - 15 Lacs

pune

Work from Office

We're Hiring: Tech Lead – Big Data Technologies
Location: Pune | Work Mode: Work from Office (WFO)
Company: Leading MNC | Immediate Joiners Preferred

Are you a Big Data expert with 6-9 years of experience and a passion for leading high-impact projects? We're looking for a Tech Lead skilled in:
- Hadoop, Spark, Hive, Kafka, Sqoop, Oozie, Flume
- Java/Scala, SQL/NoSQL (PostgreSQL, MongoDB)
- System monitoring tools (Grafana, Ganglia), scripting & automation

The role involves end-to-end ownership of data pipeline architecture, code reviews, performance tuning, and leading a talented team.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

bengaluru

Work from Office

Hadoop Support Engineer
Bengaluru, KA - WFO

We are seeking a dedicated Hadoop Support Engineer to provide comprehensive support and on-call services for our Hadoop infrastructure. This role is pivotal in ensuring the continuous operation and stability of our Hadoop clusters, addressing incidents promptly, and supporting end users with technical queries. The ideal candidate will possess strong Hadoop administration skills, effective troubleshooting capabilities, and excellent communication skills to interact with various stakeholders.

Key Responsibilities:
- On-Call Support: Serve as a primary point of contact in a rotating on-call schedule, providing 24/7 support to swiftly address and resolve critical incidents affecting Hadoop operations.
- Incident Management and Resolution: Take ownership of incident management processes by diagnosing issues, implementing fixes, and documenting solutions in a ticketing system. Conduct post-incident reviews to identify root causes and prevent recurrence.
- Monitoring and Alerts: Establish and maintain robust monitoring and alerting systems using tools like Nagios, Grafana, or Prometheus to proactively detect and mitigate potential issues before they escalate.
- User and Developer Support: Assist end users and developers with technical queries related to Hadoop operations, providing guidance and support to optimize their use of the system. Educate users on best practices and system capabilities.
- System Maintenance: Conduct routine maintenance tasks including software patching, upgrades, and configuration changes to ensure system reliability and security. Schedule maintenance activities to minimize business disruption.

Qualifications:
- 3 years of experience in Hadoop administration and support, with a strong focus on operational stability and incident resolution.
- Proficiency in Hadoop ecosystem components such as HDFS, YARN, Hive, and Spark.
- Experience with Linux system administration and scripting (e.g., Bash, Python).
- Experience with configuration management tools such as Ansible.
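One concrete example of the proactive monitoring this role describes: a small Python health check that parses the output of `hdfs dfsadmin -report` and flags dead DataNodes. The parsing assumes the standard report format; wiring the result into Nagios or Prometheus alerting is left out of the sketch.

```python
import subprocess

# Run the standard HDFS admin report (requires HDFS client on PATH
# and sufficient privileges on the cluster).
report = subprocess.run(
    ["hdfs", "dfsadmin", "-report"],
    capture_output=True, text=True, check=True,
).stdout

dead_nodes = 0
for line in report.splitlines():
    # The report contains a line like: "Dead datanodes (2):"
    if line.startswith("Dead datanodes"):
        dead_nodes = int(line.split("(")[1].rstrip("):"))

if dead_nodes > 0:
    # In practice this would page via the alerting stack instead of printing
    print(f"ALERT: {dead_nodes} dead DataNode(s) reported")
else:
    print("HDFS report clean: no dead DataNodes")
```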

Posted 1 week ago

Apply

1.0 - 3.0 years

0 Lacs

india

On-site

About the Role: 08

The Team: As a member of the S&P Global Market Intelligence Technology team, you will work with a group of intelligent and ambitious engineers. Our software engineers are involved in the full product life cycle, from design through release. You will be expected to participate in the design review process, write high-quality code, and work with a dedicated team of QA Analysts, Business Analysts, and Infrastructure Teams.

The Impact: The person in this position will be responsible for creating technical designs, developing high-performing web application software, and ensuring quality through a robust testing process. Software Engineers at S&P Global work with the latest technologies and are rewarded for innovation and delivery. This position is part of the team which supports the search application for the company-wide product platform, which provides essential information to the global finance industry and beyond.

What's in it for you: We are currently seeking a Junior Software Developer who enjoys working with new and emerging search technologies. The person in this position will work with highly skilled search engineers, including Data Scientists and Machine Learning Engineers.

Responsibilities:
- Analyze, design, and develop solutions within a multi-functional Agile team to support key business needs for the platform search application.
- Design, implement, and test front-end solutions for the web platform application.
- Design, implement, and test web service/middleware applications leveraging the backend Solr clusters within the search application architecture.
- Engineer components and common services based on standard corporate development models, languages, and tools.
- Apply software engineering best practices while also leveraging automation across all elements of solution delivery.
- Collaborate effectively with technical and non-technical stakeholders. Must be able to document and demonstrate technical solutions by developing documentation, diagrams, code comments, etc.

What We're Looking For:

Basic Qualifications:
- 1-2 years of experience developing web services and/or web applications.
- Bachelor's degree in Computer Science, Information Systems, or Engineering.
- Experience in object-oriented design and design patterns.
- Experience in application development using Python, Java, C#, ASP.NET MVC, Web API, or .NET Windows services.
- Experience in UI software development, including HTML5, JavaScript, Node.js, and React.js.
- Proficiency with Agile software development lifecycle methodologies, including a good understanding of Test Driven Development (TDD) and/or Behavior Driven Development (BDD), and experience using testing frameworks (JUnit, Cucumber, etc.).

Preferred Qualifications:
- Application development experience in Java and/or Python is a plus.
- Experience working with search technologies: Solr, Elasticsearch, Lucene, etc.
- Experience working with HDFS, ZooKeeper, and message queues (preferably Kafka).
- Experience with large data transfer applications/middleware and comfort working with large data volumes.

What's In It For You

Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments, and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all: from finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people; that's why we provide everything you and your career need to thrive at S&P Global. Our benefits include:
- Health & Wellness: Health care coverage designed for the mind and body.
- Flexible Downtime: Generous time off helps keep you energized for your time on.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.

For more information on benefits by country visit:

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, pre-employment training, or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity.

-----------------------------------------------------------
Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision -
-----------------------------------------------------------

20 - Professional (EEO-2 Job Categories-United States of America), IFTECH203 - Entry Professional (EEO Job Group), SWP Priority - Ratings - (Strategic Workforce Planning)
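For a flavor of the Solr-backed middleware work in the responsibilities above, a minimal sketch that queries a Solr core over its standard HTTP select API; the host, core, and field names are placeholders invented for illustration.

```python
import requests

# Query a backend Solr cluster via its standard /select endpoint.
SOLR_URL = "http://solr-host:8983/solr/companies/select"

params = {
    "q": "name:acme*",      # Lucene query syntax
    "fl": "id,name,score",  # fields to return with each hit
    "rows": 10,
    "wt": "json",
}

resp = requests.get(SOLR_URL, params=params, timeout=5)
resp.raise_for_status()

# Solr's JSON response nests hits under response.docs
for doc in resp.json()["response"]["docs"]:
    print(doc["id"], doc.get("name"), doc["score"])
```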

Posted 1 week ago

Apply

6.0 - 11.0 years

2 - 4 Lacs

noida, gurugram, delhi / ncr

Work from Office

Role & Responsibilities: Machine Learning

A machine learning expert should not only delve into concepts such as data analytics, big data, artificial intelligence, machine learning, and deep learning, but also help other team members by challenging their thought process and bringing in innovative thinking, ensuring execution through collaborative working and problem solving. This role requires identifying areas for value creation that will benefit from new technology, technique, or algorithm deployment. The candidate is expected to possess strong technical and business competency, with knowledge of current industry Machine Learning standards, structures, and best practices, and to be capable of identifying current industry challenges in the Telecom Network space which can be solved using data science.

Machine Learning Specialist Qualifications and Experience

Machine Learning:
1. Python (advanced level, with complex algorithms)
2. R (advanced level, with complex algorithms)

Visualization:
1. Monitoring with Prometheus and Grafana
2. Tableau and QlikView

Containers and Storage:
1. Kubernetes
2. Docker

Working knowledge of Data Management Platforms:
1. NiFi, Kafka, HDFS, Hive
2. PostgreSQL
3. NetApp storage
4. IBM Spectrum Scale storage
5. FreeIPA role-based access control (RBAC)

Functional Domain Knowledge:
1. Approximately 10-15 years of industry experience, preferably in Telecom, with the major part in implementing artificial-intelligence-based technologies to enhance operations and processes.
2. Sound knowledge of Cisco hardware, RF, RAN, Transport, Core, and Applications.
3. OR previous working experience as a Machine Learning specialist/Data Scientist for more than 10 years.
4. Master's or higher degree in Data Science, Computer Science, Mathematics, or Statistics from a nationally or internationally recognized institution.
5. Outstanding problem-solving and analytical skills; attention to detail.
6. Excellent project and time management skills.
7. Exposure to consulting in the field of data science, or a record of publications, is a plus.
8. Fluent communication skills in English are a requirement.

Roles and Responsibilities:
1. The ML specialist/consultant will develop strategy-related outlooks and roadmaps with tangible milestones and deliverables.
2. Guide and mentor multidisciplinary technology teams, and support the development of training plans for employees within the field of Artificial Intelligence and Machine Learning.
3. Propose new strategies and develop plans to expand the field of application of machine learning and artificial intelligence within the Telecom Network space.
4. Stay up to date on new trends and techniques for data science applications in wireless or wireline networks to ensure a resilient plan for value realization.
5. Establish and manage collaborations within NSG, CMI, and other units, as well as with external partners, to accelerate the achievement of goals and maximize the impact of initiatives.
6. Design and develop machine learning algorithms in the Telecom Network space, wireless or wireline.
7. Discover, design, and develop analytical methods to support novel approaches to data and information processing in Telecom Networks.
8. Provide technical support for program management and business development activities, including proposal writing and customer development.
9. Create and monitor research programs for newly proposed technologies, aligned with new business trends.
10. Present technical and techno-commercial acumen to support responses to clients/bids/RFPs.

Posted 1 week ago

Apply

2.0 - 5.0 years

5 - 15 Lacs

bengaluru

Work from Office

At Tredence, an AWS Data Engineer designs, builds, and maintains scalable data pipelines and infrastructure using AWS services like Glue, S3, and Redshift, focusing on data ingestion, transformation (ETL), modeling, and quality. They write SQL and Python code, orchestrate workflows with tools like Airflow, and collaborate with data scientists and analysts, aiming to ensure data is accurate, accessible, and optimized for analysis and business decision-making.

Role - AWS Data Engineer
Years of Experience: 2-5 years

Roles and Responsibilities
The day-to-day work of a junior-to-mid AWS data engineer involves a mix of development, monitoring, and collaboration:
- Pipeline development: designing, building, and maintaining robust and scalable ETL/ELT (Extract, Transform, Load) pipelines on AWS.
- Managing and optimizing cloud-based data warehouses like Amazon Redshift to support business intelligence and analytics.
- Implementing validation and cleansing techniques to ensure data accuracy and reliability throughout the pipeline.
- Managing data storage (e.g., S3), transformation services (e.g., Glue), and orchestration tools (e.g., Airflow/MWAA).
- Working closely with data analysts, data scientists, and business stakeholders to understand data requirements and deliver reliable data solutions.
- Monitoring and tuning data workflows and database queries to ensure fast and efficient processing.
- Creating clear and comprehensive documentation of data processes, infrastructure, and technical specifications.

Required Skills - AWS, Redshift, Lambda, Glue, S3, SQL, Python, PySpark, Step Functions.
Looking for immediate joiners.
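A hedged sketch of one common step in such a pipeline: loading Parquet data that Glue wrote to S3 into Redshift with a COPY statement, submitted through the boto3 Redshift Data API. The cluster, bucket, IAM role ARN, and table names are all placeholders.

```python
import boto3

# Submit a COPY from S3 into Redshift without managing a JDBC connection;
# all identifiers below are illustrative placeholders.
client = boto3.client("redshift-data", region_name="us-east-1")

copy_sql = """
    COPY analytics.fct_orders
    FROM 's3://example-curated-bucket/orders/ds=2024-01-01/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
    FORMAT AS PARQUET;
"""

resp = client.execute_statement(
    ClusterIdentifier="analytics-cluster",
    Database="prod",
    DbUser="etl_user",
    Sql=copy_sql,
)
print("Statement submitted:", resp["Id"])
```

In an Airflow/MWAA deployment this call would typically live inside a task, with a follow-up task polling `describe_statement` for completion.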

Posted 1 week ago

Apply

3.0 - 8.0 years

3 - 8 Lacs

navi mumbai, maharashtra, india

On-site

Overview: We are seeking a highly motivated and detail-oriented individual to join our team as a Big Data Developer. This role requires a dynamic professional who can adapt to evolving business needs and drive value through their expertise. Key Responsibilities: Provide support and expertise in the domain of Big Data Developer. Collaborate with cross-functional teams to achieve business goals. Ensure timely delivery of services and maintain high-quality standards. Required Qualifications: Proven experience in a relevant field or position. Strong understanding of the responsibilities and tools associated with the role. Excellent problem-solving and communication skills. Preferred Qualifications: Certifications or training relevant to Big Data Developer. Experience working in a fast-paced environment or large organizations.

Posted 1 week ago

Apply

1.0 - 5.0 years

0 Lacs

karnataka

On-site

Enphase Energy is a global energy technology company and a leading provider of solar, battery, and electric vehicle charging products. Founded in 2006, we transformed the solar industry with our revolutionary microinverter technology, turning sunlight into a safe, reliable, resilient, and scalable source of energy to power lives. Our Enphase Energy System assists individuals in making, using, saving, and selling their own power. We are proud to be one of the fastest-growing and most innovative clean energy companies worldwide, with approximately 68 million products installed in over 145 countries.

We are currently seeking an Engineer to join our Storage business unit's Data Analytics and Engineering division. We are looking for candidates with a strong technical background in data analytics and related domains, through either experience or education. This entry-level position offers the opportunity to gain valuable knowledge in Data Analysis, Engineering, and Data Science, paving the way for growth into a Data Scientist role and contributing to enhancing the experience of Enphase homeowners, installers, and utilities.

As an Engineer at Enphase, you will play a crucial role in:
- Assisting in the design and development of data infrastructure, including ETL pipelines, data storage, and data processing systems.
- Developing and maintaining data models, data dictionaries, and other documentation related to data infrastructure.
- Collaborating with cross-functional teams to understand business requirements and provide data-driven solutions.
- Performing data quality checks and implementing monitoring to ensure data accuracy and completeness.
- Actively engaging in technology and engineering problem-solving through data wrangling, problem-solving, and experimental design skills.
- Taking ownership of significant projects, promoting consensus, closure, and clarity among teams while following agile methods.
- Working closely with engineers across the organization, understanding their needs, and proposing solutions.
- Setting new standards for sustainable engineering by enhancing best practices and creating world-class documentation, code, testing, and monitoring.

Qualifications we are looking for:
- B.E/B.Tech in Computer Science or Electrical Engineering from a top-tier college with >70% marks.
- Proficiency in multiple data store systems, including relational and NoSQL databases, messaging queues, and orchestration frameworks (e.g., Airflow, Oozie).
- Polyglot programming skills in at least 2 high-level languages (Java, Ruby, Python, JS, Go, Elixir).
- Hands-on experience with fault-tolerant data engineering systems such as Hadoop, HDFS, Cassandra, MongoDB, Spark, etc.
- Experience with at least 1 cloud platform (AWS, Microsoft Azure, GCP), preferably AWS.
- Previous involvement in product teams, collaborating with engineers and product managers to define and execute data engineering and analytical solutions.
- Familiarity with handling large-scale, noisy, and unstructured data, with a preference for experience with time series data.
- Passion for driving continual improvement initiatives in engineering best practices like coding, testing, and monitoring.
- Excellent written and verbal communication skills, including the ability to create detailed technical documents.
- Strong team player with the ability to work effectively both in a team setting and individually.
- Capability to communicate and collaborate with external teams and stakeholders, with a growth mindset to learn and contribute across different modules or services.
- Ability to thrive in a fast-paced environment; experience with IoT-based systems is advantageous.

Join us at Enphase Energy and be part of our mission to advance a more sustainable future!

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

karnataka

On-site

As a Data Engineer, you will be responsible for building highly scalable, fault-tolerant distributed data processing systems that handle hundreds of terabytes of data daily and manage petabyte-sized data warehouses and Elasticsearch clusters. Your role will involve developing quality data solutions and simplifying existing datasets into self-service models. Additionally, you will create data pipelines that enhance data quality and are resilient to unreliable data sources. You will take ownership of data mapping, business logic, transformations, and data quality, and engage in low-level systems debugging and performance optimization on large production clusters. Your responsibilities will also include participating in architecture discussions, contributing to the product roadmap, and leading new projects. You will be involved in maintaining and supporting existing platforms, transitioning to newer technology stacks, and ensuring the evolution of the systems.

To excel in this role, you must demonstrate proficiency in Python and PySpark, along with a deep understanding of Apache Spark, Spark tuning, RDD creation, and DataFrame building. Experience with big data technologies such as HDFS, YARN, MapReduce, Hive, Kafka, Spark, Airflow, and Presto is essential. Moreover, you should have expertise in building distributed environments using tools like Kafka, Spark, Hive, and Hadoop. A solid grasp of the architecture and functioning of distributed database systems, as well as experience working with file formats like Parquet and Avro for handling large data volumes, will be beneficial. Familiarity with one or more NoSQL databases and cloud platforms like AWS and GCP is preferred.

The ideal candidate for this role will have at least 5 years of professional experience as a data or software engineer. This position offers a challenging opportunity to work on cutting-edge technologies and contribute to the development of robust data processing systems. If you are passionate about data engineering and possess the required skills and experience, we encourage you to apply and join our dynamic team.
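To ground the Spark-tuning requirement, a small PySpark sketch showing two routine optimizations: broadcasting a small dimension table to avoid a shuffle-heavy join, and repartitioning on the downstream grouping key. Paths, columns, and the partition count are illustrative assumptions.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("tuning_sketch").getOrCreate()

events = spark.read.parquet("hdfs:///warehouse/events/")  # large fact table
users = spark.read.parquet("hdfs:///warehouse/users/")    # small dimension

# Broadcast the small side so the join avoids a full shuffle (sort-merge join)
joined = events.join(broadcast(users), "user_id")

# Repartition by the downstream grouping key to avoid skewed shuffles,
# and cache only because the result is reused by two actions below
result = joined.repartition(200, "country").cache()

result.groupBy("country").count().show()
result.write.mode("overwrite").parquet("hdfs:///warehouse/events_by_country/")
```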

Posted 1 week ago

Apply

5.0 - 8.0 years

12 - 22 Lacs

hyderabad

Hybrid

Key Skills: Hadoop, Spark, Python, SQL, Scala, HDFS, Hive, Kafka, HBase

Roles & Responsibilities:
- Design, develop, and maintain large-scale data processing systems using Hadoop and Spark.
- Write optimized Spark jobs using Scala to ensure efficient data processing.
- Implement and manage data pipelines and workflows using HDFS, Hive, Kafka, and HBase.
- Collaborate with cross-functional teams to understand data requirements and deliver solutions.
- Ensure high performance and responsiveness of applications by optimizing data processing tasks.
- Stay updated with the latest trends and technologies in big data and distributed computing.

Experience Requirement:
- 5-8 years of experience working with Hadoop and Spark for data engineering solutions.
- Strong expertise in developing Spark jobs using Scala and optimizing performance in distributed environments.
- Experience in building and maintaining data pipelines involving HDFS, Hive, Kafka, and HBase.
- Proficient in implementing best practices for big data application development and system scalability.
- Ability to work collaboratively with data engineers, analysts, and product teams to meet business goals.

Education: Any Graduation.

Posted 1 week ago

Apply

9.0 - 14.0 years

20 - 35 Lacs

hyderabad

Hybrid

Your key responsibilities

Work as a project manager to lead the design and evolution of our large-scale data lakehouse architecture, ensuring it is scalable, reliable, and cost-effective. Provide technical leadership and mentorship to data engineers and analysts. Collaborate closely with software engineers and business stakeholders to understand requirements and deliver effective data solutions. Bring experience in data modelling, data mapping, data profiling, and metadata management. Architect and tune high-performance data processing pipelines, and identify and resolve complex performance issues in distributed computing environments involving Spark execution, Hive query optimization, and Iceberg metadata management.

Expert experience with Hadoop (HDFS, YARN), Hive (including LLAP, Tez), Spark (Structured Streaming, Spark SQL), and Apache Iceberg is expected, including:
- Expert-level proficiency in building and optimizing large-scale data processing pipelines using Spark (PySpark/Scala), with a deep understanding of Spark internals, execution plans, and tuning.
- Extensive experience in writing, optimizing, and managing HiveQL scripts, with deep knowledge of Hive architecture, file formats (ORC, Parquet), and performance tuning.
- Strong hands-on experience with the core Hadoop ecosystem (HDFS, YARN, MapReduce) and an understanding of cluster management fundamentals.
- Hands-on experience designing and implementing data lakes using Apache Iceberg as the table format; you must understand features like schema evolution, hidden partitioning, time travel, and the performance benefits over Hive tables (see the sketch below).
- Experience in either Python (PySpark) or Scala.
- Mastery of SQL and experience optimizing complex queries on massive datasets.
- Experience with at least one major cloud platform: AWS (EMR, S3, Glue), Azure (Databricks, ADLS, Synapse), or GCP (Dataproc, BigQuery, GCS).

Additional responsibilities:
- Interface and communicate with the onsite teams directly to understand requirements and determine the optimal solutions.
- Create technical solutions for business needs by translating requirements and finding innovative solution options.
- Lead and mentor a team throughout the design, development, and delivery phases, and keep the team intact in high-pressure situations.
- Get involved in business development activities such as creating proofs of concept (POCs) and points of view (POVs), assist in proposal writing and service offering development, and develop creative PowerPoint content for presentations.
- Create and maintain detailed architecture diagrams, data flow maps, and other technical documentation.
- Participate in organization-level initiatives and operational activities.

Skills and attributes for success
- Use an issue-based approach to deliver growth, market, and portfolio strategy engagements for corporates.
- Strong communication, presentation, and team-building skills, and experience producing high-quality reports, papers, and presentations.
- Experience executing and managing research and analysis of companies and markets, preferably from a commercial due diligence standpoint.

Ideally, you'll also have
- 8-10 years of experience in the Banking and Capital Markets sector (preferred).
- Cloud architect certifications.
- Experience using Agile methodologies.
- Experience with real-time stream processing technologies (Kafka, Flink, Spark Streaming).
- Experience with containerization and orchestration tools (Docker, Kubernetes).
- Experience with DevOps/DataOps principles and CI/CD pipelines for data projects.
Please apply on the below link for the interview process: https://careers.ey.com/job-invite/1638141/
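A brief PySpark sketch of the three Iceberg features the listing highlights (hidden partitioning, schema evolution, time travel), assuming Spark 3.3+ with the Iceberg runtime and a catalog named `lake` already configured; the table, columns, and snapshot id are invented for illustration.

```python
from pyspark.sql import SparkSession

# Assumes spark.sql.catalog.lake is configured as an Iceberg SparkCatalog.
spark = SparkSession.builder.appName("iceberg_sketch").getOrCreate()

# Hidden partitioning: partition on a transform of a column, not a
# separate physical partition column as Hive tables require
spark.sql("""
    CREATE TABLE IF NOT EXISTS lake.db.trades (
        trade_id BIGINT, symbol STRING, ts TIMESTAMP, qty DOUBLE
    ) USING iceberg
    PARTITIONED BY (days(ts))
""")

# Schema evolution: adding a column is a metadata-only operation
spark.sql("ALTER TABLE lake.db.trades ADD COLUMN venue STRING")

# Time travel: query the table as of an earlier snapshot or timestamp
spark.sql("SELECT * FROM lake.db.trades VERSION AS OF 4348297080801365301").show()
spark.sql(
    "SELECT * FROM lake.db.trades TIMESTAMP AS OF '2024-01-01 00:00:00'"
).show()
```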

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

karnataka

On-site

The role requires strategic thinking and planning, providing expertise throughout the entire product development life cycle with a strong sense of quality ownership to ensure that quality is integrated from the initial stages. You will be responsible for ensuring that all development tasks meet quality criteria through proper planning, design, quality assurance, and issue tracking. It is crucial to be accountable for mitigating business risks by executing and validating infrastructural changes in a timely and precise manner.

Collaborating closely with development and infrastructure teams is essential to create enhancement and application fix plans. You will work under minimal supervision to carry out procedures and implement agreed changes to ensure that the application meets project engagement and requirements while complying with standards. You will also oversee the delivery and maintenance of automated scenarios and frameworks based on industry best practices.

Requirements:
- Hands-on and consulting skills in the Base SAS development framework.
- Sound statistical knowledge and strong analytical and problem-solving skills.
- Hands-on experience with Big Data technologies such as Apache Hadoop, HDFS, and Hive.
- Experience with monitoring tools.
- Development capabilities using the Python, Spark, SAS, and R languages.
- Strong management and analytical skills.
- Proficient written and oral communication skills.
- Understanding of and experience in projects, including SDLC and Agile methodology.
- Hands-on experience in the Big Data space, particularly the Hadoop stack.

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

karnataka

On-site

As a Senior Big Data Developer with 4 to 8 years of experience, you will be based in Bangalore and work in a hybrid mode. Your primary responsibilities will involve working with Big Data technologies and associated tools such as Hadoop, Unix, HDFS, Hive, Impala, etc. You should be proficient in Spark/Scala and have experience in data import/export using Sqoop or similar tools, along with experience using Airflow, Jenkins, or similar automation tools.

Excellent knowledge of SQL Server and database structures is essential, together with the ability to write and optimize T-SQL queries and stored procedures. Experience working with Jira, Confluence, GitLab, and other similar tools will be beneficial.

To qualify for this role, you should hold a BE/BTech/MTech/MCA degree and possess strong organizational skills, including the ability to manage multiple activities with changing priorities simultaneously. If you are looking for a challenging opportunity to showcase your expertise in Big Data and related technologies, this role is perfect for you.

Posted 2 weeks ago

Apply

4.0 - 6.0 years

22 - 27 Lacs

bengaluru

Hybrid

Role: Scala Data Engineer
Location: Bangalore (Hybrid, 3 days office/week)
Experience: 4-6 years
Employment Type: Full-Time

Key Responsibilities:
- Develop, maintain, and optimize data pipelines using Scala & Spark
- Work with HDFS, Hive, and YARN to manage large datasets efficiently
- Build and manage workflows with Oozie
- Apply unit testing practices to ensure code quality and reliability
- Collaborate with the Data & AI team to deliver scalable solutions

Skills Required:
- Strong programming skills in Scala & Spark
- Hands-on with HDFS, Hive, Oozie, YARN
- Solid experience in unit testing
- Good understanding of data engineering best practices

Candidate Details (to be shared while applying): Total Experience, Relevant Experience, Current CTC, Expected CTC, Notice Period (immediate to 2 weeks; only apply if you can join early), Current Location, Preferred Location, LinkedIn Profile URL

Apply Now: Send your profile to Vijay.S@xebia.com

Posted 2 weeks ago

Apply

4.0 - 7.0 years

10 - 20 Lacs

hyderabad

Hybrid

- Understanding of and familiarity with all Hadoop ecosystem components and Hadoop administrative fundamentals
- Experience working with NoSQL in at least one of these data stores: HBase, Cassandra, MongoDB
- Experience in HDFS, Hive, Impala
- Strong technical skills in Neo4j (must-have)
- Experience in schedulers like Airflow, NiFi, etc.
- Experienced in Hadoop clustering and auto-scaling
- Develop standardized practices for delivering new products and capabilities using Big Data technologies, including data acquisition, transformation, and analysis
- Define and develop client-specific best practices around data management within a Hadoop environment on Azure cloud
- Monitor health, tuning, and growth of on-premises and cloud databases
- Experience in managing security and maintenance of Snowflake databases
- Manage and monitor user access to the Snowflake database
- Hands-on exposure to ETL using Snowflake native services
- Hands-on exposure to SQL & scripting
- Good understanding of Snowflake internals and the integration of Snowflake with other data processing and reporting technologies
- Design and develop ETL pipelines in and out of the data warehouse using Snowflake's SnowSQL
- Deep understanding of relational as well as NoSQL data store methods and approaches (star and snowflake dimensional modelling)
- Experience in designing and deploying virtual warehouses, schemas, views, and zero-copy clones in the Snowflake cloud data warehouse (a zero-copy clone sketch follows below)
- Good to have: experience working with AWS services like Lambda, S3, DynamoDB, Glue, etc.
- Good to have: experience working with Azure services like Azure Functions, Blob, ADF, DevOps, etc.
- Good to have: experience working with PL/SQL & ETL tools like Informatica or DataStage
- Strong communication skills
- Work experience with banking clients is an added advantage
- Hands-on experience with Matillion, Python, DBT preferred

Please apply on the below link for the interview process: https://careers.ey.com/job-invite/1632603/
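To illustrate the zero-copy clone item above, a minimal sketch using the snowflake-connector-python driver; the account, credentials, and table names are placeholders. A clone shares the source table's micro-partitions, so it completes almost instantly and consumes no extra storage until the clone diverges from the source.

```python
import snowflake.connector

# Connection parameters are placeholders; real credentials would come
# from a secrets manager, not source code.
conn = snowflake.connector.connect(
    account="xy12345",
    user="etl_user",
    password="...",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)

cur = conn.cursor()

# Clone production data into a dev schema for testing without copying bytes
cur.execute(
    "CREATE OR REPLACE TABLE DEV.PUBLIC.ORDERS_CLONE "
    "CLONE ANALYTICS.PUBLIC.ORDERS"
)

# Time travel composes with cloning, e.g. cloning the state 24 hours ago
cur.execute(
    "CREATE OR REPLACE TABLE DEV.PUBLIC.ORDERS_YDAY "
    "CLONE ANALYTICS.PUBLIC.ORDERS AT(OFFSET => -86400)"
)

conn.close()
```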

Posted 2 weeks ago

Apply

4.0 - 9.0 years

6 - 11 Lacs

bengaluru

Work from Office

":" Experience: Minimum 4+ years (relevant experience mandatory) Key Skills Required Strong hands-on experience with Scala (mandatory) and Apache Spark Experience with Hadoop ecosystem HDFS, Hive, Impala, Sqoop Data ingestion and pipeline development for large-scale systems Proficiency in Java and distributed data processing Knowledge of data warehousing and query optimization Job Description We are seeking a skilled Data Engineer (Spark & Scala) with hands-on expertise in big data technologies and large-scale data processing. The role involves building and optimizing data ingestion pipelines , working with the Hadoop ecosystem , and ensuring high-performance data workflows. Responsibilities Design, develop, and optimize data ingestion pipelines using Spark and Scala. Work with Hadoop ecosystem tools (HDFS, Hive, Impala, Sqoop) for large-scale data processing. Collaborate with cross-functional teams to integrate structured and unstructured data sources. Implement data transformation, validation, and quality checks. Optimize data workflows for scalability, performance, and fault tolerance. Write clean, efficient, and maintainable code in Scala and Java. Ensure compliance with best practices for data governance and security. Desired Candidate Profile Minimum 4+ years of experience in data engineering. Strong expertise in Scala (mandatory) and Apache Spark . Hands-on experience with Hadoop ecosystem tools (HDFS, Hive, Impala, Sqoop). Proficiency in Java for distributed system development. Strong problem-solving and analytical skills. Ability to work in fast-paced, collaborative environments. " , "Job_Opening_ID":"ZR_3382_JOB" , "Job_Type":"Contract" , "Job_Opening_Name":"Data Engineer (Spark & Scala)" , "State":"Karnataka" , "Currency":"INR" , "Country":"India" , "Zip_Code":"560001" , "id":"40099000030883728" , "Publish":true , "Keep_on_Career_Site":false , "Date_Opened":"2025-08-29"}]);

Posted 2 weeks ago

Apply

2.0 - 4.0 years

5 - 6 Lacs

bengaluru

Work from Office

2+ years' experience as a Data Engineer with strong skills in Big Data (HDFS/S3, Spark/Flink, Hive, HBase, Kafka/Kinesis), SQL & NoSQL (Elasticsearch, Cassandra, MongoDB), Airflow/Luigi, AWS/GCP, Java/Scala, stream processing, and data modeling.

Posted 2 weeks ago

Apply