
4958 Hadoop Jobs - Page 46

JobPe aggregates listings for easy access; you apply directly on the original job portal.

0.0 - 7.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

Indeed logo

Noida, Uttar Pradesh, India;Gurgaon, Haryana, India;Bangalore, Karnataka, India;Indore, Madhya Pradesh, India;Pune, Maharashtra, India Qualification : Job Title: Senior Big Data Cloud QA Job Description: We are seeking an experienced Senior Big Data Cloud Quality Assurance Engineer to join our dynamic team. In this role, you will be responsible for ensuring the quality and performance of our big data applications and services deployed in cloud environments. You will work closely with developers, product managers, and other stakeholders to define testing strategies, develop test plans, and execute comprehensive testing processes. Key Responsibilities: Design and implement test plans and test cases for big data applications in cloud environments. Perform functional, performance, and scalability testing on large datasets. Identify, record, and track defects using bug tracking tools. Collaborate with development teams to understand product requirements and provide feedback on potential quality issues early in the development cycle. Develop and maintain automated test scripts and frameworks for continuous integration and deployment. Analyze test results and provide detailed reports on the quality of releases. Mentor junior QA team members and share best practices in testing methodologies and tools. Stay updated on industry trends and advancements in big data and cloud technologies to continuously improve QA processes. Qualifications: Bachelor’s degree in Computer Science, Information Technology, or a related field. Minimum of 5 years of experience in software testing, with at least 2 years focused on big data applications and cloud technologies. Proficiency in testing frameworks and tools, such as JUnit, TestNG, Apache JMeter, or similar. Experience with big data technologies, such as Hadoop, Spark, or distributed databases. Strong understanding of cloud platforms, such as AWS, Azure, or Google Cloud. Familiarity with programming languages such as Java, Python, or Scala. Excellent analytical and problem-solving skills, with a keen attention to detail. Strong communication skills, both verbal and written, along with the ability to work collaboratively in a team environment. If you are a motivated and detail-oriented professional looking to advance your career in big data quality assurance, we encourage you to apply for this exciting opportunity. Skills Required : ETL Testing, Bigdata, Database Testing, API Testing, Selenium, SQL, Linux, Cloud Testing Role : Job Title: Senior Big Data Cloud QA Roles and Responsibilities: 1. Design and implement comprehensive test plans and test cases for big data applications deployed in cloud environments. 2. Collaborate with data engineers and developers to understand system architecture and data flow for effective testing. 3. Perform manual and automated testing for big data processing frameworks and tools, ensuring data quality and integrity. 4. Lead and mentor junior QA team members, providing guidance on best practices for testing big data solutions. 5. Identify and document defects, track their resolution, and verify fixes in a timely manner. 6. Develop and maintain automated test scripts using appropriate testing frameworks compatible with cloud big data platforms. 7. Execute performance testing to assess the scalability and reliability of big data applications in cloud environments. 8. Participate in design and code reviews, providing insights on testability and quality. 9. 
Work with stakeholders to define acceptance criteria and ensure that deliverables meet business requirements. 10. Stay updated on industry trends and advancements in big data technologies and cloud services to continually improve testing processes. 11. Ensure compliance with security and data governance policies during testing activities. 12. Provide detailed reports and metrics on testing progress, coverage, and outcomes to project stakeholders. Experience : 5 to 7 years Job Reference Number : 12944
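For context, the automated checks this posting describes (ETL testing, data quality, CI integration) are usually expressed as scripts. Below is a minimal illustrative sketch, not taken from the posting, of a PySpark data-quality check such a QA role might automate; the S3 paths and the id/event_ts column names are hypothetical.

```python
# Illustrative sketch only (not from the job posting): a minimal PySpark
# data-quality check of the kind a Big Data QA engineer might automate.
# Paths and column names ("id", "event_ts") are hypothetical.
from pyspark.sql import SparkSession, functions as F

def run_basic_etl_checks(source_path: str, target_path: str) -> dict:
    """Reconcile a source extract with its loaded target and report simple metrics."""
    spark = SparkSession.builder.appName("etl-quality-checks").getOrCreate()
    source_df = spark.read.parquet(source_path)   # raw extract landed by the pipeline
    target_df = spark.read.parquet(target_path)   # transformed/loaded output
    return {
        # Row-count reconciliation between source and target
        "row_count_match": source_df.count() == target_df.count(),
        # Primary-key duplicates in the target
        "duplicate_ids": target_df.groupBy("id").count().filter(F.col("count") > 1).count(),
        # Nulls in a mandatory column
        "null_event_ts": target_df.filter(F.col("event_ts").isNull()).count(),
    }

if __name__ == "__main__":
    checks = run_basic_etl_checks("s3://example-bucket/raw/orders/",
                                  "s3://example-bucket/curated/orders/")
    assert checks["row_count_match"], "Row counts diverge between source and target"
    assert checks["duplicate_ids"] == 0, "Duplicate primary keys found in target"
    assert checks["null_event_ts"] == 0, "Mandatory column event_ts contains nulls"
```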

Posted 1 week ago

Apply

0.0 - 15.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

Indeed logo

Noida, Uttar Pradesh, India;Hyderabad, Telangana, India;Indore, Madhya Pradesh, India;Bangalore, Karnataka, India;Pune, Maharashtra, India Qualification : Job Description for Technical Architect. Position Summary: We are looking for candidates with hands-on experience in Big Data and Cloud technologies. Must-have technical skills: 10+ years of experience Expertise in designing and developing applications using Big Data and Cloud technologies – Must Have Expertise and hands-on experience with Spark and Hadoop ecosystem components – Must Have Expertise and hands-on experience with any of the Cloud platforms (AWS/Azure/GCP) – Must Have Good knowledge of Shell scripting & Java/Python – Must Have Good knowledge of migration projects on Hadoop – Good to Have Good knowledge of one of the workflow engines such as Oozie or Autosys – Good to Have Good knowledge of Agile Development – Good to Have Passionate about exploring new technologies – Must Have Automation approach – Good to Have Good communication skills – Must Have Data Ingestion, Processing and Orchestration knowledge Skills Required : Solution Architecting, Solution Design, orchestration, migration Role : Responsibilities Define Data Warehouse modernization approach and strategy for the customer Align the customer on the overall approach and solution Design systems for meeting performance SLAs Resolve technical queries and issues for the team Work with the team to establish an end-to-end migration approach for one use case so that the team can replicate the same for other iterations Experience : 10 to 15 years Job Reference Number : 12968

Posted 1 week ago

Apply

0.0 - 8.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

Indeed logo

Noida, Uttar Pradesh, India;Gurgaon, Haryana, India;Hyderabad, Telangana, India;Bangalore, Karnataka, India;Indore, Madhya Pradesh, India Qualification : 6-8 years of good hands-on exposure to Big Data technologies – pySpark (DataFrame and Spark SQL), Hadoop, and Hive Good hands-on experience with Python and Bash scripts Good understanding of SQL and data warehouse concepts Strong analytical, problem-solving, data analysis and research skills Demonstrable ability to think outside of the box and not be dependent on readily available tools Excellent communication, presentation and interpersonal skills are a must Hands-on experience with Cloud Platform-provided Big Data technologies (i.e. IAM, Glue, EMR, Redshift, S3, Kinesis) Orchestration with Airflow and any job scheduler experience Experience in migrating workloads from on-premise to cloud and cloud-to-cloud migrations Good to have: Skills Required : Python, pyspark, SQL Role : Develop efficient ETL pipelines as per business requirements, following the development standards and best practices. Perform integration testing of the different pipelines created in the AWS environment. Provide estimates for development, testing & deployments on different environments. Participate in code peer reviews to ensure our applications comply with best practices. Create cost-effective AWS pipelines with the required AWS services, i.e. S3, IAM, Glue, EMR, Redshift etc. Experience : 6 to 8 years Job Reference Number : 13024
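For context, a minimal sketch of the kind of pySpark ETL job this role describes (DataFrame plus Spark SQL, reading from and writing to S3) follows; the bucket names and columns are hypothetical and not taken from the listing.

```python
# Illustrative sketch only: a small pySpark ETL job of the kind described above.
# Bucket names and columns ("amount", "status", "order_date") are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-daily-etl").getOrCreate()

# Extract: raw CSV landed in S3 by an upstream process
raw = spark.read.option("header", True).csv("s3://example-raw-zone/orders/2025-06-01/")

# Transform: cast types, filter, and aggregate via Spark SQL
raw.withColumn("amount", F.col("amount").cast("double")) \
   .filter(F.col("status") == "COMPLETED") \
   .createOrReplaceTempView("orders")

daily_summary = spark.sql("""
    SELECT order_date, customer_id,
           SUM(amount) AS total_amount,
           COUNT(*)    AS order_count
    FROM orders
    GROUP BY order_date, customer_id
""")

# Load: write partitioned Parquet to the curated zone for Redshift/Athena consumers
daily_summary.write.mode("overwrite").partitionBy("order_date") \
    .parquet("s3://example-curated-zone/orders_daily_summary/")
```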

Posted 1 week ago

Apply

0.0 - 10.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

Indeed logo

Noida, Uttar Pradesh, India;Indore, Madhya Pradesh, India;Bangalore, Karnataka, India;Hyderabad, Telangana, India;Gurgaon, Haryana, India Qualification : Required: Proven hands-on experience in designing, developing and supporting Database projects for analysis in a demanding environment. Proficient in database design techniques – relational and dimensional designs Experience and a strong understanding of business analysis techniques used. High proficiency in the use of SQL or MDX queries. Ability to manage multiple maintenance, enhancement and project related tasks. Ability to work independently on multiple assignments and to work collaboratively within a team is required. Strong communication skills with both internal team members and external business stakeholders Added Advantage: Hadoop ecosystem or AWS, Azure or GCP cluster and processing experience. Experience working on Hive or Spark SQL or Redshift or Snowflake will be an added advantage. Experience of working on Linux systems Experience with Tableau or MicroStrategy or Power BI or any BI tools will be an added advantage. Expertise in programming in Python, Java or Shell Script would be a plus Role : Roles & Responsibilities Be the front-end person of the world’s most scalable OLAP product company – Kyvos Insights. Interact with the senior-most technical and business people of large enterprises to understand their big data strategy and their problem statements in that area. Create, present, align customers with, and implement solutions around Kyvos products for the most challenging enterprise BI/DW problems. Be the go-to person for customers regarding technical issues during the project. Be instrumental in reading the pulse of the big data market and defining the roadmap of the product. Lead a few small but highly efficient teams of Big Data engineers Efficient task status reporting to stakeholders and customers. Good verbal & written communication skills Be willing to work off hours to meet timelines. Be willing to travel or relocate as per project requirements Experience : 5 to 10 years Job Reference Number : 11078

Posted 1 week ago

Apply

0.0 - 12.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

Indeed logo

Noida, Uttar Pradesh, India;Bangalore, Karnataka, India;Gurugram, Haryana, India;Hyderabad, Telangana, India;Indore, Madhya Pradesh, India;Pune, Maharashtra, India Qualification : Do you love to work on bleeding-edge Big Data technologies, do you want to work with the best minds in the industry, and create high-performance scalable solutions? Do you want to be part of the team that is solutioning next-gen data platforms? Then this is the place for you. You want to architect and deliver solutions involving data engineering on a petabyte scale of data that solve complex business problems. Impetus is looking for a Big Data Developer who loves solving complex problems and can architect and deliver scalable solutions across a full spectrum of technologies. Experience in providing technical leadership in the Big Data space (Hadoop stack like Spark, M/R, HDFS, Hive, etc.) Should be able to communicate with the customer on both functional and technical aspects Expert-level proficiency in Python/PySpark Hands-on experience with Shell/Bash scripting (creating and modifying script files) Control-M, AutoSys, or any job scheduler experience Experience in visualizing and evangelizing next-generation infrastructure in the Big Data space (Batch, Near Real-time, Real-time technologies). Should be able to guide the team on any functional and technical issues Strong technical development experience in effectively writing code, code reviews, and best-practice code refactoring. Passionate about continuous learning, experimenting with, and contributing towards cutting-edge open-source technologies and software paradigms Good communication, problem-solving & interpersonal skills. Self-starter & resourceful personality with the ability to manage pressure situations. Capable of providing the design and architecture for typical business problems. Exposure and awareness of complete PDLC/SDLC. Out-of-the-box thinker and not just limited to the work done in the projects. Must Have Experience with AWS (EMR, Glue, S3, RDS, Redshift) Cloud Certification Skills Required : AWS, Pyspark, Spark Role : Evaluate and recommend the Big Data technology stack best suited for customer needs. Design/ Architect/ Implement various solutions arising out of high-concurrency systems Responsible for timely and quality deliveries Anticipate technological evolutions Ensure the technical directions and choices. Develop efficient ETL pipelines through Spark or Hive. Drive significant technology initiatives end to end and across multiple layers of architecture Provide strong technical leadership in adopting and contributing to open-source technologies related to Big Data across multiple engagements Design/architect complex, highly available, distributed, failsafe compute systems dealing with a considerable amount (GB/TB) of data Identify and work on incorporating non-functional requirements into the solution (performance, scalability, monitoring etc.) Experience : 8 to 12 years Job Reference Number : 12400

Posted 1 week ago

Apply

0.0 - 8.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Indeed logo

Tesco India • Bengaluru, Karnataka, India • Hybrid • Full-Time • Permanent • Apply by 30-Jun-2025 About the role The Data Analyst in the GRP team will be responsible for analysing complex datasets and making them consumable using visual storytelling and visualization tools such as reports and dashboards built using approved tools (Tableau, Microstrategy, PyDash). The ideal candidate will have a strong analytical mindset, excellent communication skills, and a deep understanding of reporting tools, both front end and back end. What is in it for you At Tesco, we are committed to providing the best for you. As a result, our colleagues enjoy a unique, differentiated, market-competitive reward package, based on the current industry practices, for all the work they put into serving our customers, communities and planet a little better every day. Our Tesco Rewards framework consists of three pillars - Fixed Pay, Incentives, and Benefits. Total Rewards offered at Tesco is determined by four principles - simple, fair, competitive, and sustainable. Salary - Your fixed pay is the guaranteed pay as per your contract of employment. Performance Bonus - Opportunity to earn additional compensation bonus based on performance, paid annually Leave & Time-off - Colleagues are entitled to 30 days of leave (18 days of Earned Leave, 12 days of Casual/Sick Leave) and 10 national and festival holidays, as per the company’s policy. Making Retirement Tension-Free - In addition to statutory retirement benefits, Tesco enables colleagues to participate in voluntary programmes like NPS and VPF. Health is Wealth - Tesco promotes programmes that support a culture of health and wellness including insurance for colleagues and their family. Our medical insurance provides coverage for dependents including parents or in-laws. Mental Wellbeing - We offer mental health support through self-help tools, community groups, ally networks, face-to-face counselling, and more for both colleagues and dependents. Financial Wellbeing - Through our financial literacy partner, we offer one-to-one financial coaching at discounted rates, as well as salary advances on earned wages upon request. Save As You Earn (SAYE) - Our SAYE programme allows colleagues to transition from being employees to Tesco shareholders through a structured 3-year savings plan. Physical Wellbeing - Our green campus promotes physical wellbeing with facilities that include a cricket pitch, football field, badminton and volleyball courts, along with indoor games, encouraging a healthier lifestyle. 
You will be responsible for Driving data analysis for testing key business hypotheses and asks, developing complex visualizations, self-service tools and cockpits for answering recurring business asks and measurements Experience in handling quick-turnaround business requests, managing stakeholder communication and solving business asks holistically, going beyond the basic stakeholder asks Ability to select the right tools and techniques for solving the problem in hand Ensuring analysis, tools/dashboards are developed with the right technical rigor, meeting Tesco technical standards Applied experience in handling large data systems and datasets Extensive experience in handling high-volume, time-pressured business asks and ad-hoc requests Ability to develop production-ready visualization solutions and automated reports Contribute to development of knowledge assets and reusable modules on GitHub/Wiki Come up with new ideas and analysis to support business priorities and solve business problems You will need 5-8 years of experience as a Data Analyst, with experience working in domains like retail and CPG, and for one of the following functional areas – Finance, marketing, supply chain, customer, merchandising preferred Proven track record of handling ad-hoc analysis, developing dashboards and visualizations based on business asks. Strong usage of business understanding for analysis asks. Exposure to analysis work within the Retail domain; Space, Range, Merchandising, Store Ops, Forecasting, Customer Insights, Digital, Marketing will be preferred Expert skills to analyze large datasets using Adv Excel, Adv SQL, Hive, Python, Expert skills to develop visualizations, self-service dashboards and reports using Tableau & Power BI, Statistical concepts (Correlation Analysis and Hypothesis Testing), Strong DW concepts (Hadoop, Teradata), Excellent analytical and problem-solving skills. Should be comfortable dealing with variability Strong communication and interpersonal skills. About us Tesco in Bengaluru is a multi-disciplinary team serving our customers, communities, and planet a little better every day across markets. Our goal is to create a sustainable competitive advantage for Tesco by standardising processes, delivering cost savings, enabling agility through technological solutions, and empowering our colleagues to do even more for our customers. With cross-functional expertise, a wide network of teams, and strong governance, we reduce complexity, thereby offering high-quality services for our customers. Tesco in Bengaluru, established in 2004 to enable standardisation and build centralised capabilities and competencies, makes the experience better for our millions of customers worldwide and simpler for over 3,30,000 colleagues. Tesco Business Solutions: Established in 2017, Tesco Business Solutions (TBS) has evolved from a single-entity traditional shared services in Bengaluru, India (from 2004) to a global, purpose-driven, solutions-focused organisation. TBS is committed to driving scale at speed and delivering value to the Tesco Group through the power of decision science. With over 4,400 highly skilled colleagues globally, TBS supports markets and business units across four locations in the UK, India, Hungary, and the Republic of Ireland. The organisation underpins everything that the Tesco Group does, bringing innovation, a solutions mindset, and agility to its operations and support functions, building winning partnerships across the business. 
TBS's focus is on adding value and creating impactful outcomes that shape the future of the business. TBS creates a sustainable competitive advantage for the Tesco Group by becoming the partner of choice for talent, transformation, and value creation

Posted 1 week ago

Apply

0.0 - 7.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Indeed logo

Bangalore, Karnataka, India;Gurgaon, Haryana, India;Indore, Madhya Pradesh, India Qualification : Job Title: Java + Bigdata Engineer Company Name: Impetus Technologies Job Description: Impetus Technologies is seeking a skilled Java + Bigdata Engineer to join our dynamic team. The ideal candidate will possess strong expertise in Java programming and have hands-on experience with Bigdata technologies. Responsibilities: Design, develop, and maintain robust big data applications using Java and related technologies. Collaborate with cross-functional teams to gather requirements and translate them into technical specifications. Optimize application performance and scalability to handle large data sets effectively. Implement data processing solutions using frameworks such as Apache Hadoop, Apache Spark, or similar tools. Participate in code reviews, debugging, and troubleshooting of applications to ensure high-quality code standards. Stay updated with the latest trends and advancements in big data technologies and Java developments. Qualifications: Bachelor's degree in Computer Science, Engineering, or a related field. Strong proficiency in Java programming and experience with object-oriented design principles. Hands-on experience with big data technologies such as Hadoop, Spark, Kafka, or similar frameworks. Familiarity with cloud platforms and data storage solutions (AWS, Azure, etc.). Excellent problem-solving skills and a proactive approach to resolving technical challenges. Strong communication and interpersonal skills, with the ability to work collaboratively in a team-oriented environment. At Impetus Technologies, we value innovation and encourage our employees to push boundaries. If you are a passionate Java + Bigdata Engineer looking to take your career to the next level, we invite you to apply and be part of our growing team. Skills Required : Java, spark, pyspark, Hive, microservices Role : Job Title: Java + Bigdata Engineer Company Name: Impetus Technologies Roles and Responsibilities: Design, develop, and maintain scalable applications using Java and Big Data technologies. Collaborate with cross-functional teams to gather requirements and understand project specifications. Implement data processing and analytics solutions leveraging frameworks such as Apache Hadoop, Apache Spark, and others. Optimize application performance and ensure data integrity throughout the data lifecycle. Conduct code reviews and implement best practices to enhance code quality and maintainability. Troubleshoot and resolve issues related to application performance and data processing. Develop and maintain technical documentation related to application architecture, design, and deployment. Stay updated with industry trends and emerging technologies in Java and Big Data ecosystems. Participate in Agile development processes including sprint planning, backlog grooming, and daily stand-ups. Mentor junior engineers and provide technical guidance to ensure successful project delivery. Experience : 4 to 7 years Job Reference Number : 13044

Posted 1 week ago

Apply

0.0 - 18.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Indeed logo

Bengaluru, Karnataka, India;Indore, Madhya Pradesh, India;Pune, Maharashtra, India;Hyderabad, Telangana, India Qualification : Overall 10-18 yrs. of Data Engineering experience with a minimum of 4+ years of hands-on experience in Databricks. Ready to travel onsite and work at client locations. Proven hands-on experience as a Databricks Architect or similar role with a deep understanding of the Databricks platform and its capabilities. Analyze business requirements and translate them into technical specifications for data pipelines, data lakes, and analytical processes on the Databricks platform. Design and architect end-to-end data solutions, including data ingestion, storage, transformation, and presentation layers, to meet business needs and performance requirements. Lead the setup, configuration, and optimization of Databricks clusters, workspaces, and jobs to ensure the platform operates efficiently and meets performance benchmarks. Manage access controls and security configurations to ensure data privacy and compliance. Design and implement data integration processes, ETL workflows, and data pipelines to extract, transform, and load data from various sources into the Databricks platform. Optimize ETL processes to achieve high data quality and reduce latency. Monitor and optimize query performance and overall platform performance to ensure efficient execution of analytical queries and data processing jobs. Identify and resolve performance bottlenecks in the Databricks environment. Establish and enforce best practices, standards, and guidelines for Databricks development, ensuring data quality, consistency, and maintainability. Implement data governance and data lineage processes to ensure data accuracy and traceability. Mentor and train team members on Databricks best practices, features, and capabilities. Conduct knowledge-sharing sessions and workshops to foster a data-driven culture within the organization. Will be responsible for Databricks Practice technical/partnership initiatives. Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects. Skills Required : Databricks, Unity Catalog, Pyspark, ETL, SQL, Delta Live Tables Role : Bachelor's or Master’s degree in Computer Science, Information Technology, or a related field. In-depth hands-on implementation knowledge of Databricks. Delta Lake, Delta tables - managing Delta tables, Databricks cluster configuration, cluster policies. Experience handling structured and unstructured datasets Strong proficiency in programming languages like Python, Scala, or SQL. Experience with cloud platforms like AWS, Azure, or Google Cloud, and understanding of cloud-based data storage and computing services. Familiarity with big data technologies like Apache Spark, Hadoop, and data lake architectures. Develop and maintain data pipelines, ETL workflows, and analytical processes on the Databricks platform. Should have good experience in Data Engineering in Databricks batch processing and streaming Should have good experience in creating workflows & scheduling the pipelines. Should have good exposure to how to make packages or libraries available in Databricks. Familiarity with Databricks default runtimes Databricks Certified Data Engineer Associate/Professional Certification (desirable). Should have experience working in Agile methodology Strong verbal and written communication skills. Strong analytical and problem-solving skills with a high attention to detail. 
Experience : 10 to 18 years Job Reference Number : 12932
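As an aside for readers unfamiliar with the Delta Lake work this listing centres on, here is a minimal hedged sketch (hypothetical paths and key names, not from the posting) of an incremental upsert into a Delta table, a routine task for a Databricks architect.

```python
# Illustrative sketch only: an incremental upsert (MERGE) into a Delta table,
# the kind of task referenced in this listing. Paths and key names are hypothetical.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-upsert-example").getOrCreate()

# Incremental batch of changed records arriving from an upstream source
updates = spark.read.parquet("s3://example-landing/customers_changes/")

target_path = "/mnt/curated/customers"
if not DeltaTable.isDeltaTable(spark, target_path):
    # First run: create the Delta table
    updates.write.format("delta").save(target_path)
else:
    # Subsequent runs: merge changes into the existing table by primary key
    target = DeltaTable.forPath(spark, target_path)
    (target.alias("t")
           .merge(updates.alias("s"), "t.customer_id = s.customer_id")
           .whenMatchedUpdateAll()
           .whenNotMatchedInsertAll()
           .execute())
```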

Posted 1 week ago

Apply

10.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Indeed logo

Location: Bangalore - Karnataka, India - EOIZ Industrial Area Worker Type Reference: Regular - Permanent Pay Rate Type: Salary Career Level: T4(A) Job ID: R-45392-2025 Description & Requirements Introduction: A Career at HARMAN HARMAN Technology Services (HTS) We’re a global, multi-disciplinary team that’s putting the innovative power of technology to work and transforming tomorrow. At HARMAN DTS, you solve challenges by creating innovative solutions. Combine the physical and digital, making technology a more dynamic force to solve challenges and serve humanity’s needs Work at the convergence of cross-channel UX, cloud, insightful data, IoT and mobility Empower companies to create new digital business models, enter new markets, and improve customer experiences About the Role We are seeking an experienced “Azure Data Architect” who will develop and implement data engineering projects, including an enterprise data hub, data lakehouse, or Big Data platform. What You Will Do Create data pipelines for more efficient and repeatable data science projects Design and implement data architecture solutions that support business requirements and meet organizational needs Collaborate with stakeholders to identify data requirements and develop data models and data flow diagrams Work with cross-functional teams to ensure that data is integrated, transformed, and loaded effectively across different platforms and systems Develop and implement data governance policies and procedures to ensure that data is managed securely and efficiently Develop and maintain a deep understanding of data platforms, technologies, and tools, and evaluate new technologies and solutions to improve data management processes Ensure compliance with regulatory and industry standards for data management and security. Develop and maintain data models, data warehouses, data lakes and data marts to support data analysis and reporting. Ensure data quality, accuracy, and consistency across all data sources. Knowledge of ETL and data integration tools such as Informatica, Qlik, Talend, and Apache NiFi. Experience with data modeling and design tools such as ERwin, PowerDesigner, or ER/Studio Knowledge of data governance, data quality, and data security best practices Experience with cloud computing platforms such as AWS, Azure, or Google Cloud Platform. Familiarity with programming languages such as Python, Java, or Scala. Experience with data visualization tools such as Tableau, Power BI, or QlikView. Understanding of analytics and machine learning concepts and tools. Knowledge of project management methodologies and tools to manage and deliver complex data projects. Skilled in using relational database technologies such as MySQL, PostgreSQL, and Oracle, as well as NoSQL databases such as MongoDB and Cassandra. Strong expertise in cloud-based data services such as AWS S3, AWS Glue, AWS Redshift, and the Iceberg/Parquet file formats Knowledge of big data technologies such as Hadoop, Spark, Snowflake, Databricks, and Kafka to process and analyze large volumes of data. Proficient in data integration techniques to combine data from various sources into a centralized location. Strong data modeling, data warehousing, and data integration skills. 
What You Need 10+ years of experience in the information technology industry with a strong focus on data engineering and architecture, preferably as a data engineering lead 8+ years of data engineering or data architecture experience in successfully launching, planning, and executing advanced data projects. Experience in working on RFPs/proposals, presales activities, business development and overseeing delivery of data projects is highly desired A master’s or bachelor’s degree in computer science, data science, information systems, operations research, statistics, applied mathematics, economics, engineering, or physics. Candidate should have demonstrated the ability to manage data projects and diverse teams. Should have experience in creating data and analytics solutions. Experience in building solutions with data solutions in any one or more domains – Industrial, Healthcare, Retail, Communication Problem-solving, communication, and collaboration skills. Good knowledge of data visualization and reporting tools Ability to normalize and standardize data as per key KPIs and metrics Develop and implement data engineering projects including a data lakehouse or Big Data platform What is Nice to Have Knowledge of Azure Purview is a must Knowledge of Azure Data Fabric Ability to define reference data architecture Snowflake certified in SnowPro Advanced Certification Cloud-native data platform experience in the AWS or Microsoft stack Knowledge about the latest data trends including data fabric and data mesh Robust knowledge of ETL and data transformation and data standardization approaches Key contributor on growth of the COE and influencing client revenues through Data and analytics solutions Lead the selection, deployment, and management of Data tools, platforms, and infrastructure. Ability to technically guide a team of data engineers Oversee the design, development, and deployment of Data solutions Define, differentiate & strategize new Data services/offerings and create reference architecture assets Drive partnerships with vendors on collaboration, capability building, go-to-market strategies, etc. Guide and inspire the organization about the business potential and opportunities around Data Network with domain experts Collaborate with client teams to understand their business challenges and needs. Develop and propose Data solutions tailored to client-specific requirements. Influence client revenues through innovative solutions and thought leadership. Lead client engagements from project initiation to deployment. Build and maintain strong relationships with key clients and stakeholders Build re-usable Methodologies, Pipelines & Models What Makes You Eligible Build and manage a high-performing team of Data engineers and other specialists. Foster a culture of innovation and collaboration within the Data team and across the organization. Demonstrate the ability to work in diverse, cross-functional teams in a dynamic business environment. Candidates should be confident, energetic self-starters, with strong communication skills. Candidates should exhibit superior presentation skills and the ability to present compelling solutions which guide and inspire. 
Provide technical guidance and mentorship to the Data team Collaborate with other stakeholders across the company to align the vision and goals Communicate and present the Data capabilities and achievements to clients and partners Stay updated on the latest trends and developments in the Data domain What We Offer Access to employee discounts on world class HARMAN/Samsung products (JBL, Harman Kardon, AKG etc.). Professional development opportunities through HARMAN University’s business and leadership academies. An inclusive and diverse work environment that fosters and encourages professional and personal development. “Be Brilliant” employee recognition and rewards program. You Belong Here HARMAN is committed to making every employee feel welcomed, valued, and empowered. No matter what role you play, we encourage you to share your ideas, voice your distinct perspective, and bring your whole self with you – all within a support-minded culture that celebrates what makes each of us unique. We also recognize that learning is a lifelong pursuit and want you to flourish. We proudly offer added opportunities for training, development, and continuing education, further empowering you to live the career you want. About HARMAN: Where Innovation Unleashes Next-Level Technology Ever since the 1920s, we’ve been amplifying the sense of sound. Today, that legacy endures, with integrated technology platforms that make the world smarter, safer, and more connected. Across automotive, lifestyle, and digital transformation solutions, we create innovative technologies that turn ordinary moments into extraordinary experiences. Our renowned automotive and lifestyle solutions can be found everywhere, from the music we play in our cars and homes to venues that feature today’s most sought-after performers, while our digital transformation solutions serve humanity by addressing the world’s ever-evolving needs and demands. Marketing our award-winning portfolio under 16 iconic brands, such as JBL, Mark Levinson, and Revel, we set ourselves apart by exceeding the highest engineering and design standards for our customers, our partners and each other. If you’re ready to innovate and do work that makes a lasting impact, join our talent community today! Important Notice: Recruitment Scams Please be aware that HARMAN recruiters will always communicate with you from an '@harman.com' email address. We will never ask for payments, banking, credit card, personal financial information or access to your LinkedIn/email account during the screening, interview, or recruitment process. If you are asked for such information or receive communication from an email address not ending in '@harman.com' about a job with HARMAN, please cease communication immediately and report the incident to us through: harmancareers@harman.com. HARMAN is proud to be an Equal Opportunity / Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, religion, color, national origin, gender (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics.

Posted 1 week ago

Apply

7.0 years

0 Lacs

Greater Chennai Area

Remote

Linkedin logo

Do you want to make a global impact on patient health? Join Pfizer Digital’s Artificial Intelligence, Data, and Advanced Analytics organization (AIDA) to leverage cutting-edge technology for critical business decisions and enhance customer experiences for colleagues, patients, and physicians. Our team is at the forefront of Pfizer’s transformation into a digitally driven organization, using data science and AI to change patients’ lives. The Data Science Industrialization team leads engineering efforts to advance AI and data science applications from POCs and prototypes to full production. As a Senior Manager, AI and Analytics Data Engineer, you will be part of a global team responsible for designing, developing, and implementing robust data layers that support data scientists and key advanced analytics/AI/ML business solutions. You will partner with cross-functional data scientists and Digital leaders to ensure efficient and reliable data flow across the organization. You will lead development of data solutions to support our data science community and drive data-centric decision-making. Join our diverse team in making an impact on patient health through the application of cutting-edge technology and collaboration. Role Responsibilities Lead development of data engineering processes to support data scientists and analytics/AI solutions, ensuring data quality, reliability, and efficiency As a data engineering tech lead, enforce best practices, standards, and documentation to ensure consistency and scalability, and facilitate related training Provide strategic and technical input on the AI ecosystem including platform evolution, vendor scan, and new capability development Act as a subject matter expert for data engineering on cross-functional teams in bespoke organizational initiatives by providing thought leadership and execution support for data engineering needs Train and guide junior developers on concepts such as data modeling, database architecture, data pipeline management, data ops and automation, tools, and best practices Stay updated with the latest advancements in data engineering technologies and tools and evaluate their applicability for improving our data engineering capabilities Direct data engineering research to advance design and development capabilities Collaborate with stakeholders to understand data requirements and address them with data solutions Partner with the AIDA Data and Platforms teams to enforce best practices for data engineering and data solutions Demonstrate a proactive approach to identifying and resolving potential system issues. Communicate the value of reusable data components to end-user functions (e.g., Commercial, Research and Development, and Global Supply) and promote innovative, scalable data engineering approaches to accelerate data science and AI work Basic Qualifications Bachelor's degree in computer science, information technology, software engineering, or a related field (Data Science, Computer Engineering, Computer Science, Information Systems, Engineering, or a related discipline). 7+ years of hands-on experience working with SQL, Python, and object-oriented languages (e.g., Java, C++) in building data pipelines and processes. Proficiency in SQL programming, including the ability to create and debug stored procedures, functions, and views. 
Recognized by peers as an expert in data engineering with deep expertise in data modeling, data governance, and data pipeline management principles In-depth knowledge of modern data engineering frameworks and tools such as Snowflake, Redshift, Spark, Airflow, Hadoop, Kafka, and related technologies Experience working in a cloud-based analytics ecosystem (AWS, Snowflake, etc.) Familiarity with machine learning and AI technologies and their integration with data engineering pipelines Demonstrated experience interfacing with internal and external teams to develop innovative data solutions Strong understanding of the Software Development Life Cycle (SDLC) and data science development lifecycle (CRISP) Highly self-motivated to deliver both independently and with strong team collaboration Ability to creatively take on new challenges and work outside your comfort zone. Strong English communication skills (written & verbal) Preferred Qualifications Advanced degree in Data Science, Computer Engineering, Computer Science, Information Systems, or a related discipline (preferred, but not required) Experience in software/product engineering Experience with data science enabling technology, such as Dataiku Data Science Studio, AWS SageMaker or other data science platforms Familiarity with containerization technologies like Docker and orchestration platforms like Kubernetes. Experience working effectively in a distributed remote team environment Hands-on experience working in Agile teams, processes, and practices Expertise in cloud platforms such as AWS, Azure or GCP. Proficiency in using version control systems like Git. Pharma & Life Science commercial functional knowledge Pharma & Life Science commercial data literacy Ability to work non-traditional work hours interacting with global teams spanning across different regions (e.g., North America, Europe, Asia) Pfizer is an equal opportunity employer and complies with all applicable equal employment opportunity legislation in each jurisdiction in which it operates. Information & Business Tech
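For illustration only, here is a minimal Airflow DAG of the kind of pipeline orchestration named in the qualifications above; the task bodies, IDs, and schedule are hypothetical placeholders and not Pfizer code.

```python
# Illustrative sketch only: a minimal Airflow DAG of the kind of orchestration
# referenced above. Task logic and schedule are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("extracting source data")      # placeholder for a source-system pull

def transform():
    print("transforming data")           # placeholder for business transformations

def load():
    print("loading curated data")        # placeholder for publishing curated output

with DAG(
    dag_id="example_analytics_feed",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Run extract, then transform, then load
    t_extract >> t_transform >> t_load
```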

Posted 1 week ago

Apply

4.0 - 9.0 years

12 - 22 Lacs

Gurugram

Work from Office

Naukri logo

To Apply - Submit Details via Google Form - https://forms.gle/8SUxUV2cikzjvKzD9 As a Senior Consultant in our Consulting team, you'll build and nurture positive working relationships with teams and clients with the intention to exceed client expectations. Seeking experienced AWS Data Engineers to design, implement, and maintain robust data pipelines and analytics solutions using AWS services. The ideal candidate will have a strong background in AWS data services, big data technologies, and programming languages. Role & responsibilities 1. Design and implement scalable, high-performance data pipelines using AWS services 2. Develop and optimize ETL processes using AWS Glue, EMR, and Lambda 3. Build and maintain data lakes using S3 and Delta Lake 4. Create and manage analytics solutions using Amazon Athena and Redshift 5. Design and implement database solutions using Aurora, RDS, and DynamoDB 6. Develop serverless workflows using AWS Step Functions 7. Write efficient and maintainable code using Python/PySpark and SQL/PostgreSQL 8. Ensure data quality, security, and compliance with industry standards 9. Collaborate with data scientists and analysts to support their data needs 10. Optimize data architecture for performance and cost-efficiency 11. Troubleshoot and resolve data pipeline and infrastructure issues Preferred candidate profile 1. Bachelor's degree in Computer Science, Information Technology, or a related field 2. Relevant years of experience as a Data Engineer, with at least 60% of experience focusing on AWS 3. Strong proficiency in AWS data services: Glue, EMR, Lambda, Athena, Redshift, S3 4. Experience with data lake technologies, particularly Delta Lake 5. Expertise in database systems: Aurora, RDS, DynamoDB, PostgreSQL 6. Proficiency in Python and PySpark programming 7. Strong SQL skills and experience with PostgreSQL 8. Experience with AWS Step Functions for workflow orchestration Technical Skills: - AWS Services: Glue, EMR, Lambda, Athena, Redshift, S3, Aurora, RDS, DynamoDB, Step Functions - Big Data: Hadoop, Spark, Delta Lake - Programming: Python, PySpark - Databases: SQL, PostgreSQL, NoSQL - Data Warehousing and Analytics - ETL/ELT processes - Data Lake architectures - Version control: Git - Agile methodologies
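For context, a skeleton AWS Glue (PySpark) job of the kind this listing describes is sketched below; the catalog database, table, and bucket names are hypothetical and not part of the posting.

```python
# Illustrative sketch only: a skeleton AWS Glue (PySpark) job of the kind this
# listing describes. Database, table, and bucket names are hypothetical.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])

glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a table registered in the Glue Data Catalog
source = glue_context.create_dynamic_frame.from_catalog(
    database="example_raw_db", table_name="orders"
)

# Convert to a Spark DataFrame, de-duplicate, and write Parquet to the curated zone
df = source.toDF().dropDuplicates(["order_id"])
df.write.mode("overwrite").parquet("s3://example-curated-bucket/orders/")

job.commit()
```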

Posted 1 week ago

Apply

4.0 - 9.0 years

12 - 22 Lacs

Gurugram, Bengaluru

Work from Office

Naukri logo

To Apply - Submit Details via Google Form - https://forms.gle/8SUxUV2cikzjvKzD9 As a Senior Consultant in our Consulting team, you'll build and nurture positive working relationships with teams and clients with the intention to exceed client expectations. Seeking experienced AWS Data Engineers to design, implement, and maintain robust data pipelines and analytics solutions using AWS services. The ideal candidate will have a strong background in AWS data services, big data technologies, and programming languages. Role & responsibilities 1. Design and implement scalable, high-performance data pipelines using AWS services 2. Develop and optimize ETL processes using AWS Glue, EMR, and Lambda 3. Build and maintain data lakes using S3 and Delta Lake 4. Create and manage analytics solutions using Amazon Athena and Redshift 5. Design and implement database solutions using Aurora, RDS, and DynamoDB 6. Develop serverless workflows using AWS Step Functions 7. Write efficient and maintainable code using Python/PySpark and SQL/PostgreSQL 8. Ensure data quality, security, and compliance with industry standards 9. Collaborate with data scientists and analysts to support their data needs 10. Optimize data architecture for performance and cost-efficiency 11. Troubleshoot and resolve data pipeline and infrastructure issues Preferred candidate profile 1. Bachelor's degree in Computer Science, Information Technology, or a related field 2. Relevant years of experience as a Data Engineer, with at least 60% of experience focusing on AWS 3. Strong proficiency in AWS data services: Glue, EMR, Lambda, Athena, Redshift, S3 4. Experience with data lake technologies, particularly Delta Lake 5. Expertise in database systems: Aurora, RDS, DynamoDB, PostgreSQL 6. Proficiency in Python and PySpark programming 7. Strong SQL skills and experience with PostgreSQL 8. Experience with AWS Step Functions for workflow orchestration Technical Skills: - AWS Services: Glue, EMR, Lambda, Athena, Redshift, S3, Aurora, RDS, DynamoDB, Step Functions - Big Data: Hadoop, Spark, Delta Lake - Programming: Python, PySpark - Databases: SQL, PostgreSQL, NoSQL - Data Warehousing and Analytics - ETL/ELT processes - Data Lake architectures - Version control: Git - Agile methodologies

Posted 1 week ago

Apply

0 years

0 Lacs

India

Remote

Linkedin logo

CryptoChakra is a premier cryptocurrency analytics and education platform dedicated to demystifying digital asset markets for global audiences. By leveraging cutting-edge AI algorithms and blockchain analytics, we empower traders, investors, and enthusiasts with actionable insights and predictive market intelligence. Our platform features curated educational resources, risk assessment frameworks, and real-time market analysis tools designed to elevate financial literacy and strategic decision-making. Committed to universal crypto accessibility, we combine innovative technology with user-centric learning experiences to bridge the knowledge gap between novice and expert users. As a remote-first organization, we prioritize collaboration, agility, and technological excellence to drive the future of decentralized finance. Position: Software Developer Intern | Remote | Full-Time Internship | Paid/Unpaid based on suitability Role Summary Join CryptoChakra’s dynamic engineering team to contribute to the development of our industry-leading cryptocurrency analytics platform. In this role, you will design, develop, and optimize scalable solutions for predictive modeling, sentiment analysis, and blockchain data processing. Key responsibilities include: Collaborating with cross-functional teams to enhance AI-driven predictive models and analytics tools Debugging, testing, and maintaining core platform features, including smart contract analytics and DeFi integrations Implementing data visualization modules using tools like Tableau and Python libraries Participating in agile workflows, including sprint planning and code reviews Researching emerging trends in blockchain technology and decentralized systems This internship offers hands-on experience in a fast-paced environment, mentorship from industry experts, and opportunities to impact real-world financial technologies. Qualifications Technical Skills Proficiency in Python, R, and scripting languages for data analysis and application development Experience with machine learning frameworks (TensorFlow, PyTorch) and NLP techniques Strong understanding of SQL/NoSQL databases and data pipeline optimization Familiarity with blockchain fundamentals, smart contracts, and tools like Etherscan or Blockchain.com Knowledge of cloud platforms (AWS, GCP) and version control systems (Git) Professional Competencies Analytical problem-solving skills with a focus on debugging and system optimization Ability to thrive in remote teams, manage priorities, and meet deadlines Excellent written and verbal communication for technical documentation and collaboration Curiosity about decentralized finance (DeFi) protocols, tokenomics, and blockchain consensus mechanisms Preferred Qualifications Academic or project experience in smart contract development (Solidity) Exposure to big data tools (Apache Spark, Hadoop) or distributed systems Academic Background Currently pursuing a Bachelor’s/Master’s degree in Computer Science, Data Science, Electrical Engineering, or a related technical discipline.

Posted 1 week ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

About Us JOB DESCRIPTION SBI Card is a leading pure-play credit card issuer in India, offering a wide range of credit cards to cater to diverse customer needs. We are constantly innovating to meet the evolving financial needs of our customers, empowering them with digital currency for a seamless payment experience and rewarding benefits. At SBI Card, the motto 'Make Life Simple' inspires every initiative, ensuring that customer convenience is at the forefront of all that we do. We are committed to building an environment where people can thrive and create a better future for everyone. SBI Card is proud to be an equal opportunity & inclusive employer and welcomes employees without any discrimination on the grounds of race, colour, gender, religion, creed, disability, sexual orientation, gender identity, marital status, caste etc. SBI Card is committed to fostering an inclusive and diverse workplace where all employees are treated equally with dignity and respect, which makes it a promising place to work. Join us to shape the future of digital payment in India and unlock your full potential. What’s In It For YOU SBI Card truly lives by the work-life balance philosophy. We offer a robust wellness and wellbeing program to support mental and physical health of our employees Admirable work deserves to be rewarded. We have a well-curated bouquet of rewards and recognition programs for the employees Dynamic, Inclusive and Diverse team culture Gender Neutral Policy Inclusive Health Benefits for all - Medical Insurance, Personal Accidental, Group Term Life Insurance and Annual Health Checkup, Dental and OPD benefits Commitment to the overall development of an employee through a comprehensive learning & development framework Role Purpose The Program Delivery Leader will be responsible for managing data initiatives/programs of the data function. Role Accountability Defining standards and best practices for data analysis, modelling and queries by adopting strong data governance practices to ensure data accuracy, consistency and reliability Work closely with business teams and provide data requirements that may arise from new initiatives which may be required for analytics & reporting Lead design, development and implementation of D&A solutions through data-driven analysis for achieving business goals and objectives by liaising between D&A and business teams Work with business teams and the Data Lake technology team and lead the programs and data initiatives arising from new needs from business, audits, or regulatory requirements Actively participate in new product initiatives and provide data requirements to be implemented for NPIs, and ensure that the same are implemented for appropriate data insights and analytics The role is responsible for managing the data dictionary and ensuring it is always updated with the latest information, maintaining data quality and making data usable and accessible to all relevant stakeholders The role will also be responsible for supporting data audits and their closure and for implementation of compliance requirements for the data platform. Additionally, the role will help in managing budgets, policies & PMO for the BIU function Ensure technical support is provided to the Insights and Reporting team wherever required to meet the data extraction & analysis requirements The person is required to build a strong understanding of data processes across the card lifecycle, and how and where the data is stored across multiple layers of data platforms. 
Collaborate with the senior leadership team, function heads and the BIU Program Management team to understand their data needs and deliver the same through the implementation of data initiatives and projects As a People Manager of 10 team members, provide strategic direction, performance management and career development opportunities for team members by fostering a culture of data-driven decision making Measures of Success Deliver data projects on time and accurately to drive business decision making Maintain an up-to-date data dictionary Deliver on data extraction and other service tickets within SLA Technical Skills / Experience / Certifications Good knowledge of SAS, Python, SQL & ETL technologies, esp. in a Big Data environment Good working knowledge of BI tools like Tableau, Power BI, etc. Competencies critical to the role The person should have strong experience of leading teams, preferably in the BFSI segment Good knowledge of business processes & key business metrics to provide effective solutions The person should have good experience of data governance practices, and the related tools and processes required for maintaining a data dictionary & good data quality standards The person is required to lead cross-functional teams to execute data processing tasks and hence should be: Strong team player - inclusive, able to collaborate with multiple teams and drive them towards achieving a common goal Strong analytical skills – strong problem-solving skills, communicates in a clear and succinct manner and effectively evaluates information / data to make decisions; anticipates obstacles and develops plans to resolve Strong business acumen with the ability to understand and align with business goals Demonstrated customer focus – evaluates decisions through the eyes of the customer; builds strong relationships and creates processes which help with timely availability of data to all stakeholders Should have very good written and verbal communication skills Qualification Graduate or Postgraduate in Computer Science, Data Science, Statistics, Data Analytics or related fields from a good institute. Desired - Analytics certifications like Certified Analytics Professional (CAP), Google Data Analytics Professional, etc. Experience of working on a Data Lake using Hadoop or Databricks. Has used data tools like Collibra to achieve data quality Preferred Industry BFSI

Posted 1 week ago

Apply

0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Linkedin logo

Introduction In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology. Your Role And Responsibilities As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support/guidance to project teams on complex coding, issue resolution and execution. Your primary responsibilities include: Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements. Strive for continuous improvements by testing the built solution and working under an agile framework. Discover and implement the latest technology trends to maximize and build creative solutions Preferred Education Master's Degree Required Technical And Professional Expertise Experience with Apache Spark (PySpark): In-depth knowledge of Spark’s architecture, core APIs, and PySpark for distributed data processing. Big Data Technologies: Familiarity with Hadoop, HDFS, Kafka, and other big data tools. Data Engineering Skills: Strong understanding of ETL pipelines, data modelling, and data warehousing concepts. Strong proficiency in Python: Expertise in Python programming with a focus on data processing and manipulation. Data Processing Frameworks: Knowledge of data processing libraries such as Pandas, NumPy. SQL Proficiency: Experience writing optimized SQL queries for large-scale data analysis and transformation. Cloud Platforms: Experience working with cloud platforms like AWS, Azure, or GCP, including using cloud storage systems Preferred Technical And Professional Experience Define, drive, and implement an architecture strategy and standards for end-to-end monitoring. Partner with the rest of the technology teams including application development, enterprise architecture, testing services and network engineering. Good to have: detection and prevention tools for company products and platform, and customer-facing

Posted 1 week ago

Apply


2.0 - 4.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

About Us
Zupee is India's fastest growing innovator in real money gaming with a focus on skill-based games on mobile platforms. Started by 2 IIT-K alumni in 2018, we are backed by marquee global investors such as WestCap Group, Tomales Bay Capital, Matrix Partners, Falcon Edge, Orios Ventures & Smile Group, with an aspiration to become the most trusted and responsible entertainment company in the world. To know more about our recent funding coverage: https://bit.ly/3AHmSL3
Our focus has been on innovating in the board, strategy and casual games sub-genres. We innovate to ensure our games provide an intersection between skill and entertainment, enabling our users to earn while they play.

Role – Data Engineering
We are looking for someone to develop the next generation of our data platform, collaborating across functions like product, marketing, design, innovation/growth, strategy/scale, customer experience, data science & analytics, and technology.

Core Responsibilities
● Understand, implement and automate ETL and data pipelines with up-to-date industry standards
● Hands-on involvement in the design, development and implementation of optimal and scalable AWS services

What are we looking for?
● Experience in Python
● Experience in Big Data – Spark, Hadoop, Hive, HBase and Presto
● Experience in data warehousing
● Experience in building reliable and scalable ETL pipelines

Qualifications and Skills
● 2-4 years of professional experience in a data engineering profile
● BS or MS in Computer Science or a similar engineering stream
● Hands-on experience with data warehousing tools
● Knowledge of distributed systems such as Hadoop, Hive, Spark and Kafka
● Experience with AWS services (EC2, RDS, S3, Athena, Data Pipeline/Glue, Lambda, DynamoDB, etc.)
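As an illustration of the Spark-plus-Hive pipeline work this posting describes, here is a hedged sketch of a daily aggregation job over a Hive table written back to S3. The database, table, columns and bucket names are invented for the example only.

```python
# Sketch of a daily aggregation job over a Hive table, written back to S3.
# Table, columns and the S3 bucket are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder
         .appName("daily-gameplay-agg")
         .enableHiveSupport()
         .getOrCreate())

games = spark.table("analytics.gameplay_events")  # hypothetical Hive table

daily = (games
         .groupBy("game_id", F.to_date("played_at").alias("play_date"))
         .agg(F.countDistinct("user_id").alias("dau"),
              F.sum("entry_fee").alias("total_entry_fee")))

daily.write.mode("overwrite").partitionBy("play_date").parquet(
    "s3a://example-bucket/marts/daily_gameplay/")

spark.stop()
```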

Posted 1 week ago

Apply

1.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Linkedin logo

Adform focuses on the exploration of a vast amount of online advertising data, the development of predictive and segmentation models to optimize campaign strategy towards the most meaningful inventory, audience and ad, fraud detection, and much more. You will be a vital part of the data science effort, working in a cross-functional international Agile team with other data scientists, engineers, product managers, and leaders located in multiple locations around the world. If you are hands-on, passionate about developing advanced algorithms, machine learning, statistics, Hadoop, Python and SQL, have an ownership mindset, and love to work in teams with smart, informal and open people, we'd love to get to know you! Our focus is Big Data, high load, and the challenge of understanding and taking advantage of data in the Ad-Tech industry. We hope you are ready to change the game!

Be ready to:
Apply advanced machine learning/statistical algorithms, scalable to huge data sets, to: determine the most meaningful ad, served to the right user at the optimal time, and the best price; identify behaviours, interests and segments of web users across billions of transactions to find the most optimal audience for a given advertising activity; and eliminate suspicious/non-human traffic.
Maintain our products, support customers in analysing the reasons behind the decisions made by our algorithms, and look for new improvements in the process of striving for their excellence.
Work closely with other data scientists, development, and product teams to implement algorithms into production-level software.
Mentor less experienced colleagues.
Contribute to identifying opportunities for leveraging company data to drive business solutions.
Be an active technical challenger in the team for the purpose of mutual improvement and broadening of team and company horizons.
Design solutions and lead cross-functional technical projects from ideation to deployment.

Attitude first. Everything else will follow. We can grow together faster if you have:
Minimum 1 year of work experience in a similar senior position.
Minimum 6 years of experience in Data Science in total.
Masters or PhD in a quantitative field such as Machine Learning, Computer Science, Applied Mathematics, Statistics or related.
Excellent mathematical and statistical skills (statistical inference) and experience in working with large datasets.
Knowledge of data pipelines and ETL processes.
Very good knowledge of multiple supervised and unsupervised machine learning techniques, with a math background and hands-on experience.
Great problem-solving and analytical skills: the ability to structure a large business problem into tractable and reasonable components, and to design and deploy scalable machine learning solutions.
Proficiency in Python and SQL.
Experience with big data tools (e.g. Spark, Hadoop).
Adform's guiding principles ingrained in your profile: Focus on Client Value, Behave with Decency, Take Ownership, Care, Collaborate as a Team Player, Remain Ambitious, and Stand Tall in all interactions.

Stand out by having:
Knowledge of Ad Tech industry solutions.
Knowledge of web technologies.

Our promise to you:
A dynamic, inspiring, and international environment filled with ambitious and caring colleagues
Premium health insurance with ₹10,00,000 coverage for you and your family
24 paid vacation days to enjoy life outside of work
Paid maternity leave (6 months) and paternity leave (2 weeks)
Annual learning budget to help you grow your skills and support continuous learning
Rewarding referral program for helping us grow
Global perks such as birthday gifts, work anniversaries, and company events to connect with colleagues and have fun
And much more – join us to explore the full experience

Diversity & Inclusion @Adform:
Adform is an equal opportunity employer with a global perspective. We remain committed to creating an environment that not only respects different backgrounds but celebrates them too. We believe that diversity in all its forms enhances our teams' creativity, innovation, and effectiveness, and therefore we value different backgrounds, perspectives, and skills. We are committed to creating a work environment where Adform employees feel valued for who they are and what they can contribute, free from any type of discrimination.
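A toy illustration of the modelling work this role describes, in the spirit of click/interest prediction: a logistic regression on a sample extract, evaluated with AUC. The CSV path, feature names and target column are hypothetical, and a real Ad-Tech model would of course be trained at far larger scale (e.g. on Spark).

```python
# Toy sketch of a click-prediction model; file path and columns are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

df = pd.read_csv("impressions_sample.csv")  # assumed small sample extract
features = ["hour_of_day", "ad_width", "ad_height", "floor_price"]

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["clicked"], test_size=0.2, random_state=42)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Hold-out AUC: {auc:.3f}")
```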

Posted 1 week ago

Apply

4.0 years

0 Lacs

Greater Bengaluru Area

On-site

Linkedin logo

Job Title: AI/ML Developer
Location: Bangalore
Experience: 4+ Years
CTC: Up to 30 LPA
Industry: AI Product

Key Responsibilities:
- Design and deploy ML models focused on NLP and Computer Vision.
- Handle data labeling, preprocessing, and model validation.
- Assist in API development to integrate ML models into apps.
- Fine-tune and train models to improve performance.
- Collaborate with teams to deliver practical AI-driven solutions.
- Maintain documentation of model processes and outcomes.

Required Skills & Qualifications:
- 4+ years in AI/ML development with hands-on work in NLP or CV.
- Strong Python skills with libraries like TensorFlow, PyTorch, scikit-learn.
- Experience in data preprocessing and model deployment.
- Exposure to cloud platforms (AWS/GCP/Azure) is a plus.
- Familiarity with MLOps and API integration is desirable.
- Degree in Computer Science, Data Science, or related field.
- Knowledge of big data tools (Spark/Hadoop) or other languages (R/Java/C++) is an advantage.
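A minimal sketch of the NLP side of such a role: a TF-IDF plus logistic-regression text classifier. The sample texts and labels are made up purely for illustration; a production system would use a proper labelled dataset and, typically, a deep-learning model.

```python
# Minimal text-classification sketch; example texts and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["refund my order", "app crashes on login", "great customer support"]
labels = ["billing", "bug", "praise"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

# Likely "bug", given the overlap with the training text.
print(clf.predict(["the application keeps crashing"]))
```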

Posted 1 week ago

Apply

12.0 years

0 Lacs

India

On-site

Linkedin logo

Title: Solution Data Architect
Experience: 12 to 16 years
Location: Pan India (Pune/Mumbai/Hyderabad/Bangalore/Kolkata/Chennai/Noida)
Interview Mode: 1 virtual, 1 face-to-face
Mandatory skills: AWS (8 yrs), Databricks (6 to 8 yrs), data modelling (5 yrs), any NoSQL database (5 yrs), solutioning (4 yrs)

Job Summary:
We are looking for an experienced Data Architect with strong expertise in AWS, Databricks, Data Modeling, and NoSQL technologies (preferably MongoDB). The ideal candidate should have a solid background in data architecture, solution design, and data governance, with the ability to lead data projects end-to-end.

Key Responsibilities:
Design and develop scalable Data Warehouses and Data Lakes on cloud platforms, primarily AWS.
Perform data modeling for both structured and semi-structured data using tools such as Erwin and Hackolade.
Act as the Solution Architect for data initiatives, driving end-to-end architecture and design solutions.
Implement and maintain data governance practices, including data cataloguing, privacy, security, and access management.
Collaborate with Business Analysts and Project Managers to gather requirements, perform discovery, and define optimal data solutions.
Lead and support the development of data pipelines and ensure seamless integration with development teams.
Provide technical leadership and guidance to data engineers and developers.

Required Skills:
Strong hands-on experience with AWS cloud services.
Expertise in Databricks for building and managing big data solutions.
Proven experience in data modeling (both structured and semi-structured).
Experience with NoSQL databases, especially MongoDB (preferred).
Deep understanding of data governance frameworks and tools.
7+ years of experience as a Data Architect.
10+ years of total IT experience in data-centric roles.
Strong knowledge of Big Data technologies such as Hadoop.
Excellent communication, stakeholder management, and documentation skills.
Ability to work independently and take ownership of project delivery.
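One way the NoSQL modelling and governance duties above show up in practice is by enforcing an agreed document model at the database level. Below is a hedged sketch using MongoDB's JSON Schema validation; the connection string, database, collection and field names are all hypothetical.

```python
# Sketch of enforcing a lightweight document model in MongoDB.
# Connection string, database, collection and fields are hypothetical.
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017")  # assumed local instance
db = client["retail"]

# Declare a JSON Schema validator so writes must match the agreed model.
db.create_collection("customers", validator={
    "$jsonSchema": {
        "bsonType": "object",
        "required": ["customer_id", "email", "created_at"],
        "properties": {
            "customer_id": {"bsonType": "string"},
            "email": {"bsonType": "string"},
            "created_at": {"bsonType": "date"},
        },
    }
})

# Index the natural key used by downstream pipelines.
db["customers"].create_index([("customer_id", ASCENDING)], unique=True)
```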

Posted 1 week ago

Apply

12.0 years

0 Lacs

India

On-site

Linkedin logo

Job Description
We are seeking a highly experienced Senior Data Modeler with strong expertise in Data Vault modeling and data architecture. The ideal candidate will be responsible for analyzing complex business requirements and designing scalable and efficient data models that align with organizational goals.

Key Responsibilities:
Analyze and translate business requirements into long-term data solutions.
Design and implement conceptual, logical, and physical data models.
Develop and apply transformation rules to ensure accurate data mapping across systems.
Collaborate with development teams to define data flows and modeling strategies.
Establish best practices for data design, coding, and documentation.
Review and enhance existing data models for performance and compatibility.
Optimize local and metadata models to improve system efficiency.
Apply canonical modeling techniques to ensure data consistency.
Troubleshoot and fine-tune data models for optimal performance.
Conduct regular assessments of data systems for accuracy, variance, and performance.

Technical Skills Required:
Proven experience in Data Vault modeling (mandatory).
Strong knowledge of relational and dimensional data modeling (OLTP/OLAP).
Hands-on experience with modeling tools such as Erwin, ER/Studio, Hackolade, Visio, or Lucidchart.
Proficient in SQL and experienced with RDBMS such as Oracle, SQL Server, MySQL, and PostgreSQL.
Exposure to NoSQL databases like MongoDB and Cassandra.
Experience with data warehouses and BI tools such as Snowflake, Redshift, Databricks, Qlik, and Power BI.
Familiarity with ETL processes, data integration, and data governance frameworks.

Preferred Qualifications:
Minimum 12 years of experience in Data Modeling or Data Engineering.
At least 5 years of hands-on experience with relational and dimensional modeling.
Strong understanding of metadata management and related tools.
Knowledge of transactional databases, data warehousing, and real-time data processing.
Experience working with cloud platforms (AWS, Azure, or GCP) and big data technologies (Hadoop, Spark, Databricks).
Relevant certifications in Data Management, Data Modeling, or Cloud Data Engineering are a plus.
Excellent communication, presentation, and interpersonal skills.
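For readers less familiar with the Data Vault pattern this posting centers on, here is a minimal sketch of its three core structures (hub, satellite, link), created on an in-memory SQLite database purely for illustration. Table and column names are illustrative, not any client's standard.

```python
# Minimal Data Vault sketch: hub, satellite and link tables in SQLite.
# Names and types are illustrative only.
import sqlite3

ddl = """
CREATE TABLE hub_customer (
    customer_hk   TEXT PRIMARY KEY,   -- hash key
    customer_bk   TEXT NOT NULL,      -- business key
    load_dts      TEXT NOT NULL,
    record_source TEXT NOT NULL
);
CREATE TABLE sat_customer_details (
    customer_hk   TEXT NOT NULL REFERENCES hub_customer(customer_hk),
    load_dts      TEXT NOT NULL,
    full_name     TEXT,
    email         TEXT,
    record_source TEXT NOT NULL,
    PRIMARY KEY (customer_hk, load_dts)   -- history kept per load timestamp
);
CREATE TABLE link_customer_order (
    link_hk       TEXT PRIMARY KEY,
    customer_hk   TEXT NOT NULL REFERENCES hub_customer(customer_hk),
    order_hk      TEXT NOT NULL,
    load_dts      TEXT NOT NULL,
    record_source TEXT NOT NULL
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(ddl)
print(conn.execute("SELECT name FROM sqlite_master WHERE type='table'").fetchall())
```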

Posted 1 week ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Role – Deep Learning Engineer & Data Scientist
Location: PAN India

Job Description
Be a hands-on problem solver with a consultative approach, who can apply Machine Learning and Deep Learning algorithms to solve business challenges.
Use knowledge of a wide variety of AI/ML techniques and algorithms to find which combinations of these techniques can best solve the problem.
Improve model accuracy to deliver greater business impact, and estimate the business impact due to deployment of the model.
Work with the domain/customer teams to understand business context and data dictionaries, and apply the relevant Deep Learning solution for the given business challenge.
Work with tools and scripts for sufficiently pre-processing the data and feature engineering for model development – Python / R / SQL / cloud data pipelines.
Design, develop and deploy Deep Learning models using TensorFlow / PyTorch.
Experience in using Deep Learning models with text, speech, image and video data.
Design and develop NLP models for text classification, custom entity recognition, relationship extraction, text summarization, topic modeling, reasoning over knowledge graphs, and semantic search using NLP tools like spaCy and open-source frameworks such as TensorFlow and PyTorch.
Design and develop image recognition and video analysis models using Deep Learning algorithms and open-source tools like OpenCV.
Knowledge of state-of-the-art Deep Learning algorithms.
Optimize and tune Deep Learning models for the best possible accuracy.
Use visualization tools/modules to explore and analyze outcomes and for model validation, e.g. using Power BI / Tableau.
Work with application teams in deploying models on the cloud as a service or on-prem.
Deploy models in a test/control framework for tracking.
Build CI/CD pipelines for ML model deployment.
Integrate AI/ML models with other applications using REST APIs and other connector technologies.
Constantly upskill and stay updated with the latest techniques and best practices. Write white papers and create demonstrable assets to summarize the AI/ML work and its impact.

Technology/Subject Matter Expertise
Sufficient expertise in machine learning, mathematical and statistical sciences.
Use of versioning and collaboration tools like Git / GitHub.
Good understanding of the landscape of AI solutions – cloud, GPU-based compute, data security and privacy, API gateways, microservices-based architecture, big data ingestion, storage and processing, CUDA programming.
Develop prototype-level ideas into a solution that can scale to industrial-grade strength.
Ability to quantify and estimate the impact of ML models.

Soft Skills Profile
Curiosity to think in fresh and unique ways with the intent of breaking new ground.
Must have the ability to share, explain and "sell" their thoughts, processes, ideas and opinions, even outside their own span of control.
Ability to think ahead and anticipate the needs for solving the problem.
Ability to communicate key messages effectively, and articulate strong opinions in large forums.

Desirable Experience
Keen contributor to open-source communities, and communities like Kaggle.
Ability to process huge amounts of data using PySpark/Hadoop.
Development and application of Reinforcement Learning.
Knowledge of optimization/genetic algorithms.
Operationalizing Deep Learning models for a customer and understanding the nuances of scaling such models in real scenarios.
Understanding of stream data processing, RPA, edge computing, AR/VR, etc.
Appreciation of digital ethics and data privacy.
Experience of working with AI and cognitive services platforms like Azure ML, IBM Watson, AWS SageMaker and Google Cloud is a big plus.
Experience with platforms like DataRobot, CognitiveScale and H2O.ai is a big plus.
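As a tiny, hedged illustration of the text-classification work mentioned above, here is a bag-of-words classifier trained with PyTorch on made-up sentences. The vocabulary, labels and examples are invented; a real project would use embeddings or a pretrained transformer rather than this toy setup.

```python
# Tiny PyTorch text-classification sketch; vocabulary and examples are made up.
import torch
import torch.nn as nn

vocab = {"refund": 0, "crash": 1, "login": 2, "great": 3, "support": 4}
labels = {"billing": 0, "bug": 1, "praise": 2}

def encode(text):
    """Bag-of-words vector over the toy vocabulary."""
    vec = torch.zeros(len(vocab))
    for tok in text.lower().split():
        if tok in vocab:
            vec[vocab[tok]] += 1.0
    return vec

X = torch.stack([encode("refund please"),
                 encode("crash on login"),
                 encode("great support")])
y = torch.tensor([labels["billing"], labels["bug"], labels["praise"]])

model = nn.Linear(len(vocab), len(labels))
opt = torch.optim.SGD(model.parameters(), lr=0.5)
loss_fn = nn.CrossEntropyLoss()

for _ in range(200):          # a few hundred steps on the toy data
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

# Likely predicts class 1 ("bug") given the word overlap.
print(model(encode("app keeps crashing on login")).argmax().item())
```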

Posted 1 week ago

Apply

12.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

Job Description

What We Do
At Goldman Sachs, our Engineers don't just make things – we make things possible. Change the world by connecting people and capital with ideas. Solve the most challenging and pressing engineering problems for our clients. Join our engineering teams that build massively scalable software and systems, architect low latency infrastructure solutions, proactively guard against cyber threats, and leverage machine learning alongside financial engineering to continuously turn data into action. Create new businesses, transform finance, and explore a world of opportunity at the speed of markets. Engineering, which is comprised of our Technology Division and global strategists groups, is at the critical center of our business, and our dynamic environment requires innovative strategic thinking and immediate, real solutions. Want to push the limit of digital possibilities? Start here.

Who We Look For
Goldman Sachs Engineers are innovators and problem-solvers, building solutions in risk management, big data, mobile and more. We look for creative collaborators who evolve, adapt to change and thrive in a fast-paced global environment.

Roles And Responsibilities
An individual in this role is responsible for the design, development, deployment and support of products and platforms that leverage Java-based technologies and enable large-scale event processing in engineering products at GS. The individual will engage in both server-side and front-end development as required to achieve the desired outcomes. Specific responsibilities include:
Design component as well as integration architecture for large-scale web applications.
Develop, test and support features for globally deployed web apps.
Follow best practices throughout the project lifecycle.
Participate in team-wide design and code reviews.
Keep abreast of emerging technical trends, so applicability to GS products can be determined.

Qualification
Bachelor's Degree (or equivalent or higher) in Computer Science, Information Technology, or Electronics and Communication. Overall, 7–12 years of experience with a minimum of 5 years in developing Java-based applications.

Essential Skills (Technical)
Strong programming skills in Java and Python with proficiency in object-oriented design principles.
Experience with Java frameworks such as DropWizard, Spring and Hibernate.
Familiarity with web development frameworks (Angular or React).
Experience with testing frameworks (JUnit, TestNG, Cucumber, Mockito).
Hands-on experience with building stream-processing systems using Hadoop, Spark and related technologies.
Familiarity with distributed storage systems like Cassandra, MongoDB and JanusGraph.
Experience with messaging systems such as Kafka or RabbitMQ.
Experience with caching solutions like Hazelcast, Redis or Memcached.
Knowledge of build tools like Maven or Gradle.
Familiarity with continuous integration and continuous deployment (CI/CD) pipelines, especially using Git.
Working knowledge of Unix/Linux.
Strong problem-solving skills and attention to detail.

Soft Skills
Strong communication skills with a track record of working and collaborating with global teams.
Ability to handle multiple ongoing assignments and to work independently, in addition to contributing as part of a highly collaborative and globally dispersed team.
Strong analytical skills with the ability to break down and communicate complex issues, ideas and solutions.
Thorough knowledge and experience in all phases of the SDLC.

Additional Skills (Advantage)
Working knowledge of enterprise database systems (Sybase or DB2).
Programming in Perl, Python and shell script.
Knowledge and experience in building conversational user interfaces enabled by AI.

About Goldman Sachs
At Goldman Sachs, we commit our people, capital and ideas to help our clients, shareholders and the communities we serve to grow. Founded in 1869, we are a leading global investment banking, securities and investment management firm. Headquartered in New York, we maintain offices around the world. We believe who you are makes you better at what you do. We're committed to fostering and advancing diversity and inclusion in our own workplace and beyond by ensuring every individual within our firm has a number of opportunities to grow professionally and personally, from our training and development opportunities and firmwide networks to benefits, wellness and personal finance offerings and mindfulness programs. Learn more about our culture, benefits, and people at GS.com/careers. We're committed to finding reasonable accommodations for candidates with special needs or disabilities during our recruiting process. Learn more: https://www.goldmansachs.com/careers/footer/disability-statement.html

Please note that our firm has adopted a COVID-19 vaccination requirement for employees who work onsite at any of our U.S. locations to safeguard the health and well-being of all our employees and others who enter our U.S. offices. This role requires the employee to be able to work on-site. As a condition of employment, employees working on-site at any of our U.S. locations are required to be fully vaccinated for COVID-19, and to have either had COVID-19 or received a booster dose if eligible under Centers for Disease Control and Prevention (CDC) guidance, unless prohibited by applicable federal, state, or local law. Applicants who wish to request a medical or religious accommodation, or any other accommodation required under applicable law, can do so later in the process. Please note that accommodations are not guaranteed and are decided on a case-by-case basis.

© The Goldman Sachs Group, Inc., 2025. All rights reserved. Goldman Sachs is an equal employment/affirmative action employer Female/Minority/Disability/Veteran/Sexual Orientation/Gender Identity.
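The posting is Java-centric, but the stream-processing pattern it mentions (Kafka feeding Spark) can be sketched briefly in PySpark Structured Streaming. The broker address and topic are hypothetical, and the Kafka source additionally requires the spark-sql-kafka connector on the classpath.

```python
# Hedged sketch: running counts per key from a Kafka topic via Structured Streaming.
# Broker and topic are hypothetical; needs the spark-sql-kafka connector package.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("event-counts").getOrCreate()

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")
          .option("subscribe", "trade-events")
          .load())

counts = (events
          .select(F.col("key").cast("string").alias("instrument"))
          .groupBy("instrument")
          .count())

query = (counts.writeStream
         .outputMode("complete")   # keep the full running aggregate
         .format("console")
         .start())
query.awaitTermination()
```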

Posted 1 week ago

Apply

10.0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site

Linkedin logo

About Us
All people need connectivity. The Rakuten Group is reinventing telecom by greatly reducing cost, rewarding big users instead of penalizing them, empowering more people and leading the human-centric AI future. The mission is to connect everybody and enable all to be. Rakuten. Telecom Invented.

Job Description
Why should you choose us? Rakuten Symphony is reimagining telecom, changing supply chain norms and disrupting outmoded thinking that threatens the industry's pursuit of rapid innovation and growth. Based on proven modern infrastructure practices, its open interface platforms make it possible to launch and operate advanced mobile services in a fraction of the time and cost of conventional approaches, with no compromise to network quality or security. Rakuten Symphony has operations in Japan, the United States, Singapore, India, South Korea, Europe, and the Middle East and Africa region. For more information, visit: https://symphony.rakuten.com
Building on the technology Rakuten used to launch Japan's newest mobile network, we are taking our mobile offering global. To support our ambitions to provide an innovative cloud-native telco platform for our customers, Rakuten Symphony is looking to recruit and develop top talent from around the globe. We are looking for individuals to join our team across all functional areas of our business – from sales to engineering, support functions to product development. Let's build the future of mobile telecommunications together!

About Rakuten
Rakuten Group, Inc. (TSE: 4755) is a global leader in internet services that empower individuals, communities, businesses and society. Founded in Tokyo in 1997 as an online marketplace, Rakuten has expanded to offer services in e-commerce, fintech, digital content and communications to approximately 1.5 billion members around the world. The Rakuten Group has over 27,000 employees, and operations in 30 countries and regions. For more information visit https://global.rakuten.com/corp/.

Job Summary
The AI Architect is a senior technical leader responsible for designing and implementing the overall AI infrastructure and architecture for the organization. This role will define the technical vision for AI initiatives, select appropriate technologies and platforms, and ensure that AI systems are scalable, reliable, secure, and aligned with business requirements. The AI Architect will work closely with the CTO Office, product managers, engineering managers, data scientists, machine learning engineers, and other stakeholders to build a robust and efficient AI ecosystem.

Mandatory Skills
Cloud computing platforms (AWS, Azure, GCP)
AI/ML frameworks (TensorFlow, PyTorch, scikit-learn)
Data engineering tools (Spark, Hadoop, Kafka)
Microservices architecture
AI/ML-as-a-service deployment
DevOps principles (CI/CD/CT)
Strong understanding of AI/ML algorithms and techniques
Excellent communication and leadership skills

Roles & Responsibilities
Define the overall AI architecture and infrastructure strategy for the organization.
Select appropriate technologies and platforms for AI development and deployment.
Design scalable, reliable, and secure AI systems.
Develop and maintain architectural blueprints and documentation.
Provide technical leadership and guidance to tech leads, engineering managers, data scientists, machine learning engineers, and other stakeholders.
Ensure that AI systems are aligned with business requirements and industry best practices.
Evaluate new AI technologies and trends.
Collaborate with security and compliance teams to ensure that AI systems meet regulatory requirements.
Collaborate with the CTO Office to ensure the AI strategy implemented is aligned with the overall business unit strategy.

Job Requirements
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
10+ years of experience in software architecture, with a focus on AI/ML.
Experience designing and implementing large-scale AI systems.
Strong understanding of cloud computing, data engineering, and DevOps principles.
Excellent communication, leadership, and problem-solving skills.
Experience with Agile development methodologies.
Relevant certifications (e.g., AWS Certified Solutions Architect, Google Cloud Certified Professional Cloud Architect) are a plus.
Experience with agentic AI implementation is a plus.
Hands-on experience with RAG and agentic frameworks is a plus.

Rakuten Shugi Principles of Success
Our worldwide practices describe specific behaviours that make Rakuten unique and united across the world. We expect Rakuten employees to model these 5 Shugi Principles of Success.
Always improve, always advance. Only be satisfied with complete success – Kaizen.
Be passionately professional. Take an uncompromising approach to your work and be determined to be the best.
Hypothesize – Practice – Validate – Shikumika. Use the Rakuten Cycle to succeed in unknown territory.
Maximize customer satisfaction. The greatest satisfaction for workers in a service industry is to see their customers smile.
Speed!! Speed!! Speed!! Always be conscious of time. Take charge, set clear goals, and engage your team.
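The "AI/ML as a service" skill listed above usually means exposing trained models behind a service boundary. Below is a minimal, hedged sketch of such a scoring service using FastAPI; the model artifact (model.joblib), feature shape and endpoint name are assumptions for illustration, not Rakuten's actual stack.

```python
# Minimal "ML as a service" sketch: a REST endpoint wrapping a trained model.
# The model file, request shape and service name are hypothetical.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="scoring-service")
model = joblib.load("model.joblib")   # assumed pre-trained scikit-learn artifact

class ScoringRequest(BaseModel):
    features: list[float]

@app.post("/score")
def score(req: ScoringRequest):
    prediction = model.predict([req.features])[0]
    return {"prediction": float(prediction)}

# Run with: uvicorn scoring_service:app --host 0.0.0.0 --port 8000
```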

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

Job Description
The candidate must possess knowledge relevant to the functional area, act as a subject matter expert in providing advice in the area of expertise, and focus on continuous improvement for maximum efficiency. It is vital to focus on a high standard of delivery excellence, provide top-notch service quality and develop successful long-term business partnerships with internal/external customers by identifying and fulfilling customer needs. He/she should be able to break down complex problems into logical and manageable parts in a systematic way, generate and compare multiple options, and set priorities to resolve problems. The ideal candidate must be proactive, and go beyond expectations to achieve job results and create new opportunities. He/she must positively influence the team, motivate high performance, promote a friendly climate, give constructive feedback, provide development opportunities, and manage the career aspirations of direct reports. Communication skills are key here, to explain organizational objectives, assignments, and the big picture to the team, and to articulate team vision and clear objectives.

Process Manager Roles And Responsibilities
Designing and implementing scalable, reliable, and maintainable data architectures on AWS.
Developing data pipelines to extract, transform, and load (ETL) data from various sources into AWS environments.
Creating and optimizing data models and schemas for performance and scalability using AWS services like Redshift, Glue, Athena, etc.
Integrating AWS data solutions with existing systems and third-party services.
Monitoring and optimizing the performance of AWS data solutions, ensuring efficient query execution and data retrieval.
Implementing data security and encryption best practices in AWS environments.
Documenting data engineering processes, maintaining data pipeline infrastructure, and providing support as needed.
Working closely with cross-functional teams including data scientists, analysts, and stakeholders to understand data requirements and deliver solutions.

Technical And Functional Skills
Typically, a bachelor's degree in Computer Science, Engineering, or a related field is required, along with 5+ years of experience in data engineering and AWS cloud environments.
Strong experience with AWS data services such as S3, EC2, Redshift, Glue, Athena, EMR, etc.
Proficiency in programming languages commonly used in data engineering such as Python, SQL, Scala, or Java.
Experience in designing, implementing, and optimizing data warehouse solutions on Snowflake/Amazon Redshift.
Familiarity with ETL tools and frameworks (e.g., Apache Airflow, AWS Glue) for building and managing data pipelines.
Knowledge of database management systems (e.g., PostgreSQL, MySQL, Amazon Redshift) and data lake concepts.
Understanding of big data technologies such as Hadoop, Spark, Kafka, etc., and their integration with AWS.
Proficiency in version control tools like Git for managing code and infrastructure as code (e.g., CloudFormation, Terraform).
Ability to analyze complex technical problems and propose effective solutions.
Strong verbal and written communication skills for documenting processes and collaborating with team members and stakeholders.
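As a hedged sketch of the Athena-on-S3 querying this role describes, here is a boto3 snippet that submits a query, polls for completion, and reads the first page of results. The region, database, table and output bucket are hypothetical placeholders.

```python
# Hedged sketch of querying curated S3 data through Athena with boto3.
# Region, database, table and output bucket are hypothetical.
import time
import boto3

athena = boto3.client("athena", region_name="ap-south-1")

run = athena.start_query_execution(
    QueryString="SELECT event_date, COUNT(*) AS events "
                "FROM curated.events GROUP BY event_date",
    QueryExecutionContext={"Database": "curated"},
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
)

# Poll until the query reaches a terminal state.
while True:
    state = athena.get_query_execution(QueryExecutionId=run["QueryExecutionId"])
    status = state["QueryExecution"]["Status"]["State"]
    if status in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if status == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=run["QueryExecutionId"])
    print(rows["ResultSet"]["Rows"][:5])
```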

Posted 1 week ago

Apply

Exploring Hadoop Jobs in India

The demand for Hadoop professionals in India has been on the rise in recent years, with many companies leveraging big data technologies to drive business decisions. As a job seeker exploring opportunities in the Hadoop field, it is important to understand the job market, salary expectations, career progression, related skills, and common interview questions.

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Pune
  4. Hyderabad
  5. Chennai

These cities are known for their thriving IT industry and have a high demand for Hadoop professionals.

Average Salary Range

The average salary range for Hadoop professionals in India varies based on experience levels. Entry-level Hadoop developers can expect to earn between INR 4-6 lakhs per annum, while experienced professionals with specialized skills can earn upwards of INR 15 lakhs per annum.

Career Path

In the Hadoop field, a typical career path may include roles such as Junior Developer, Senior Developer, Tech Lead, and eventually progressing to roles like Data Architect or Big Data Engineer.

Related Skills

In addition to Hadoop expertise, professionals in this field are often expected to have knowledge of related technologies such as Apache Spark, HBase, Hive, and Pig. Strong programming skills in languages like Java, Python, or Scala are also beneficial.

Interview Questions

  • What is Hadoop and how does it work? (basic)
  • Explain the difference between HDFS and MapReduce. (medium)
  • How do you handle data skew in Hadoop? (medium)
  • What is YARN in Hadoop? (basic)
  • Describe the concept of NameNode and DataNode in HDFS. (medium)
  • What are the different types of join operations in Hive? (medium)
  • Explain the role of the ResourceManager in YARN. (medium)
  • What is the significance of the shuffle phase in MapReduce? (medium)
  • How does speculative execution work in Hadoop? (advanced)
  • What is the purpose of the Secondary NameNode in HDFS? (medium)
  • How do you optimize a MapReduce job in Hadoop? (medium)
  • Explain the concept of data locality in Hadoop. (basic)
  • What are the differences between Hadoop 1 and Hadoop 2? (medium)
  • How do you troubleshoot performance issues in a Hadoop cluster? (advanced)
  • Describe the advantages of using HBase over traditional RDBMS. (medium)
  • What is the role of the JobTracker in Hadoop? (medium)
  • How do you handle unstructured data in Hadoop? (medium)
  • Explain the concept of partitioning in Hive. (medium)
  • What is Apache ZooKeeper and how is it used in Hadoop? (advanced)
  • Describe the process of data serialization and deserialization in Hadoop. (medium)
  • How do you secure a Hadoop cluster? (advanced)
  • What is the CAP theorem and how does it relate to distributed systems like Hadoop? (advanced)
  • How do you monitor the health of a Hadoop cluster? (medium)
  • Explain the differences between Hadoop and traditional relational databases. (medium)
  • How do you handle data ingestion in Hadoop? (medium)
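Several of the questions above (the shuffle phase, data locality, per-key reduction) are easiest to see in a classic word-count job, sketched here in PySpark: the map side emits (word, 1) pairs from locally read splits, the shuffle groups pairs by key, and reduceByKey sums each group. The input path is a placeholder.

```python
# Word-count sketch illustrating map, shuffle and reduce; input path is hypothetical.
from pyspark import SparkContext

sc = SparkContext(appName="word-count")

counts = (sc.textFile("hdfs:///data/sample.txt")      # map side: splits read with data locality
            .flatMap(lambda line: line.split())       # emit individual words
            .map(lambda word: (word, 1))               # (word, 1) key-value pairs
            .reduceByKey(lambda a, b: a + b))          # shuffle groups by key, then sums

print(counts.take(10))
sc.stop()
```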

Closing Remark

As you navigate the Hadoop job market in India, remember to stay updated on the latest trends and technologies in the field. By honing your skills and preparing diligently for interviews, you can position yourself as a strong candidate for lucrative opportunities in the big data industry. Good luck on your job search!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies