
339 MapReduce Jobs - Page 10

Set up a Job Alert
JobPe aggregates listings for easy application access, but you apply directly on the originating job portal.

5 - 7 years

0 - 0 Lacs

Hyderabad

Work from Office

Source: Naukri

Senior Big Data Engineer. Experience: 7-9 years. Preferred location: Hyderabad. Must-have skills: Big Data, AWS cloud, Java/Scala/Python, CI/CD. Good-to-have skills: relational databases (any), NoSQL databases (any), microservices or domain services or API gateways or similar, containers (Docker, K8s, etc.). Required skills: Big Data, AWS Cloud, CI/CD, Java/Scala/Python.

Posted 1 month ago

Apply

2 - 5 years

6 - 10 Lacs

Gurugram

Work from Office

Source: Naukri

KDataScience (USA & INDIA) is looking for a Data Engineer to join our dynamic team and embark on a rewarding career journey. Responsibilities: liaising with coworkers and clients to elucidate the requirements for each task; conceptualizing and generating infrastructure that allows big data to be accessed and analyzed; reformulating existing frameworks to optimize their functioning; testing such structures to ensure that they are fit for use; preparing raw data for manipulation by data scientists; detecting and correcting errors in your work; ensuring that your work remains backed up and readily accessible to relevant coworkers; remaining up to date with industry standards and technological advancements that will improve the quality of your outputs.

Posted 1 month ago

Apply

3 - 7 years

10 - 14 Lacs

Pune

Work from Office

Source: Naukri

The Developer leads cloud application development and deployment. The developer's responsibility is to lead the execution of a project by working with a senior-level resource on assigned development/deployment activities, and to design, build, and maintain cloud environments focusing on uptime, access, control, and network security using automation and configuration management tools.

Required education: Bachelor's degree. Preferred education: Master's degree.

Required technical and professional expertise: strong proficiency in Java, Spring Framework, Spring Boot, and RESTful APIs; excellent understanding of OOP and design patterns; strong knowledge of ORM tools like Hibernate or JPA and Java-based microservices frameworks; hands-on experience with Spring Boot microservices; strong knowledge of microservice logging, monitoring, debugging, and testing; in-depth knowledge of relational databases (e.g., MySQL); experience with container platforms such as Docker and Kubernetes; experience with messaging platforms such as Kafka or IBM MQ; good understanding of test-driven development; familiarity with Ant, Maven, or other build automation frameworks; good knowledge of basic UNIX commands.

Preferred technical and professional experience: experience in concurrent design and multi-threading.

Primary skills: Core Java, Spring Boot, Java2/EE, microservices; Hadoop ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.); Spark. Good to have: Python.

Posted 1 month ago

Apply

7 - 12 years

10 - 14 Lacs

Bengaluru

Work from Office

Source: Naukri

Project Role: Application Lead. Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact. Must-have skills: PySpark. Good-to-have skills: NA. Minimum 7.5 years of experience is required. Educational qualification: 15 years of full-time education.

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your day will involve overseeing the application development process, collaborating with team members, and making key decisions to ensure project success.

Roles & Responsibilities: expected to be an SME; collaborate with and manage the team; responsible for team decisions; engage with multiple teams and contribute to key decisions; provide solutions to problems for the immediate team and across multiple teams; lead the application development process effectively; ensure timely delivery of projects; provide guidance and mentorship to team members.

Professional & Technical Skills: must have proficiency in PySpark; strong understanding of big data processing; experience with data manipulation and transformation; hands-on experience in building scalable applications; knowledge of cloud platforms and services.

Additional Information: the candidate should have a minimum of 7.5 years of experience in PySpark. This position is based at our Bengaluru office. 15 years of full-time education is required.
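Several listings on this page, including the Application Lead role above, center on PySpark. As a purely illustrative sketch (not taken from any listing; the file path, column names, and data layout are invented), this is the shape of the DataFrame work such roles describe:

```python
# Minimal PySpark aggregation sketch. Illustrative only: the input path and
# columns (user_id, country, amount) are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("pyspark-sketch").getOrCreate()

# Read raw transactions; header and schema inference keep the example short.
df = spark.read.csv("transactions.csv", header=True, inferSchema=True)

# A typical transform: filter bad rows, then aggregate per country.
summary = (
    df.filter(F.col("amount") > 0)
      .groupBy("country")
      .agg(
          F.count("*").alias("txn_count"),
          F.sum("amount").alias("total_amount"),
      )
)

summary.show()
spark.stop()
```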

Posted 1 month ago

Apply

0 years

0 - 0 Lacs

Chennai, Tamil Nadu

Work from Office

Source: Indeed

Big-Data Administrator: a) full-time B.Tech/B.E/MCA/MS(IT) from a recognized institution/university; b) experience: 3+ years; c) preferably relevant Cloudera certifications. Experience managing and administering Hadoop (Cloudera) systems: cluster management, system administration, security and data management, troubleshooting, monitoring, and support. Experience in node-based data processing and dealing with large amounts of data. Experience with Hadoop, Hive, MapReduce, HBase, Java, etc. Job Types: Full-time, Permanent. Pay: ₹10,552.72 - ₹69,025.33 per month. Schedule: Day shift / Morning shift. Work Location: In person. Application Deadline: 29/05/2025.

Posted 1 month ago

Apply

5 - 8 years

0 Lacs

Chennai, Tamil Nadu, India

Hybrid

Source: LinkedIn

Introduction: A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including Software and Red Hat. Curiosity and a constant quest for knowledge serve as the foundation of success in IBM Consulting. In your role, you'll be encouraged to challenge the norm, investigate ideas outside of your role, and come up with creative solutions resulting in groundbreaking impact for a wide network of clients. Our culture of evolution and empathy centers on long-term career growth and development opportunities in an environment that embraces your unique skills and experience.

Your Role and Responsibilities: As an Associate Software Developer at IBM, you'll work with clients to co-create solutions to major real-world challenges by using best-practice technologies, tools, techniques, and products to translate system requirements into the design and development of customized systems.

Preferred education: Master's degree. Required technical and professional expertise: Core Java, Spring Boot, Java2/EE, microservices; Hadoop ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.); Spark. Good to have: Python. Preferred technical and professional experience: none.

Posted 1 month ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Company Overview: Viraaj HR Solutions is dedicated to delivering top-tier HR services and talent acquisition strategies to help companies throughout India thrive. Our mission is to connect skilled professionals with excellent opportunities, fostering a culture of collaboration, integrity, and innovation. We pride ourselves on understanding the unique needs of both our clients and candidates, ensuring a perfect fit for every role. At Viraaj HR Solutions, we prioritize our people's growth and development, making us a dynamic and rewarding workplace.

Role Responsibilities: design and implement scalable Big Data solutions using Hadoop technologies; develop and maintain ETL processes to manage and process large data sets; collaborate with data architects and analysts to gather requirements and deliver solutions; optimize existing Hadoop applications to maximize efficiency and performance; write, test, and maintain complex SQL queries to extract and manipulate data; implement data models and strategies that accommodate large-scale data processing; conduct data profiling and analysis to ensure data integrity and accuracy; utilize MapReduce frameworks to execute data processing tasks; work closely with data scientists to facilitate exploratory data analysis; ensure compliance with data governance and privacy regulations; participate in code reviews and maintain documentation for all development processes; troubleshoot and resolve performance bottlenecks and other technical issues; stay current with technology trends and tools in Big Data and cloud platforms; train junior developers and assist in their professional development; contribute to team meeting discussions regarding project status and ideas for improvement.

Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related field; proven experience as a Big Data Developer or in a similar role; strong foundational knowledge of the Hadoop ecosystem (HDFS, Hive, Pig, etc.); proficiency in programming languages such as Java, Python, or Scala; experience with database management systems (SQL and NoSQL); familiarity with data processing frameworks like Apache Spark; understanding of data pipeline architectures and data integration techniques; knowledge of cloud computing services (AWS, Azure, or Google Cloud); exceptional problem-solving skills and attention to detail; strong communication skills and the ability to work in a team environment; experience with data visualization tools (Tableau, Power BI, etc.) is a plus; ability to work under pressure and meet tight deadlines; adaptability to new technologies and platforms as they emerge; certifications in Big Data technologies would be an advantage; willingness to learn and grow in the field of data science.

Skills: data visualization, Spark, Java, cloud computing (AWS, Azure, Google Cloud), SQL proficiency, Apache Spark, data warehousing, data visualization tools, NoSQL, Scala, Python, ETL, SQL, Hadoop, MapReduce, big data analytics, big data
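Since this listing (and the page as a whole) leans on MapReduce, here is a minimal sketch of the programming model itself, written as plain Python mapper/reducer functions in the style of Hadoop Streaming. It is illustrative only; the sample input is invented:

```python
# Word count expressed in the MapReduce model. Illustrative sketch only.
from itertools import groupby

def mapper(lines):
    # Map phase: emit a (word, 1) pair for every word in the input.
    for line in lines:
        for word in line.strip().split():
            yield word, 1

def reducer(pairs):
    # Shuffle/reduce phase: group pairs by key, then sum counts per word.
    for word, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        yield word, sum(count for _, count in group)

if __name__ == "__main__":
    sample = ["big data big pipelines", "data pipelines at scale"]
    for word, count in reducer(mapper(sample)):
        print(word, count)
```

In a real Hadoop job the framework performs the sort/shuffle between the two phases and runs many mappers and reducers in parallel; the sketch only shows the contract each phase must satisfy.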

Posted 1 month ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

Remote

Source: LinkedIn

Description: GPP Database Link (https://cummins365.sharepoint.com/sites/CS38534/)

Job Summary: Leads projects for the design, development, and maintenance of a data and analytics platform. Effectively and efficiently processes, stores, and makes data available to analysts and other consumers. Works with key business stakeholders, IT experts, and subject-matter experts to plan, design, and deliver optimal analytics and data science solutions. Works on one or many product teams at a time. Though the role category is generally listed as Remote, this specific position is designated as Hybrid.

Key Responsibilities:
- Business Alignment & Collaboration – Partner with the Product Owner to align data solutions with strategic goals and business requirements.
- Data Pipeline Development & Management – Design, develop, test, and deploy scalable data pipelines for efficient data transport into Cummins Digital Core (Azure Data Lake, Snowflake) from various sources (ERP, CRM, relational, event-based, unstructured).
- Architecture & Standardization – Ensure compliance with AAI Digital Core and AAI Solutions Architecture standards for data pipeline design and implementation.
- Automation & Optimization – Design and automate distributed data ingestion and transformation systems, integrating ETL/ELT tools and scripting languages to ensure scalability, efficiency, and quality.
- Data Quality & Governance – Implement data governance processes, including metadata management, access control, and retention policies, while continuously monitoring and troubleshooting data integrity issues.
- Performance & Storage Optimization – Develop and implement physical data models, optimize database performance (indexing, table relationships), and operate large-scale distributed/cloud-based storage solutions (data lakes, Hadoop, HBase, Cassandra, MongoDB, Accumulo, DynamoDB).
- Innovation & Tool Evaluation – Conduct proof-of-concept (POC) initiatives, evaluate new data tools, and provide recommendations for improvements in data management and integration.
- Documentation & Best Practices – Maintain standard operating procedures (SOPs) and data engineering documentation to support consistency and efficiency.
- Agile Development & Automation – Use Agile methodologies (DevOps, Scrum, Kanban) to drive automation in data integration, preparation, and infrastructure management, reducing manual effort and errors.
- Coaching & Team Development – Provide guidance and mentorship to junior team members, fostering skill development and knowledge sharing.

Competencies:
- System Requirements Engineering: translates stakeholder needs into verifiable requirements, tracks status, and assesses the impact of changes.
- Collaborates: builds partnerships and works collaboratively with others to meet shared objectives.
- Communicates Effectively: delivers multi-mode communications tailored to different audiences.
- Customer Focus: builds strong customer relationships and provides customer-centric solutions.
- Decision Quality: makes good and timely decisions that drive the organization forward.
- Data Extraction: performs ETL activities from various sources using appropriate tools and technologies.
- Programming: develops, tests, and maintains code using industry standards, version control, and automation tools.
- Quality Assurance Metrics: measures and assesses solution effectiveness using IT Operating Model (ITOM) standards.
- Solution Documentation: documents knowledge gained and communicates solutions for improved productivity.
- Solution Validation Testing: validates configurations and solutions to meet customer requirements using SDLC best practices.
- Data Quality: identifies, corrects, and manages data flaws to support effective governance and decision-making.
- Problem Solving: uses systematic analysis to determine root causes and implement robust solutions.
- Values Differences: recognizes and leverages the value of diverse perspectives and cultures.

Education, Licenses, Certifications: Bachelor's degree in a relevant technical discipline, or equivalent experience, is required. This position may require licensing for compliance with export controls or sanctions regulations.

Qualifications – Preferred Experience:
- Technical Expertise – Intermediate experience in data engineering with hands-on knowledge of Spark, Scala/Java, MapReduce, Hive, HBase, Kafka, and SQL.
- Big Data & Cloud Solutions – Proven ability to design and develop Big Data platforms, manage large datasets, and implement clustered compute solutions in cloud environments.
- Data Processing & Movement – Experience developing applications requiring large-scale file movement and utilizing various data extraction tools in cloud-based environments.
- Business & Industry Knowledge – Familiarity with analyzing complex business systems, industry requirements, and data regulations to ensure compliance and efficiency.
- Analytical & IoT Solutions – Experience building analytical solutions with exposure to IoT technology and its integration into data engineering processes.
- Agile Development – Strong understanding of Agile methodologies, including Scrum and Kanban, for iterative development and deployment.
- Technology Trends – Awareness of emerging technologies and trends in data engineering, with a proactive approach to innovation and continuous learning.

Technical Skills:
- Programming Languages: proficiency in Python, Java, and/or Scala.
- Database Management: expertise in SQL and NoSQL databases.
- Big Data Technologies: hands-on experience with Hadoop, Spark, Kafka, and similar frameworks.
- Cloud Services: experience with Azure, Databricks, and AWS platforms.
- ETL Processes: strong understanding of Extract, Transform, Load (ETL) processes.
- Data Replication: working knowledge of replication technologies like Qlik Replicate is a plus.
- API Integration: experience working with APIs to consume data from ERP and CRM systems.

Job: Systems/Information Technology. Organization: Cummins Inc. Role Category: Remote. Job Type: Exempt - Experienced. ReqID: 2410681. Relocation Package: No.

Posted 1 month ago

Apply

2 - 5 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

TCS is Hiring!!! Location: Bangalore, Chennai, Kolkata, Hyderabad, Pune. Experience: 4-8 years.

Functional Skills: experience in the Credit Risk/Regulatory Risk domain. Technical Skills: Spark, PySpark, Python, Hive, Scala, MapReduce, Unix shell scripting. Good-to-have skills: exposure to machine learning techniques.

Job Description: 4+ years of experience developing, fine-tuning, and implementing programs/applications using Python/PySpark/Scala on a Big Data/Hadoop platform.

Roles and Responsibilities: a) work with a leading bank's Risk Management team on specific projects/requirements pertaining to risk models in consumer and wholesale banking; b) enhance machine learning models using PySpark; c) work with data scientists to build ML models based on business requirements and follow the ML cycle to deploy them all the way to the production environment; d) participate in feature engineering, model training, scoring, and retraining; e) architect data pipelines and automate data ingestion and model jobs.

Skills and competencies. Required:
- Strong analytical skills in conducting sophisticated statistical analysis using bureau/vendor data, customer performance data, and macro-economic data to solve business problems.
- Working experience in PySpark and Scala to develop code to validate and implement models in credit risk/banking.
- Experience with distributed systems such as Hadoop/MapReduce, Spark, streaming data processing, and cloud architecture.
- Familiarity with machine learning frameworks and libraries (scikit-learn, SparkML, TensorFlow, PyTorch, etc.).
- Experience in systems integration, web services, and batch processing.
- Experience migrating code to PySpark/Scala is a big plus.
- The ability to act as a liaison, conveying information needs of the business to IT and data constraints to the business, with equal command of business strategy and IT strategy, business processes, and workflow.
- Flexibility in approach and thought process.
- An attitude of learning and comprehending the periodic changes in regulatory requirements per the FED.
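The role above combines PySpark with risk-model work. As a hedged illustration of what building ML models with PySpark means mechanically (the feature columns and rows below are invented stand-ins, not bank data), a minimal Spark ML fit-and-score loop looks like this:

```python
# Minimal Spark ML classification sketch. Illustrative only: the feature
# columns (utilization, delinquencies) and rows are hypothetical.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("risk-model-sketch").getOrCreate()

# Tiny stand-in for bureau/performance data; label 1 = default.
rows = [(0.20, 0, 0), (0.85, 3, 1), (0.40, 1, 0), (0.95, 5, 1)]
df = spark.createDataFrame(rows, ["utilization", "delinquencies", "label"])

# Spark ML estimators expect a single vector column of features.
assembler = VectorAssembler(
    inputCols=["utilization", "delinquencies"], outputCol="features"
)
train = assembler.transform(df)

# Fit and score; in production the scoring step runs inside a pipeline job.
model = LogisticRegression().fit(train)
model.transform(train).select("features", "prediction").show()
spark.stop()
```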

Posted 1 month ago

Apply

5 - 10 years

15 - 30 Lacs

Noida, Gurugram, Delhi / NCR

Hybrid

Source: Naukri

Skills. Mandatory: SQL, Python, Databricks, Spark/PySpark. Good to have: MongoDB, Dataiku DSS, Databricks. Experience in data processing using Python/Scala. Advanced working SQL knowledge and expertise using relational databases. Need early joiners.

Required candidate profile: ETL development tools like Databricks/Airflow/Snowflake; expert in building and optimizing big data pipelines, architectures, and data sets; proficient in Big Data tools and the surrounding ecosystem.

Posted 1 month ago

Apply

5 - 8 years

0 Lacs

Pune, Maharashtra, India

Hybrid

Source: LinkedIn

About Company: Our client is a prominent Indian multinational corporation specializing in information technology (IT), consulting, and business process services, headquartered in Bengaluru, with gross revenue of ₹222.1 billion, a global workforce of 234,054, and a NASDAQ listing. It operates in over 60 countries and serves clients across various industries, including financial services, healthcare, manufacturing, retail, and telecommunications. The company consolidated its cloud, data, analytics, AI, and related businesses under the tech services business line. Major delivery centers in India include Chennai, Pune, Hyderabad, Bengaluru, Kochi, Kolkata, and Noida.

· Job Title: ETL Testing + Java + API Testing + UNIX Commands
· Location: Pune [Kharadi] (Hybrid)
· Experience: 7+ years [with 7+ years of relevant experience in ETL Testing]
· Job Type: Contract to hire
· Notice Period: Immediate joiners
· Mandatory Skills: ETL Testing, Java, API Testing, UNIX Commands

Job Description:
1) Key Responsibilities:
- Extensive experience in validating ETL processes, ensuring accurate data extraction, transformation, and loading across multiple environments.
- Proficient in Java programming, with the ability to understand and write Java code when required.
- Advanced skills in SQL for data validation, querying databases, and ensuring data consistency and integrity throughout the ETL process.
- Expertise in utilizing Unix commands to manage test environments, handle file systems, and execute system-level tasks.
- Proficient in creating shell scripts to automate testing processes, enhancing productivity and reducing manual intervention.
- Ensuring that data transformations and loads are accurate, with strong attention to identifying and resolving discrepancies in the ETL process.
- Focused on automating repetitive tasks and optimizing testing workflows to increase overall testing efficiency.
- Write and execute automated test scripts using Java to ensure the quality and functionality of ETL solutions.
- Utilize Unix commands and shell scripting to automate repetitive tasks and manage system processes.
- Collaborate with cross-functional teams, including data engineers, developers, and business analysts, to ensure the ETL processes meet business requirements.
- Ensure that data transformations, integrations, and pipelines are robust, secure, and efficient.
- Troubleshoot data discrepancies and perform root cause analysis for failed data loads.
- Create comprehensive test cases, execute them, and document test results for all data flows.
- Actively participate in the continuous improvement of ETL testing processes and methodologies.
- Experience with version control systems (e.g., Git) and integrating testing into CI/CD pipelines.

2) Tools & Technologies (Good to Have):
- Experience with Hadoop ecosystem tools such as HDFS, MapReduce, Hive, and Spark for handling large-scale data processing and storage.
- Knowledge of NiFi for automating data flows, transforming data, and integrating different systems seamlessly.
- Experience with tools like Postman, SoapUI, or RestAssured to validate REST and SOAP APIs, ensuring correct data exchange and handling of errors.
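To make the "SQL for data validation" requirement concrete, here is a small self-contained sketch of a source-vs-target reconciliation check, the bread and butter of ETL testing. It uses an in-memory SQLite database so it runs anywhere; the table names and rows are hypothetical:

```python
# Source-vs-target reconciliation sketch for ETL testing. Illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Stand-ins for a source extract and the loaded target table.
cur.execute("CREATE TABLE src (id INTEGER, amount REAL)")
cur.execute("CREATE TABLE tgt (id INTEGER, amount REAL)")
cur.executemany("INSERT INTO src VALUES (?, ?)", [(1, 10.0), (2, 20.0), (3, 30.0)])
cur.executemany("INSERT INTO tgt VALUES (?, ?)", [(1, 10.0), (2, 20.0)])

# Check 1: row counts must match after the load.
src_count = cur.execute("SELECT COUNT(*) FROM src").fetchone()[0]
tgt_count = cur.execute("SELECT COUNT(*) FROM tgt").fetchone()[0]
if src_count != tgt_count:
    print(f"row count mismatch: src={src_count} tgt={tgt_count}")

# Check 2: keys present in the source but missing from the target.
missing = cur.execute("SELECT id FROM src EXCEPT SELECT id FROM tgt").fetchall()
print("missing ids:", [row[0] for row in missing])
conn.close()
```

The same two queries, pointed at the real source and target databases, are what testers typically wrap in shell scripts or CI jobs.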

Posted 1 month ago

Apply

7 - 12 years

50 - 75 Lacs

Bengaluru

Work from Office

Source: Naukri

---- What the Candidate Will Do ----

Partner with engineers, analysts, and product managers to define technical solutions that support business goals. Contribute to the architecture and implementation of distributed data systems and platforms. Identify inefficiencies in data processing and proactively drive improvements in performance, reliability, and cost. Serve as a thought leader and mentor in data engineering best practices across the organization.

---- Basic Qualifications ----

7+ years of hands-on experience in software engineering with a focus on data engineering. Proficiency in at least one programming language such as Python, Java, or Scala. Strong SQL skills and experience with large-scale data processing frameworks (e.g., Apache Spark, Flink, MapReduce, Presto). Demonstrated experience designing, implementing, and operating scalable ETL pipelines and data platforms. Proven ability to work collaboratively across teams and communicate technical concepts to diverse stakeholders.

---- Preferred Qualifications ----

Deep understanding of data warehousing concepts and data modeling best practices. Hands-on experience with Hadoop ecosystem tools (e.g., Hive, HDFS, Oozie, Airflow, Spark, Presto). Familiarity with streaming technologies such as Kafka or Samza. Expertise in performance optimization, query tuning, and resource-efficient data processing. Strong problem-solving skills and a track record of owning systems from design to production.

Posted 1 month ago

Apply

4 - 9 years

3 - 7 Lacs

Hyderabad

Work from Office

Source: Naukri

Data Engineer. Full-time. 4+ years.

Responsibilities: design, develop, and maintain data pipelines and ETL processes; build and optimize data architectures for analytics and reporting; collaborate with data scientists and analysts to support data-driven initiatives; implement data security and governance best practices; monitor and troubleshoot data infrastructure and ensure high availability.

Skills: proficiency in data engineering tools (Hadoop, Spark, Kafka, etc.); strong SQL and programming skills (Python, Java, etc.); experience with cloud platforms (AWS, Azure, GCP); knowledge of data modeling, warehousing, and ETL processes; strong problem-solving and analytical abilities.

Posted 1 month ago

Apply

2 - 5 years

14 - 17 Lacs

Hyderabad

Work from Office

Source: Naukri

As an Application Developer, you will lead IBM into the future by translating system requirements into the design and development of customized systems in an agile environment. The success of IBM is in your hands as you transform vital business needs into code and drive innovation. Your work will power IBM and its clients globally, collaborating and integrating code into enterprise systems. You will have access to the latest education, tools, and technology, and a limitless career path with the world's technology leader. Come to IBM and make a global impact.

Responsibilities: manage end-to-end feature development and resolve challenges faced in implementing it; learn new technologies and apply them to feature development within the time frame provided; manage debugging, root cause analysis, and fixing of the issues reported on the Content Management back-end software system.

Required education: Bachelor's degree. Preferred education: Master's degree.

Required technical and professional expertise: more than 6 years of overall experience, with 4+ years of strong hands-on experience in Python and Spark; strong technical ability to understand, design, write, and debug applications in Python and PySpark; strong problem-solving skills.

Preferred technical and professional experience: hands-on experience with cloud technology (AWS/GCP/Azure).

Posted 1 month ago

Apply

5 - 8 years

0 Lacs

Pune, Maharashtra, India

Hybrid

Source: LinkedIn

Key Result Areas and Activities:
- ETL Pipeline Development and Maintenance – Design, develop, and maintain ETL pipelines using Cloudera tools such as Apache NiFi, Apache Flume, and Apache Spark; create and maintain comprehensive documentation for data pipelines, configurations, and processes.
- Data Integration and Processing – Integrate and process data from diverse sources including relational databases, NoSQL databases, and external APIs.
- Performance Optimization – Optimize the performance and scalability of Hadoop components (HDFS, YARN, MapReduce, Hive, Spark) to ensure efficient data processing; identify and resolve issues related to data pipelines, system performance, and data integrity.
- Data Quality and Transformation – Implement data quality checks and manage data transformation processes to ensure accuracy and consistency.
- Data Security and Compliance – Apply data security measures and ensure compliance with data governance policies and regulatory requirements.

Essential Skills: proficiency in Cloudera Data Platform (CDP) - Cloudera Data Engineering; proven track record of successful data lake implementations and pipeline development; knowledge of data lakehouse architectures and their implementation; hands-on experience with Apache Spark and Apache Airflow within the Cloudera ecosystem; proficiency in programming languages such as Python, Java, Scala, and shell; exposure to containerization technologies (e.g., Docker, Kubernetes) and a system-level understanding of data structures, algorithms, distributed storage, and compute.

Desirable Skills: experience with other CDP services like DataFlow and Stream Processing; familiarity with cloud environments such as AWS, Azure, or Google Cloud Platform; understanding of data governance and data quality principles; CCP Data Engineer certification.

Qualifications: 7+ years of experience in Cloudera/Hadoop/Big Data engineering or related roles; Bachelor's or Master's degree in Computer Science, Engineering, or a related field.

Qualities: can influence and implement change; demonstrates confidence, strength of conviction, and sound decisions. Believes in dealing with a problem head-on; approaches it in a logical and systematic manner; is persistent and patient; can independently tackle the problem, is not over-critical of the factors that led to it, and is practical about it; follows up with developers on related issues. Able to consult, write, and present persuasively. Able to work in a self-organized and cross-functional team. Able to iterate based on new information, peer reviews, and feedback. Able to work seamlessly with clients across multiple geographies. Research-focused mindset. Proficiency in English (read/write/speak) and communication over email. Excellent analytical, presentation, reporting, documentation, and interactive skills.
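This listing pairs Spark with Apache Airflow for orchestration. Here is a minimal sketch of an Airflow DAG with the ingest, transform, quality-check shape such pipelines take; the task names and bodies are invented, and the `schedule` argument assumes Airflow 2.4 or newer:

```python
# Minimal Airflow DAG sketch. Illustrative only: task bodies are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest():
    print("pull source data into the landing zone")

def transform():
    print("run the Spark transformation job")

def quality_check():
    print("validate row counts and null rates")

with DAG(
    dag_id="pipeline_sketch",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_ingest = PythonOperator(task_id="ingest", python_callable=ingest)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_check = PythonOperator(task_id="quality_check", python_callable=quality_check)

    # Dependencies: ingest runs first, then transform, then the quality gate.
    t_ingest >> t_transform >> t_check
```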

Posted 1 month ago

Apply

5 - 8 years

8 - 30 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Company Overview: Viraaj HR Solutions is dedicated to delivering top-tier HR services and talent acquisition strategies to help companies throughout India thrive. Our mission is to connect skilled professionals with excellent opportunities, fostering a culture of collaboration, integrity, and innovation. We pride ourselves on understanding the unique needs of both our clients and candidates, ensuring a perfect fit for every role. At Viraaj HR Solutions, we prioritize our people's growth and development, making us a dynamic and rewarding workplace.

Role Responsibilities: design and implement scalable Big Data solutions using Hadoop technologies; develop and maintain ETL processes to manage and process large data sets; collaborate with data architects and analysts to gather requirements and deliver solutions; optimize existing Hadoop applications to maximize efficiency and performance; write, test, and maintain complex SQL queries to extract and manipulate data; implement data models and strategies that accommodate large-scale data processing; conduct data profiling and analysis to ensure data integrity and accuracy; utilize MapReduce frameworks to execute data processing tasks; work closely with data scientists to facilitate exploratory data analysis; ensure compliance with data governance and privacy regulations; participate in code reviews and maintain documentation for all development processes; troubleshoot and resolve performance bottlenecks and other technical issues; stay current with technology trends and tools in Big Data and cloud platforms; train junior developers and assist in their professional development; contribute to team meeting discussions regarding project status and ideas for improvement.

Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related field; proven experience as a Big Data Developer or in a similar role; strong foundational knowledge of the Hadoop ecosystem (HDFS, Hive, Pig, etc.); proficiency in programming languages such as Java, Python, or Scala; experience with database management systems (SQL and NoSQL); familiarity with data processing frameworks like Apache Spark; understanding of data pipeline architectures and data integration techniques; knowledge of cloud computing services (AWS, Azure, or Google Cloud); exceptional problem-solving skills and attention to detail; strong communication skills and the ability to work in a team environment; experience with data visualization tools (Tableau, Power BI, etc.) is a plus; ability to work under pressure and meet tight deadlines; adaptability to new technologies and platforms as they emerge; certifications in Big Data technologies would be an advantage; willingness to learn and grow in the field of data science.

Skills: data visualization, Spark, Java, cloud computing (AWS, Azure, Google Cloud), SQL proficiency, Apache Spark, data warehousing, data visualization tools, NoSQL, Scala, Python, ETL, SQL, Hadoop, MapReduce, big data analytics, big data

Posted 1 month ago

Apply

6 - 11 years

19 - 27 Lacs

Haryana

Work from Office

Source: Naukri

About Company / Job Description. Key responsibilities:
1. Understand, implement, and automate ETL pipelines in line with industry best standards.
2. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, designing infrastructure for greater scalability, etc.
3. Develop, integrate, test, and maintain existing and new applications.
4. Design and create data pipelines (data lakes / data warehouses) for real-world energy analytics solutions.
5. Expert-level proficiency in Python (preferred) for automating everyday tasks.
6. Strong understanding of and experience in distributed computing frameworks, particularly Spark, Spark SQL, Kafka, Spark Streaming, Hive, Azure Databricks, etc. (see the streaming sketch after this listing).
7. Limited experience using other leading cloud platforms, preferably Azure.
8. Hands-on experience with Azure Data Factory, Logic Apps, Analysis Services, Azure Blob Storage, etc.
9. Ability to work in a team in an agile setting, familiarity with JIRA, and a clear understanding of how Git works.
10. Must have 5-7 years of experience.
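Point 6 above names Spark Streaming and Kafka. For orientation, here is a minimal Spark Structured Streaming sketch that reads a Kafka topic and maintains running counts; the broker address and topic name are hypothetical, and the job needs the spark-sql-kafka connector package on the classpath:

```python
# Kafka -> Spark Structured Streaming sketch. Illustrative only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("streaming-sketch").getOrCreate()

# Read the topic as an unbounded DataFrame of Kafka records.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "energy-readings")
    .load()
)

# Running count of records per key, updated as new messages arrive.
counts = (
    events.select(F.col("key").cast("string").alias("key"))
    .groupBy("key")
    .count()
)

# Print each updated result table to the console until interrupted.
query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()
```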

Posted 1 month ago

Apply

4 - 8 years

12 - 22 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Source: Naukri

Warm greetings from SP Staffing!! Role: Big Data Developer. Experience required: 4 to 8 years. Work location: Bangalore/Chennai/Pune/Delhi/Hyderabad/Kochi. Required skills: Spark and Scala. Interested candidates can send resumes to nandhini.s@spstaffing.in.

Posted 1 month ago

Apply

2 - 5 years

14 - 17 Lacs

Hyderabad

Work from Office

Source: Naukri

As an Application Developer, you will lead IBM into the future by translating system requirements into the design and development of customized systems in an agile environment. The success of IBM is in your hands as you transform vital business needs into code and drive innovation. Your work will power IBM and its clients globally, collaborating and integrating code into enterprise systems. You will have access to the latest education, tools, and technology, and a limitless career path with the world's technology leader. Come to IBM and make a global impact.

Responsibilities: manage end-to-end feature development and resolve challenges faced in implementing it; learn new technologies and apply them to feature development within the time frame provided; manage debugging, root cause analysis, and fixing of the issues reported on the Content Management back-end software system.

Required education: Bachelor's degree. Preferred education: Master's degree.

Required technical and professional expertise: more than 6 years of overall experience, with 4+ years of strong hands-on experience in Python and Spark; strong technical ability to understand, design, write, and debug applications in Python and PySpark; strong problem-solving skills.

Preferred technical and professional experience: hands-on experience with cloud technology (AWS/GCP/Azure).

Posted 1 month ago

Apply

3 - 7 years

10 - 14 Lacs

Chennai

Work from Office

Source: Naukri

The Developer leads cloud application development and deployment. The developer's responsibility is to lead the execution of a project by working with a senior-level resource on assigned development/deployment activities, and to design, build, and maintain cloud environments focusing on uptime, access, control, and network security using automation and configuration management tools.

Required education: Bachelor's degree. Preferred education: Master's degree.

Required technical and professional expertise: strong proficiency in Java, Spring Framework, Spring Boot, and RESTful APIs; excellent understanding of OOP and design patterns; strong knowledge of ORM tools like Hibernate or JPA and Java-based microservices frameworks; hands-on experience with Spring Boot microservices; strong knowledge of microservice logging, monitoring, debugging, and testing; in-depth knowledge of relational databases (e.g., MySQL); experience with container platforms such as Docker and Kubernetes; experience with messaging platforms such as Kafka or IBM MQ; good understanding of test-driven development; familiarity with Ant, Maven, or other build automation frameworks; good knowledge of basic UNIX commands.

Preferred technical and professional experience: experience in concurrent design and multi-threading.

Primary skills: Core Java, Spring Boot, Java2/EE, microservices; Hadoop ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.); Spark. Good to have: Python.

Posted 1 month ago

Apply

5 - 8 years

0 Lacs

Pune, Maharashtra, India

Hybrid

Source: LinkedIn

Our Purpose: Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we're helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart, and accessible. Our technology and innovation, partnerships, and networks combine to deliver a unique set of products and services that help people, businesses, and governments realize their greatest potential.

Title and Summary: Senior Software Engineer - Java/Scala development, Hadoop, Spark

Overview: The Loyalty Rewards & Segments program is looking for a Senior Software Engineer to drive the solutions that enable our customers to optimize their loyalty programs from beginning to end. We build and manage global solutions that enable merchants and issuers to offer points, miles, or cashback benefits seamlessly to their cardholders. Customers are able to build and implement custom promotions using our sophisticated rules engine to target spend categories or increase engagement with cardholders. The ideal candidate is passionate about the technology and has a proven record of developing high-quality, secure code that is modular, functional, and testable.

Role: The Senior Software Engineer will be responsible for developing solutions with a high level of innovation, high quality, and faster time to market. This position interacts with product managers, engineering leaders, architects, software developers, and business operations on the definition and delivery of highly scalable and secure solutions.

The role includes:
- Hands-on development, writing high-quality, secure code that is modular, functional, and testable.
- Creating or introducing, testing, and deploying new technology to optimize the service.
- Contributing to all parts of the software's development, including design, development, documentation, and testing, with strong ownership.
- Communicating, collaborating, and working effectively in a global environment.
- Ensuring application stability in production by creating solutions that provide operational health.
- Mentoring and leading new developers while driving modern engineering practices.

All About You:
- Strong analytical and excellent problem-solving skills, and experience working in an Agile environment.
- Experience with XP, TDD, and BDD in the software development process.
- Proficiency in Java, Scala, and SQL (Oracle, Postgres, H2, Hive, and HBase), and in building pipelines.
- Expertise and deep understanding of the Hadoop ecosystem, including HDFS, YARN, and MapReduce; tools like Hive and Pig/Flume; data processing frameworks like Spark; cloud platforms; and orchestration tools such as Apache NiFi/Airflow and Apache Kafka.
- Expertise in web applications (Spring Boot, Angular, Java, PCF), web services (REST/OAuth), and Big Data technologies (Hadoop, Spark, Hive, HBase) and tools (Sonar, Splunk, Dynatrace).
- Expertise in SQL, Oracle, and Postgres.
- Experience in microservices and event-driven architecture.
- Soft skills: strong verbal and written communication to demo features to product owners; strong leadership qualities to mentor and support junior team members; proactive, with the initiative to take development work from inception to implementation.
- Familiarity with secure coding standards (e.g., OWASP, CWE, SEI CERT) and vulnerability management.

Corporate Security Responsibility: All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization; therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must: abide by Mastercard's security policies and practices; ensure the confidentiality and integrity of the information being accessed; report any suspected information security violation or breach; and complete all periodic mandatory security trainings in accordance with Mastercard's guidelines. R-246311

Posted 2 months ago

Apply

6 - 9 years

14 - 17 Lacs

Hyderabad

Work from Office

Source: Naukri

As an Application Developer, you will lead IBM into the future by translating system requirements into the design and development of customized systems in an agile environment. The success of IBM is in your hands as you transform vital business needs into code and drive innovation. Your work will power IBM and its clients globally, collaborating and integrating code into enterprise systems. You will have access to the latest education, tools, and technology, and a limitless career path with the world's technology leader. Come to IBM and make a global impact.

Responsibilities: manage end-to-end feature development and resolve challenges faced in implementing it; learn new technologies and apply them to feature development within the time frame provided; manage debugging, root cause analysis, and fixing of the issues reported on the Content Management back-end software system.

Required education: Bachelor's degree. Preferred education: Master's degree.

Required technical and professional expertise: more than 6 years of overall experience, with 4+ years of strong hands-on experience in Python and Spark; strong technical ability to understand, design, write, and debug applications in Python and PySpark; strong problem-solving skills.

Preferred technical and professional experience: hands-on experience with cloud technology (AWS/GCP/Azure).

Posted 2 months ago

Apply

3 - 5 years

5 - 7 Lacs

Mysore

Work from Office

Source: Naukri

As an Application Developer, you will lead IBM into the future by translating system requirements into the design and development of customized systems in an agile environment. The success of IBM is in your hands as you transform vital business needs into code and drive innovation. Your work will power IBM and its clients globally, collaborating and integrating code into enterprise systems. You will have access to the latest education, tools, and technology, and a limitless career path with the world's technology leader. Come to IBM and make a global impact.

Responsibilities: manage end-to-end feature development and resolve challenges faced in implementing it; learn new technologies and apply them to feature development within the time frame provided; manage debugging, root cause analysis, and fixing of the issues reported on the Content Management back-end software system.

Required education: Bachelor's degree. Preferred education: Master's degree.

Required technical and professional expertise: more than 6 years of overall experience, with 4+ years of strong hands-on experience in Python and Spark; strong technical ability to understand, design, write, and debug applications in Python and PySpark; strong problem-solving skills.

Preferred technical and professional experience: hands-on experience with cloud technology (AWS/GCP/Azure).

Posted 2 months ago

Apply

2 - 3 years

4 - 8 Lacs

Bengaluru

Work from Office

Source: Naukri

Job ID/Reference Code: INFSYS-NAUKRI-210683. Work Experience: 2-3 years. Job Title: Spark Developer.

Responsibilities: A day in the life of an Infoscion: as part of the Infosys consulting team, your primary role would be to actively aid the consulting team in different phases of the project, including problem definition, effort estimation, diagnosis, solution generation, design, and deployment. You will explore alternatives to the recommended solutions based on research that includes literature surveys, information available in public domains, vendor evaluation information, etc., and build POCs. You will create requirement specifications from the business needs, define the to-be processes, and produce detailed functional designs based on requirements. You will support configuring solution requirements on the products; understand any issues, diagnose their root cause, seek clarifications, and then identify and shortlist solution alternatives. You will also contribute to unit-level and organizational initiatives with the objective of providing high-quality, value-adding solutions to customers. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Technical and Professional Requirements: Primary skills: Technology->Big Data - Data Processing->Spark. Preferred Skills: Technology->Big Data - Data Processing->Spark.

Additional Responsibilities: ability to work with clients to identify business challenges and contribute to client deliverables by refining, analyzing, and structuring relevant data; awareness of the latest technologies and trends; logical thinking and problem-solving skills along with an ability to collaborate; ability to assess current processes, identify improvement areas, and suggest technology solutions; knowledge of one or two industry domains.

Educational Requirements: MCA, MSc, Bachelor of Engineering, BBA, BCom. Service Line: Data & Analytics Unit. * Location of posting is subject to business requirements.

Posted 2 months ago

Apply

3 - 5 years

4 - 8 Lacs

Bengaluru

Work from Office

Source: Naukri

Job ID/Reference Code: INFSYS-NAUKRI-210690. Work Experience: 3-5 years. Job Title: Spark Developer.

Responsibilities: Spark expertise: expert proficiency in Spark; ability to design and implement efficient data processing workflows; experience with Spark SQL and DataFrames; good exposure to Big Data architectures and a good understanding of the Big Data ecosystem; some experience building frameworks on Hadoop; good DB knowledge with SQL tuning experience. Good to have: experience with Python and APIs, and exposure to Kafka.

Technical and Professional Requirements: Primary skills: Technology->Big Data - Data Processing->Spark. Preferred Skills: Technology->Big Data - Data Processing->Spark.

Additional Responsibilities: ability to work with clients to identify business challenges and contribute to client deliverables by refining, analyzing, and structuring relevant data; awareness of the latest technologies and trends; logical thinking and problem-solving skills along with an ability to collaborate; ability to assess current processes, identify improvement areas, and suggest technology solutions; knowledge of one or two industry domains.

Educational Requirements: MCA, MSc, Bachelor of Engineering, BBA, BCom, BSc. Service Line: Data & Analytics Unit. * Location of posting is subject to business requirements.
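Since "experience with Spark SQL and DataFrames" is the core requirement here, a short illustrative sketch of how the two interoperate (the data and column names are invented):

```python
# Spark SQL over a registered DataFrame. Illustrative sketch only.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("spark-sql-sketch").getOrCreate()

df = spark.createDataFrame(
    [("orders", 120), ("returns", 8), ("orders", 95)],
    ["event_type", "value"],
)

# Register the DataFrame as a temporary view so plain SQL can query it.
df.createOrReplaceTempView("events")

spark.sql("""
    SELECT event_type, SUM(value) AS total
    FROM events
    GROUP BY event_type
""").show()
spark.stop()
```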

Posted 2 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies