
344 HDFS Jobs - Page 8

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

4.0 - 8.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

The Applications Development Intermediate Programmer Analyst participates in the establishment and execution of new or modified application systems and programs in collaboration with the Technology team. The primary goal of the role is to contribute to applications systems analysis and programming activities.

You will use your knowledge of applications development procedures and concepts, and a basic understanding of other technical areas, to identify and define necessary system enhancements; this involves using script tools and analyzing and interpreting code. You will consult with users, clients, and various technology groups to address issues, recommend programming solutions, and provide installation and support for customer exposure systems. You will apply fundamental knowledge of programming languages to design specifications, analyze applications to identify vulnerabilities and security issues, conduct testing and debugging, and serve as an advisor or coach to new or lower-level analysts. The role requires you to identify problems, analyze information, and make evaluative judgments to recommend and implement solutions, resolving issues by selecting solutions based on acquired technical experience and guided by precedents. You should be able to operate with a limited level of direct supervision, exercise independence of judgment and autonomy, and act as a subject matter expert to senior stakeholders and/or other team members. It is crucial that you appropriately assess risk when making business decisions, with particular consideration for the firm's reputation and for safeguarding Citigroup, its clients, and its assets.

Key Responsibilities:
- Design and implement ETL pipelines using PySpark and Big Data tools on platforms such as Hadoop, Hive, and HDFS.
- Write scalable Python code for machine learning preprocessing tasks using libraries such as pandas and scikit-learn.
- Develop data pipelines to support model training, evaluation, and inference.

Skills required:
- Proficiency in Python programming, with experience in PySpark for large-scale data processing.
- Hands-on experience with Big Data technologies such as Hadoop, Hive, and HDFS.
- Exposure to machine learning workflows, the model lifecycle, and data preparation.
- Experience with ML libraries such as scikit-learn, XGBoost, TensorFlow, and PyTorch.
- Exposure to cloud platforms (AWS/GCP) for data and AI workloads.

Qualifications:
- 4-8 years of relevant experience in the financial services industry
- Intermediate-level experience in an Applications Development role
- Clear and concise written and verbal communication skills
- Proven problem-solving and decision-making abilities
- Ability to work under pressure, manage deadlines, and adapt to unexpected changes in expectations or requirements

Education:
- Bachelor's degree/University degree or equivalent experience

This job description provides a high-level overview of the primary responsibilities; other job-related duties may be assigned as required.
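As a loose illustration of the PySpark ETL responsibility listed above (not the employer's actual pipeline), the sketch below reads a Hive table, applies basic cleansing, and writes partitioned Parquet back to HDFS. The table and path names are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

# Minimal PySpark ETL sketch: Hive table -> cleanse -> partitioned Parquet on HDFS.
spark = (
    SparkSession.builder
    .appName("customer-exposure-etl")
    .enableHiveSupport()
    .getOrCreate()
)

raw = spark.table("staging.customer_exposure")            # hypothetical Hive table

clean = (
    raw.dropDuplicates(["account_id", "as_of_date"])      # remove re-delivered records
       .filter(F.col("exposure_amount").isNotNull())
       .withColumn("exposure_amount", F.col("exposure_amount").cast("double"))
)

(clean.write
      .mode("overwrite")
      .partitionBy("as_of_date")
      .parquet("hdfs:///data/curated/customer_exposure"))  # hypothetical HDFS path
```

The curated output could then be sampled with `toPandas()` for the pandas/scikit-learn preprocessing work the posting mentions.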

Posted 2 months ago

Apply

13.0 - 20.0 years

30 - 45 Lacs

Pune

Hybrid

Hi, wishes from GSN! Pleasure connecting with you. We have been in corporate search services, identifying and placing talented professionals for our reputed IT and non-IT clients in India, for the last 20 years. At present, GSN is hiring a DATA ENGINEERING - Solution Architect for one of our leading MNC clients. Details below:

1. Work location: Pune
2. Job role: Data Engineering - Solution Architect
3. Experience: 13+ years
4. CTC range: Rs. 35 LPA to Rs. 50 LPA
5. Work type: WFO Hybrid

****** Looking for SHORT JOINERS ******

Job Description - who we are looking for:

Architectural Vision & Strategy: Define and articulate the technical vision, strategy, and roadmap for Big Data, data streaming, and NoSQL solutions, aligning with the overall enterprise architecture and business goals.

Required skills:
- 13+ years of progressive experience in software development, data engineering, and solution architecture roles, with a strong focus on large-scale distributed systems.
- Expertise in Big Data technologies. Apache Spark: deep expertise in Spark architecture, Spark SQL, Spark Streaming, performance tuning, and optimization techniques, with experience in both batch and real-time processing paradigms. Hadoop ecosystem: strong understanding of HDFS, YARN, Hive, and other related Hadoop components.
- Real-time data streaming with Apache Kafka: expert-level knowledge of Kafka architecture, topics, partitions, producers, consumers, Kafka Streams, KSQL, and best practices for high-throughput, low-latency data pipelines.
- NoSQL databases: in-depth experience with Couchbase (or MongoDB or Cassandra), including data modeling, indexing, querying (N1QL), replication, scaling, and operational best practices.
- API design & development: extensive experience designing and implementing robust, scalable, and secure APIs (RESTful, GraphQL) for data access and integration.
- Programming & code review: hands-on coding proficiency in at least one relevant language (Python, Scala, Java), with a preference for Python and/or Scala for data engineering tasks; proven experience leading and performing code reviews to ensure code quality, performance, and adherence to architectural guidelines.
- Cloud platforms: extensive experience designing and implementing solutions on at least one major cloud platform (AWS, Azure, GCP), leveraging their Big Data, streaming, and compute services.
- Database fundamentals: solid understanding of relational database concepts, SQL, and data warehousing principles.
- System design & architecture patterns: deep knowledge of architectural patterns (e.g., Microservices, Event-Driven Architecture, Lambda/Kappa Architecture, Data Mesh) and their application in data solutions.
- DevOps & CI/CD: familiarity with DevOps principles, CI/CD pipelines, infrastructure as code (IaC), and automated deployment strategies for data platforms.

****** Looking for SHORT JOINERS ******

If interested, call NAK at 9840035825 / 9244912300 for an immediate response.
Best, ANANTH | GSN | Google review: https://g.co/kgs/UAsF9W
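For orientation only, here is a minimal sketch of the Kafka-to-storage streaming pattern the required skills above revolve around, written with Spark Structured Streaming. Broker, topic, schema, and paths are invented, and the Kafka source assumes the spark-sql-kafka package is on the classpath.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

# Illustrative streaming job: consume a Kafka topic, parse JSON, land as Parquet.
spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
])

events = (
    spark.readStream
         .format("kafka")
         .option("kafka.bootstrap.servers", "broker1:9092")   # hypothetical broker
         .option("subscribe", "payments.events")              # hypothetical topic
         .load()
         .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
         .select("e.*")
)

query = (
    events.writeStream
          .format("parquet")
          .option("path", "hdfs:///data/raw/payments")        # hypothetical sink path
          .option("checkpointLocation", "hdfs:///chk/payments")
          .outputMode("append")
          .start()
)
query.awaitTermination()
```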

Posted 2 months ago

Apply

4.0 - 9.0 years

4 - 9 Lacs

Chennai

Work from Office

Mandatory skill set: Big Data Developer. Interested candidates can share their updated resume with karthigaa.chinnasamy@aspiresys.com. Thanks & Regards, Karthigaa Chinnasamy | HR - Talent Acquisition | Mobile: +91-9092938886 | Website: www.aspiresys.com | Blog: http://blog.aspiresys.com

Posted 2 months ago

Apply

8.0 - 10.0 years

32 - 35 Lacs

Hyderabad

Work from Office

Position Summary: MetLife established a Global Capability Center (MGCC) in India to scale and mature Data & Analytics and technology capabilities in a cost-effective manner and make MetLife future ready. The center is integral to Global Technology and Operations, with a focus on protecting and building MetLife IP, promoting reusability, and driving experimentation and innovation. The Data & Analytics team in India mirrors the global D&A team, with the objective of driving business value through trusted data, scaled capabilities, and actionable insights.

Role Value Proposition: MGCC is looking for a Senior Cloud Data Engineer responsible for building ETL/ELT, data warehousing, and reusable components using Azure, Databricks, and Spark. He/she will collaborate with business systems analysts, technical leads, project managers, and business/operations teams to build data enablement solutions across different LOBs and use cases.

Job Responsibilities:
- Collect, store, process, and analyze large datasets to build and implement extract, transform, load (ETL) processes.
- Develop metadata- and configuration-based reusable frameworks to reduce development effort.
- Develop quality code with performance optimizations in place at the development stage.
- Collaborate with the global team in driving project delivery and recommend development and performance improvements.
- Bring extensive experience with various database types and the knowledge to leverage the right one for the need.
- Demonstrate a strong understanding of data tools and the ability to use them to understand data and generate insights.
- Apply hands-on experience building and designing at-scale data lakes, data warehouses, and data stores for analytics consumption, on-premises and in the cloud (real-time as well as batch use cases).
- Interact with business analysts and functional analysts to gather requirements and implement ETL solutions.

Education, Technical Skills & Other Critical Requirements:
- Education: Bachelor's degree in computer science, engineering, or a related discipline.
- Experience: 8 to 10 years of working experience on Azure Cloud using Databricks or Synapse.
- Technical skills: experience transforming data using Python, Spark, or Scala; technical depth in the Cloud Architecture Framework, Lakehouse architecture, and OneLake solutions; experience implementing data ingestion and curation processes on Azure with tools such as Azure Data Factory, Databricks Workflows, Azure Synapse, Cosmos DB, Spark (Scala/Python), and Databricks; experience writing cloud-optimized code on Azure using Databricks, Synapse dedicated SQL pools and serverless pools, and Cosmos DB SQL APIs, including loading and consumption optimizations; scripting experience (shell/bash/PowerShell) is desirable; experience writing SQL and performing data analysis for data anomaly detection and data quality assurance.
- Other preferred skills: expertise in Python and experience writing Azure Functions using Python/Node.js; experience using Event Hubs for data integrations; working knowledge of Azure DevOps pipelines; self-starter with the ability to adapt to changing business needs.
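To make the "metadata- and configuration-based reusable framework" idea in the responsibilities concrete, here is a rough, hypothetical sketch in PySpark: each source is declared as configuration and one generic function performs the ingestion, so adding a source means adding config rather than code. Paths, formats, and table names are illustrative only.

```python
from pyspark.sql import SparkSession

# Config-driven ingestion sketch: declare sources as metadata, ingest generically.
spark = SparkSession.builder.appName("config-driven-ingest").getOrCreate()

SOURCES = [
    {
        "name": "policies",
        "format": "csv",
        "path": "abfss://landing@examplelake.dfs.core.windows.net/policies/",  # hypothetical ADLS path
        "options": {"header": "true"},
        "target": "curated.policies",
    },
    # further sources are added here as config entries, not as new code
]

def ingest(cfg: dict) -> None:
    """Read one configured source and write it as a managed table."""
    df = spark.read.format(cfg["format"]).options(**cfg["options"]).load(cfg["path"])
    df.write.mode("overwrite").saveAsTable(cfg["target"])

for source in SOURCES:
    ingest(source)
```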

Posted 2 months ago

Apply

5.0 - 12.0 years

0 Lacs

Pune, Maharashtra

On-site

As a seasoned Testing Sr. Manager in Treasury and FP&A Technology, you will play a crucial role in defining, planning, and executing the testing automation strategy for the Global Funds Transfer Pricing Application. Your expertise in automation tools, agile methodologies, and quality engineering best practices will be instrumental in transforming and enhancing the current testing automation landscape.

Your responsibilities will include continuously monitoring automation coverage, enhancing the existing automation framework, designing scalable automation frameworks for UI, API, and data validation testing on a Big Data/Hadoop platform, collaborating with various teams to integrate automation into the agile SDLC, and improving efficiency in regression and end-to-end testing through automation. You will also develop robust test scripts, maintain automation suites, improve overall test coverage and release quality, establish and track key QA metrics, advocate for best practices in test automation, drive the adoption of AI/ML-based testing tools, and manage, mentor, and upskill a team of test engineers in automation practices.

Qualifications for this role include 12+ years of experience in functional and non-functional software testing, 5+ years of experience as a Test Automation Lead, expertise in test automation frameworks/tools such as Jenkins, Selenium, Cucumber, TestNG, JUnit, and Cypress, strong programming skills in Java, Python, or another scripting language, expertise in SQL, experience with API testing and performance testing tools, familiarity with Agile, Scrum, and DevOps practices, knowledge of functional test tools such as JIRA, and familiarity with cloud-based test execution and big data testing. Preferred qualifications include certifications such as ISTQB Advanced, Certified Agile Tester, or Selenium WebDriver certification, exposure to banking/financial domains, communication and diplomacy skills, and a passion for automation in quality engineering. A Bachelor's/University degree is required; a Master's degree is preferred.

If you are motivated by innovation, self-driven, results-oriented, and eager to excel in test automation within the banking/financial industry, this role offers an exciting opportunity to lead and drive quality engineering practices in a dynamic and collaborative environment.

Posted 2 months ago

Apply

2.0 - 6.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Software Engineer II at JPMorgan Chase within the Employee Platforms team, you will have the opportunity to advance your software engineering career while working with a team of agile professionals. Your main responsibility will be to design and deliver cutting-edge technology products in a secure, stable, and scalable manner. You will play a crucial role in developing technology solutions across different technical areas to support the firm's business objectives.

Your key responsibilities will include executing innovative software solutions, developing high-quality production code, and identifying opportunities to enhance operational stability. You will lead evaluation sessions with external vendors and internal teams to drive architectural designs and technical applicability. Additionally, you will collaborate with various teams to drive feature development and produce documentation of cloud solutions.

To qualify for this role, you should have formal training or certification in software engineering concepts along with at least 2 years of practical experience. You must possess advanced skills in system design, application development, and testing. Proficiency in programming languages, automation, and continuous delivery methods is essential, as is an in-depth understanding of agile methodologies such as CI/CD, Application Resiliency, and Security. Knowledge of Python, Big Data technologies, and financial services industry IT systems will be advantageous.

Your success in this role will depend on your ability to innovate, collaborate with stakeholders, and excel in a diverse and improvement-focused environment. You should have a strong track record of technology implementation projects, along with expertise in software applications and technical processes within a technical discipline. Preferred skills include teamwork, initiative, and knowledge of financial instruments and specific technologies such as Core Java 8, Spring, JPA/Hibernate, and React JavaScript.

Posted 2 months ago

Apply

6.0 - 10.0 years

0 Lacs

Maharashtra

On-site

NTT DATA is looking for a Data Ingest Engineer to join the team in Pune, Maharashtra (IN-MH), India (IN). As a Data Ingest Engineer, you will be part of the Ingestion team of the DRIFT data ecosystem, focusing on ingesting data in a timely, complete, and comprehensive manner using the latest technology available to Citi. Your role will involve leveraging new and creative methods for repeatable data ingestion from various sources while ensuring the highest quality data is provided to downstream partners.

Responsibilities include partnering with management teams to integrate functions effectively, identifying necessary system enhancements for new products and process improvements, and resolving high-impact problems and projects through evaluation of complex business processes and industry standards. You will provide expertise in applications programming, ensure application design aligns with the overall architecture blueprint, and develop standards for coding, testing, debugging, and implementation. Additionally, you will analyze issues, develop innovative solutions, and mentor mid-level developers and analysts.

The ideal candidate should have 6-10 years of experience in applications development or systems analysis, with extensive experience in system analysis and programming of software applications. Proficiency in application development using Java, Scala, and Spark, familiarity with event-driven applications and streaming data, and experience with various schemas, data types, ELT methodologies, and formats are required. Experience working with Agile and version control tool sets, leadership skills, and clear communication abilities are also essential.

NTT DATA is a trusted global innovator of business and technology services, serving 75% of the Fortune Global 100. With experts in more than 50 countries and a strong partner ecosystem, NTT DATA is committed to helping clients innovate, optimize, and transform for long-term success. As part of the NTT Group, NTT DATA invests significantly in R&D to support organizations and society in moving confidently into the digital future. For more information, visit us at us.nttdata.com.

Posted 2 months ago

Apply

7.0 - 11.0 years

0 Lacs

Karnataka

On-site

As a Talend ETL Lead, you will lead the design and development of scalable ETL pipelines using Talend, integrate with big data platforms, and mentor junior developers. This is a high-impact, client-facing role requiring hands-on leadership and solution ownership.

Responsibilities:
- Lead the end-to-end development of ETL pipelines using Talend Data Fabric.
- Collaborate with data architects and business stakeholders to understand requirements.
- Build and optimize data ingestion, transformation, and loading processes.
- Ensure high performance, scalability, and reliability of data solutions.
- Mentor and guide junior developers in the team.
- Troubleshoot and resolve ETL-related issues quickly.
- Manage deployments and promote code through different environments.

Qualifications:
- 7+ years of experience in ETL/data engineering.
- Strong hands-on experience with Talend Data Fabric.
- Solid understanding of SQL and the Hadoop ecosystem (HDFS, Hive, Pig, etc.).
- Experience building robust data ingestion pipelines.
- Excellent communication and leadership skills.

Posted 2 months ago

Apply

3.0 - 8.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Looking for: Hadoop Developer
Experience: 3 to 6 years

Requirement 1 - primary skills: Apache Hadoop and Apache Spark
Required skills:
- Strong in Hadoop and Spark architecture
- Hands-on knowledge of how HDFS, Hive, Impala, and Spark work
- Strong logical reasoning capabilities
- Strong hands-on experience with Hive/Impala/Spark query performance tuning concepts
- Good UNIX shell and Python/Scala scripting knowledge
- Working knowledge of GitHub, DevOps, CI/CD, and enterprise code management tools
- Familiarity with Java and Spark
- Strong collaboration and communication skills
- Strong team-player skills and excellent written and verbal communication skills
- Ability to create and maintain a positive environment of shared success
- Ability to execute and prioritize tasks and resolve issues without aid from a direct manager or project sponsor

Requirement 2 - primary skills: Apache Hadoop and UNIX shell scripting
Required skills:
- Knowledge of Hadoop architecture
- Strong UNIX shell scripting
- Hands-on knowledge of database systems and SQL querying
- Strong logical reasoning capabilities
- Working knowledge of GitHub, DevOps, CI/CD, and enterprise code management tools
- Familiarity with HDFS and Hive
- Strong collaboration and communication skills
- Strong team-player skills and excellent written and verbal communication skills
- Ability to create and maintain a positive environment of shared success
- Ability to execute and prioritize tasks and resolve issues without aid from a direct manager or project sponsor

Skills: Spark, Shell Script, Hadoop
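As a rough illustration of the "Spark query performance tuning" skill this listing asks for (not the client's actual workload), the sketch below shows three common levers: sizing shuffle partitions, broadcasting a small dimension table, and inspecting the physical plan. Table names are placeholders.

```python
from pyspark.sql import SparkSession, functions as F

# Spark tuning sketch: shuffle-partition sizing, broadcast join, plan inspection.
spark = (
    SparkSession.builder
    .appName("tuning-demo")
    .config("spark.sql.shuffle.partitions", "200")   # size to the data, not the default
    .enableHiveSupport()
    .getOrCreate()
)

facts = spark.table("dw.sales_fact")                 # large fact table (hypothetical)
dims  = spark.table("dw.store_dim")                  # small dimension: broadcast candidate

# Broadcasting the small side avoids a shuffle-heavy sort-merge join.
joined = facts.join(F.broadcast(dims), "store_id")

joined.explain()                                     # check the physical plan before running

daily = joined.groupBy("store_id", "sale_date").agg(F.sum("amount").alias("total"))
daily.write.mode("overwrite").saveAsTable("dw.daily_store_sales")
```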

Posted 2 months ago

Apply

8.0 - 12.0 years

0 Lacs

Karnataka

On-site

As a Site Reliability Engineering (SRE) Technical Leader on the Network Assurance Data Platform (NADP) team at ThousandEyes, you will be responsible for ensuring the reliability, scalability, and security of cloud and big data platforms. Your role will involve representing the NADP SRE team, working in a dynamic environment, and providing technical leadership in defining and executing the team's technical roadmap. Collaborating with cross-functional teams, including software development, product management, customers, and security teams, is essential. Your contributions will directly impact the success of machine learning (ML) and AI initiatives by ensuring a robust and efficient platform infrastructure aligned with operational excellence.

In this role, you will design, build, and optimize cloud and data infrastructure to ensure high availability, reliability, and scalability of big data and ML/AI systems. Collaboration with cross-functional teams will be crucial in creating secure, scalable solutions that support ML/AI workloads and enhance operational efficiency through automation. Troubleshooting complex technical problems, conducting root cause analyses, and contributing to continuous improvement efforts are key responsibilities. You will lead the architectural vision, shape the team's technical strategy and roadmap, and act as a mentor and technical leader to foster a culture of engineering and operational excellence. Engaging with customers and stakeholders to understand use cases and feedback, translating them into actionable insights, and effectively influencing stakeholders at all levels are essential aspects of the role. Strong programming skills are required to integrate software and systems engineering, building core data platform capabilities and automation to meet enterprise customer needs. Developing strategic roadmaps, processes, plans, and infrastructure to efficiently deploy new software components at enterprise scale while enforcing engineering best practices is also part of the role.

Qualifications for this position include 8-12 years of relevant experience and a bachelor's degree in computer science, engineering, or an equivalent field. Candidates should have the ability to design and implement scalable solutions with a focus on streamlining operations. Strong hands-on cloud experience, preferably AWS, is required, along with infrastructure-as-code skills, ideally with Terraform and EKS or Kubernetes. Proficiency in observability tools such as Prometheus, Grafana, Thanos, CloudWatch, OpenTelemetry, and the ELK stack is necessary. Writing high-quality code in Python, Go, or equivalent programming languages is essential, as is a good understanding of Unix/Linux systems, system libraries, file systems, and client-server protocols. Experience building cloud, big data, and/or ML/AI infrastructure, architecting software and infrastructure at scale, and certifications in cloud and security domains are beneficial.

Cisco emphasizes diversity and encourages candidates to apply even if they do not meet every single qualification. Diverse perspectives and skills are valued, and Cisco believes that diverse teams are better equipped to solve problems, innovate, and create a positive impact.

Posted 2 months ago

Apply

3.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As an experienced professional with 3-5 years of experience, you will work with a range of technologies including Azure Data Factory, Talend/SSIS, MSSQL, Azure, and MySQL. Your primary focus will be on Azure Data Factory, where you will apply your expertise to handle complex data analysis tasks effectively.

In this role, you will demonstrate advanced knowledge of Azure SQL DB & Synapse Analytics, Power BI, SSIS, SSRS, T-SQL, and Logic Apps. A solid understanding of Azure Data Lake and Azure services such as Analysis Services, SQL databases, Azure DevOps, and CI/CD processes is essential.

Furthermore, your responsibilities will include mastering data management, data warehousing, and business intelligence architecture. You will apply your experience in data modeling and database design, ensuring compliance with SQL Server best practices. Effective communication is key, as you will engage with stakeholders at various levels, and you will contribute to the preparation of design documents, unit test plans, and code review reports. Experience in an Agile environment, specifically with Scrum, Lean, or Kanban methodologies, will be advantageous. Additionally, familiarity with Big Data technologies such as the Spark framework, NoSQL databases, Azure Databricks, and the Hadoop ecosystem (Hive, Impala, HDFS) will be beneficial for this position.

Posted 2 months ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

You will be joining Lifesight as a Data Engineer in our Bengaluru office, playing a pivotal role in the Data and Business Intelligence organization. Your primary focus will be on leading deep data engineering projects and contributing to the growth of our data platform team. This is an exciting opportunity to shape our technical strategy and foster a strong data engineering team culture in India.

As a Data Engineer at Lifesight, you will be responsible for designing and constructing data platforms and services, managing data infrastructure in cloud environments, and enabling strategic business decisions across Lifesight products. Your role will involve building highly scalable, fault-tolerant distributed data processing systems, optimizing data quality in pipelines, and owning data mapping, transformations, and business logic. You will also engage in low-level system debugging and performance optimization, and actively participate in architecture discussions to drive new projects forward.

The ideal candidate will possess proficiency in Python and PySpark, along with a deep understanding of Apache Spark, Spark tuning, and building data frames. Experience with big data technologies such as HDFS, YARN, MapReduce, Hive, Kafka, and Airflow, as well as NoSQL databases and cloud platforms like AWS and GCP, is essential. You should have at least 5 years of professional experience in data or software engineering, demonstrating expertise in data quality, data engineering, and various big data frameworks and tools.

In summary, as a Data Engineer at Lifesight, you will have the opportunity to work on cutting-edge data projects, collaborate with a talented team of engineers, and contribute to the ongoing success and innovation of Lifesight's data platform.
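Since the posting above calls out Airflow alongside pipeline ownership, here is a toy Airflow DAG showing the shape of such a pipeline: a daily ingestion step followed by a validation step. The DAG id, schedule, and script paths are invented for illustration and are not Lifesight's actual jobs.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

# Toy daily pipeline: run a Spark ingestion job, then a data-quality check.
with DAG(
    dag_id="events_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    ingest = BashOperator(
        task_id="ingest_events",
        bash_command="spark-submit /opt/jobs/ingest_events.py {{ ds }}",  # hypothetical script
    )
    validate = BashOperator(
        task_id="validate_counts",
        bash_command="python /opt/jobs/validate_counts.py {{ ds }}",      # hypothetical script
    )
    ingest >> validate  # validation runs only after ingestion succeeds
```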

Posted 2 months ago

Apply

2.0 - 5.0 years

5 - 9 Lacs

Hyderabad

Work from Office

Detailed job description - skill set:
- Technically strong and hands-on
- Self-driven
- Good client communication skills
- Able to work independently and a good team player
- Flexible to work in PST hours (overlap for some hours)
- Past development experience for the Cisco client is preferred

Posted 2 months ago

Apply

8.0 - 12.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You will be responsible for managing one or more applications to achieve established goals and for handling personnel duties for your team, including hiring and training. Your role involves designing and developing real-time and batch data transformation processes using a variety of technologies such as Hadoop, Spark Streaming, Spark SQL, Python, and Hive. You will also design and develop programs to enhance functionality in the next-generation Big Data platform and ensure data redistribution is authorized.

As a Big Data Developer with 8-10 years of relevant experience, you must possess strong skills in Java/J2EE, Hadoop, Scala, Hive, Impala, Kafka, and Elastic to address data concerns and implement data remediation requirements. The role requires a good understanding of design patterns and the ability to provide solutions to complex design issues, as well as to identify and resolve code issues. You will be hands-on in managing application development using Spark (Scala, Python, or Java), SQL, and the Linux-based Hadoop ecosystem (HDFS, Impala, Hive, HBase, etc.). Your experience as a senior-level professional in an Applications Development role and your proven solution delivery skills will be essential in this position.

Additionally, you should have basic knowledge of finance industry practices and standards. Excellent analytical and process-based skills are required, including expertise in process flow diagrams, business modeling, and functional design. Being dynamic, flexible, and maintaining a high energy level is crucial, as you will be working in a demanding and rapidly changing environment. Your educational background should include a Bachelor's degree/University degree or equivalent experience.

Posted 2 months ago

Apply

8.0 - 13.0 years

0 Lacs

Kochi, Kerala

On-site

As a Senior Technical Analyst at Maxwell GeoSystems, based in Kochi, Kerala, India, you will play a crucial role in the development and implementation of company-wide SOA (service-oriented architecture) for instrumentation and construction monitoring SaaS (Software as a Service). Your primary focus will be on planning, runtime design, and integration of software services for data handling and transformation. Working under the guidance of the IT Head, you will collaborate with a diverse team of senior system developers, programmers, and management executives to ensure the success of projects throughout their life cycle. Leveraging the latest web technologies, you will strive to achieve optimal results and contribute to the company's mission of driving digitalization in the ground engineering industry.

Your responsibilities will include developing the logical and physical layout of the overall solution and its components, mediating between business and technology, transforming business operations concepts into IT infrastructure terms, and defining service interface contracts through data and function modeling techniques. You will also work closely with the Architect to create these contracts, investigate service orchestration possibilities, define technical process flows, and create and test software implementations.

To excel in this role, you should possess 8-13 years of experience handling large data volumes and a strong understanding of technologies such as Cassandra, Neo4j, HDFS, MySQL, ReactJS, Python, Golang, AWS, Azure, and MongoDB. Additionally, you should have knowledge of common web server exploits and their solutions, fundamental design principles for scalable applications, integration of multiple data sources and databases, and creation of database schemas that support business processes, along with familiarity with SQL/NoSQL databases, proficiency in code versioning tools such as Git, and the ability to prioritize and execute tasks effectively in a high-pressure environment. Strong written communication skills are essential.

If you are ready to join a market-defining company and contribute to the advancement of ground engineering through innovative technology, we encourage you to send your CV to recruitment@maxwellgeosystems.com. Become a part of Maxwell GeoSystems and help us make a real difference in performance and advancement through our revolutionary software, MissionOS, a powerful data management system for geotechnical and project-related data acquisition, monitoring, and analysis.

Posted 2 months ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

As a Big Data Developer, you will be responsible for leveraging your strong experience in big data technologies and associated tools such as Hadoop, Unix, HDFS, Hive, and Impala. Your proficiency in Spark/Scala and experience with data import/export using Sqoop or similar tools will be crucial in this role. Additionally, you are expected to have experience with tools like Airflow, Jenkins, or similar automation tools. Excellent knowledge of SQL Server and database structures is essential for writing and optimizing T-SQL queries and stored procedures, and experience working with Jira, Confluence, and GitLab will also be beneficial. Your organizational skills and ability to handle multiple activities with changing priorities simultaneously will be highly valued.

As part of the delivery team, your primary responsibilities will include ensuring effective design, development, validation, and support activities to meet client satisfaction in the technology domain. You will gather requirements, understand client needs, and translate them into system requirements. Additionally, you will play a key role in estimating work requirements and providing project estimates to Technology Leads and Project Managers. You will be a key contributor to building efficient programs/systems and collaborating with other Big Data developers to ensure consistency in data solutions. Your ability to partner with the business community, perform technology research, and evaluate new technologies will be crucial in enhancing the overall capability of the analytics technology stack.

Key Responsibilities:
- Code, test, and document new or modified data systems to create robust and scalable applications for data analytics.
- Work with other Big Data developers to ensure consistency in data solutions.
- Partner with the business community to understand requirements, determine training needs, and deliver user training sessions.
- Perform technology and product research to define requirements, resolve issues, and enhance the analytics technology stack.
- Evaluate and provide feedback on future technologies and new releases/upgrades.

Job-specific knowledge:
- Support Big Data and batch/real-time analytical solutions using transformational technologies.
- Work on multiple projects as a technical team member or drive user requirement analysis, design, development, testing, and automation tools.

Professional attributes:
- Good communication skills.
- Team player willing to collaborate throughout all phases of development, testing, and deployment.
- Ability to solve problems and meet deadlines with minimal supervision.

If you believe you have the skills and experience to contribute effectively to our clients' digital transformation journey, we welcome you to join our team.

Posted 2 months ago

Apply

2.0 - 6.0 years

0 Lacs

Karnataka

On-site

As a PySpark Data Engineer, you must have a minimum of 2 years of experience in PySpark. Strong programming skills in Python, PySpark, and Scala are preferred. It is essential to have experience in designing and implementing CI/CD, build management, and development strategies. Additionally, familiarity with SQL and SQL analytical functions is required, along with participation in key business, architectural, and technical decisions. There is an opportunity for training in AWS cloud technology.

In the role of a Python Developer, a minimum of 2 years of experience in Python/PySpark is necessary. Strong programming skills in Python, PySpark, and Scala are preferred. Experience in designing and implementing CI/CD, build management, and development strategies is essential. Familiarity with SQL and SQL analytical functions and participation in key business, architectural, and technical decisions are also required. There is potential for training in AWS cloud technology.

As a Senior Software Engineer at Capgemini, you should have over 3 years of experience in Scala with a strong project track record. Hands-on experience in Scala/Spark development and SQL writing skills on RDBMS (DB2) databases are crucial. Experience working with different file formats such as JSON, Parquet, Avro, ORC, and XML is preferred, and previous involvement in an HDFS platform development project is necessary. Proficiency in data analysis, data profiling, and data lineage, along with strong oral and written communication skills, is required. Experience in Agile projects is a plus.

For the position of Data Modeler, expertise in data structures, algorithms, calculus, linear algebra, machine learning, and modeling is essential. Knowledge of data warehousing concepts such as star schema, snowflake schema, or data vault for data marts or data warehousing is required, along with proficiency in data modeling software such as Erwin, ER/Studio, or MySQL Workbench to produce logical and physical data models. Hands-on knowledge and experience with tools like PL/SQL, PySpark, Hive, Impala, and other scripting tools are preferred. Experience with the software development lifecycle using the Agile methodology is essential, and strong communication and stakeholder management skills are crucial for this role.

In this role, you will design, develop, and optimize PL/SQL procedures, functions, triggers, and packages. You will also write efficient SQL queries, joins, and subqueries for data retrieval and manipulation, and develop and maintain database objects such as tables, views, indexes, and sequences. Optimizing query performance and troubleshooting database issues to improve efficiency are key responsibilities. Collaboration with application developers, business analysts, and system architects to understand database requirements is essential, as is ensuring data integrity, consistency, and security within Oracle databases. Developing ETL processes and scripts for data migration and integration is part of the responsibilities, as is documenting database structures, stored procedures, and coding best practices. Staying up to date with Oracle database technologies, best practices, and industry trends is essential for success in this role.
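As a small, hedged illustration of the "SQL analytical functions" the roles above ask for, the sketch below expresses a ranking and a running total both through a PySpark window specification and through Spark SQL over the same made-up data.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

# Window/analytical functions demo on a tiny in-memory dataset.
spark = SparkSession.builder.appName("window-demo").getOrCreate()

orders = spark.createDataFrame(
    [("c1", "2024-01-01", 120.0), ("c1", "2024-01-05", 80.0), ("c2", "2024-01-02", 200.0)],
    ["customer_id", "order_date", "amount"],
)

w = Window.partitionBy("customer_id").orderBy("order_date")
enriched = (
    orders.withColumn("order_rank", F.row_number().over(w))
          .withColumn("running_total", F.sum("amount").over(w))
)
enriched.show()

# The same analytical function written as Spark SQL.
orders.createOrReplaceTempView("orders")
spark.sql("""
    SELECT customer_id,
           order_date,
           SUM(amount) OVER (PARTITION BY customer_id ORDER BY order_date) AS running_total
    FROM orders
""").show()
```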

Posted 2 months ago

Apply

2.0 - 5.0 years

5 - 11 Lacs

Chennai

Hybrid

Job Posting: Support Analyst - Big Data & Application Support (Chennai)
Location: Chennai, India (Chennai-based candidates preferred)
Experience: 2 to 5 years
Employment Type: Full-Time | Hybrid Model
Department: Digital Technology Services - IT Digital
Function: DaaS (Data as a Service), AI & RPA Support
Note: only candidates meeting the above criteria will be contacted for the next steps.

Role Overview: We are looking for a Support Analyst to join our dynamic DTS IT Digital team in Chennai. In this role, you will support and maintain data platforms, AI/RPA systems, and big data ecosystems. You'll play a key part in production support, rapid incident recovery, and platform improvements, working with global stakeholders.

Key Responsibilities:
- Serve as L2/L3 support and point of contact for global support teams
- Perform detailed root cause analysis (RCA) and prevent incident recurrence
- Maintain, monitor, and support big data platforms and ETL tools
- Coordinate with multiple teams for incident and change management
- Contribute to disaster recovery planning, resiliency events, and capacity management
- Document support processes and fixes, and participate in monthly RCA reviews

Technical Skills Required:
- Proficient with the Unix/Linux command line and basic Windows server operations
- Hands-on with big data and ETL tools such as Hadoop, MapR, HDFS, Spark, Apache Drill, YARN, Oozie, Ab Initio, Alteryx, and Spotfire
- Strong SQL skills and understanding of data processing
- Familiarity with problem/change/incident management processes
- Good scripting knowledge (Shell/Python - optional but preferred)

What We're Looking For:
- Bachelor's degree in Computer Science, IT, or a related field
- 2 to 5 years of experience in application support or big data platform support
- Ability to communicate technical issues clearly to non-technical stakeholders
- Strong problem-solving skills and a collaborative mindset
- Experience in banking, financial services, or enterprise-grade systems is a plus

Why Join Us?
- Be part of a global innovation and technology team
- Opportunity to work on AI, RPA, and large-scale data platforms
- Hybrid work culture with strong global collaboration
- Career development in a stable and inclusive banking giant

Ready to Apply? If you're a passionate technologist with strong support experience and big data platform knowledge, we want to hear from you!
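As a hedged example of the routine platform-monitoring work such a support role involves, the sketch below checks HDFS directory usage from a Python script so growth can be tracked and alerted on. It assumes the standard `hdfs` CLI is on the PATH of the support host; the watched path and threshold are placeholders.

```python
import subprocess

WATCHED_PATH = "/data/ingest"          # hypothetical landing area
THRESHOLD_BYTES = 5 * 1024 ** 4        # alert above ~5 TiB

# `hdfs dfs -du -s <path>` prints the total size of the path in bytes first.
result = subprocess.run(
    ["hdfs", "dfs", "-du", "-s", WATCHED_PATH],
    capture_output=True, text=True, check=True,
)

size_bytes = int(result.stdout.split()[0])
print(f"{WATCHED_PATH}: {size_bytes / 1024**3:.1f} GiB used")

if size_bytes > THRESHOLD_BYTES:
    print("WARNING: usage above threshold; raise an incident or clean up old partitions")
```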

Posted 2 months ago

Apply

7.0 - 10.0 years

9 - 12 Lacs

Bengaluru

Work from Office

Requirement: immediate joiners or a maximum 15-day notice period.

Job Description: Big Data Developer (Hadoop/Spark/Kafka)
- This role is ideal for an experienced Big Data developer who is confident in taking complete ownership of the software development life cycle, from requirement gathering to final deployment.
- The candidate will be responsible for engaging with stakeholders to understand the use cases, translating them into functional and technical specifications (FSD & TSD), and implementing scalable, efficient big data solutions.
- A key part of this role involves working across multiple projects, coordinating with QA/support engineers for test case preparation, and ensuring deliverables meet high-quality standards.
- Strong analytical skills are necessary for writing and validating SQL queries, along with developing optimized code for data processing workflows.
- The ideal candidate should also be capable of writing unit tests and maintaining documentation to ensure code quality and maintainability.
- The role requires hands-on experience with the Hadoop ecosystem, particularly Spark (including Spark Streaming), Hive, Kafka, and shell scripting.
- Experience with workflow schedulers like Airflow is a plus, and working knowledge of cloud platforms (AWS, Azure, GCP) is beneficial.
- Familiarity with Agile methodologies will help in collaborating effectively in a fast-paced team environment.
- Job scheduling and automation via shell scripts, and the ability to optimize performance and resource usage in a distributed system, are critical.
- Prior experience in performance tuning and writing production-grade code will be valued.
- The candidate must demonstrate strong communication skills to effectively coordinate with business users, developers, and testers, and to manage dependencies across teams.

Key Skills Required:
- Must have: Hadoop, Spark (core & streaming), Hive, Kafka, shell scripting, SQL, TSD/FSD documentation.
- Good to have: Airflow, Scala, cloud (AWS/Azure/GCP), Agile methodology.

This role is both technically challenging and rewarding, offering the opportunity to work on large-scale, real-time data processing systems in a dynamic, agile environment.

Posted 2 months ago

Apply

5.0 - 10.0 years

0 - 0 Lacs

Hyderabad

Remote

Data Engineering / Big Data - part-time, work from home (anywhere in the world)

Warm greetings from Excel Online Classes. We are a team of industry professionals running an institute that provides comprehensive online IT training, technical support, and development services. We are currently seeking Data Engineering / Big Data experts who are passionate about technology and can collaborate with us in their free time. If you're enthusiastic, committed, and ready to share your expertise, we would love to work with you!

We're hiring for the following services:
- Online training
- Online development
- Online technical support
- Conducting online interviews
- Corporate training
- Proof of concept (POC) projects
- Research & development (R&D)

We are looking for immediate joiners who can contribute in any of the above areas. If you're interested, please fill out the form using the link below:
https://docs.google.com/forms/d/e/1FAIpQLSdvut0tujgMbBIQSc6M7qldtcjv8oL1ob5lBc2AlJNRAgD3Cw/viewform

We also welcome referrals! If you know someone (friends, colleagues, or connections) who might be interested in:
- Teaching, developing, or providing tech support online
- Sharing domain knowledge (e.g., banking, insurance, etc.)
- Teaching foreign languages (e.g., Spanish, German, etc.)
- Learning or brushing up on technologies to clear interviews quickly
- Upskilling in new tools or frameworks for career growth
please feel free to forward this opportunity to them.

For any queries, contact us at excel.onlineclasses@gmail.com.

Thank you & best regards,
Team Excel Online Classes
excel.onlineclasses@gmail.com

Posted 2 months ago

Apply

5.0 - 10.0 years

20 - 35 Lacs

Pune

Work from Office

Description: Hiring a Data Engineer with AWS or GCP cloud experience.

Role Summary: The Data Engineer will be responsible for designing, implementing, and maintaining the data infrastructure and pipelines necessary for AI/ML model training and deployment. They will work closely with data scientists and engineers to ensure data is clean, accessible, and efficiently processed.

Required Experience:
- 6-8 years of experience in data engineering, ideally in financial services.
- Strong proficiency in SQL, Python, and big data technologies (e.g., Hadoop, Spark).
- Experience with cloud platforms (e.g., AWS, Azure, GCP) and data warehousing solutions.
- Familiarity with ETL processes and tools.
- Knowledge of data governance, security, and compliance best practices.

Key Responsibilities:
- Build and maintain scalable data pipelines for data collection, processing, and analysis.
- Ensure data quality and consistency for training and testing AI models.
- Collaborate with data scientists and AI engineers to provide the required data for model development.
- Optimize data storage and retrieval to support AI-driven applications.
- Implement data governance practices to ensure compliance and security.

What We Offer:
- Exciting projects: We focus on industries like high-tech, communication, media, healthcare, retail, and telecom. Our customer list is full of fantastic global brands and leaders who love what we build for them.
- Collaborative environment: You can expand your skills by collaborating with a diverse team of highly talented people in an open, laid-back environment, or even abroad in one of our global centers or client facilities.
- Work-life balance: GlobalLogic prioritizes work-life balance, which is why we offer flexible work schedules, opportunities to work from home, and paid time off and holidays.
- Professional development: Our dedicated Learning & Development team regularly organizes communication skills training (GL Vantage, Toastmasters), stress management programs, professional certifications, and technical and soft-skill trainings.
- Excellent benefits: We provide our employees with competitive salaries, family medical insurance, Group Term Life Insurance, Group Personal Accident Insurance, NPS (National Pension Scheme), periodic health awareness programs, extended maternity leave, annual performance bonuses, and referral bonuses.
- Fun perks: We want you to love where you work, which is why we host sports events and cultural activities, offer food at subsidized rates, and throw corporate parties. Our vibrant offices also include dedicated GL Zones, rooftop decks, and the GL Club, where you can have coffee or tea with your colleagues over a game, and we offer discounts for popular stores and restaurants!

Posted 2 months ago

Apply

5.0 - 10.0 years

15 - 27 Lacs

Bengaluru

Work from Office

Job Summary: We are seeking a highly motivated Senior Data Engineer with expertise in designing, building, and securing data systems. The ideal candidate will have a strong background in data engineering, security compliance, and distributed systems, with a focus on ensuring adherence to industry standards and regulatory requirements.

Location: Bangalore
Experience: 4 to 13 years
Must have: Informatica BDM, Oozie scheduling, Hive, HDFS

Key Responsibilities:
- Design, implement, and maintain secure data systems, including wrapper solutions for components with minimal security controls, ensuring compliance with bank standards.
- Identify security design gaps in existing and proposed architectures and recommend enhancements to strengthen system resilience.
- Develop and enforce security controls for data transfers, including CRON, ETLs, and JDBC-ODBC scripts.
- Ensure compliance with data sensitivity standards, such as avoiding storage of card numbers or PII in logs, and maintaining data integrity.
- Collaborate on distributed systems, focusing on resiliency, monitoring, and troubleshooting in production environments.
- Work with Agile/DevOps practices, CI/CD pipelines (GitHub, Jenkins), and scripting tools to optimize data workflows.
- Troubleshoot and resolve issues in large-scale data infrastructures, including SQL/NoSQL databases, HDFS, Hive, and HQL.

Requirements:
- 5+ years of total experience, with 4+ years in Informatica Big Data Management.
- Extensive knowledge of Oozie scheduling, HQL, Hive, HDFS, and data partitioning.
- Proficiency in SQL and NoSQL databases, along with Linux OS configuration and shell scripting.
- Strong understanding of networking concepts (DNS, Proxy, ACL, Policy) and data transfer security.
- In-depth knowledge of compliance and regulatory requirements (encryption, anonymization, policy controls).
- Familiarity with Agile/DevOps, CI/CD, and distributed systems monitoring.
- Ability to address data sensitivity concerns in logging, events, and in-memory storage.

About Us: For a customer in the banking sector with financial services requirements, we worked on Informatica Big Data Management, Oozie, Hive, and security compliance frameworks. Contact [dlt] and [slt] for more details.

Posted 2 months ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Noida, Hyderabad, Greater Noida

Work from Office

Streaming data - technical skills requirements:

Experience: 5+ years
- Solid hands-on and solution-architecting experience in Big Data technologies (AWS preferred)
- Hands-on experience with AWS DynamoDB, EKS, Kafka, Kinesis, Glue, and EMR
- Hands-on experience with a programming language such as Scala with Spark
- Good command of and working experience with Hadoop MapReduce, HDFS, Hive, HBase, and/or NoSQL databases
- Hands-on working experience with a data engineering/analytics platform (Hortonworks, Cloudera, MapR, AWS), AWS preferred
- Hands-on experience with data ingestion tools: Apache NiFi, Apache Airflow, Sqoop, and Oozie
- Hands-on working experience with data processing at scale using event-driven systems and message queues (Kafka, Flink, Spark Streaming)
- Hands-on working experience with AWS services such as EMR, Kinesis, S3, CloudFormation, Glue, API Gateway, and Lake Formation
- Hands-on working experience with AWS Athena
- Experience building data pipelines for structured/unstructured and real-time/batch data, and for synchronous/asynchronous events, using MQ, Kafka, and stream processing

Mandatory skills: Spark, Scala, AWS, Hadoop
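As a minimal, hedged sketch of the Kinesis experience listed above, the snippet below publishes a single JSON event to a stream with boto3. The stream name, region, and payload are illustrative only; the partition key keeps events from one device ordered within a shard.

```python
import json
import boto3

# Publish one event to an Amazon Kinesis data stream.
kinesis = boto3.client("kinesis", region_name="ap-south-1")   # hypothetical region

event = {"device_id": "meter-42", "reading_kwh": 3.7, "ts": "2024-01-01T00:00:00Z"}

kinesis.put_record(
    StreamName="telemetry-events",               # hypothetical stream name
    Data=json.dumps(event).encode("utf-8"),
    PartitionKey=event["device_id"],             # same key -> same shard -> per-device ordering
)
```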

Posted 2 months ago

Apply

6.0 - 11.0 years

12 - 22 Lacs

Mangaluru

Work from Office

Key Responsibilities

Training & Mentoring:
- Design, structure, and deliver hands-on technical training programs tailored to real-world client projects.
- Develop and maintain training materials, assignments, and project-based learning paths for:
  - .NET technologies (C#, ASP.NET MVC, Razor Pages, Blazor, REST APIs)
  - ReactJS (SPA fundamentals and advanced features)
  - Python with machine learning foundations
  - Data infrastructure tools such as HDFS, Kafka, RabbitMQ, and MQTT
  - Embedded C and DLMS protocols
- Mentor and guide new hires and junior developers through technical problem-solving, code reviews, and best practices.
- Conduct code walkthroughs, mock evaluations, and project reviews.
- Evaluate trainee performance and recommend improvement strategies.

Product Development Engagement:
- Contribute to ongoing product development projects to maintain domain relevance and technical sharpness.
- Collaborate with development teams to understand evolving tech stacks and integrate them into training programs.
- Assist in architectural discussions, system design walkthroughs, and POCs that benefit both internal teams and external clients.

Program Planning & Execution:
- Align training programs with specific client domains and project requirements.
- Own the delivery of end-to-end bootcamps and skill upskilling programs.
- Integrate real-world case studies, code assignments, and project-based tasks into training modules.
- Act as a knowledge bridge between development and training teams.

Technical Skill Set:
- .NET stack: C#, ASP.NET MVC, Web API, Razor Pages, Blazor
- Frontend: ReactJS - component model, state management, Hooks
- Database: MSSQL and PostgreSQL - schema design, stored procedures, transactions
- Python & ML: NumPy, pandas, scikit-learn, supervised/unsupervised learning
- Big data & infra: HDFS (CLI, scripting, file operations), Linux (bash, awk/sed), Scala basics
- Messaging systems: Kafka, RabbitMQ, MQTT - publishing, subscribing, integration with the backend
- Embedded systems: Embedded C, DLMS protocol understanding and implementation

Good to Have:
- Experience with containerization (Docker) and Git CI/CD flows
- Exposure to enterprise-level architecture and scalable solutions
- Experience mentoring graduates from premier institutes
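For the "Python with machine learning foundations" training track above, here is the kind of compact, end-to-end exercise a trainer might walk new hires through: a scikit-learn pipeline on a bundled dataset, so the focus stays on the workflow (split, fit, score) rather than data plumbing. The dataset and model choice are arbitrary teaching examples.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Load a small bundled dataset and hold out a test split.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

# Pipeline = preprocessing + model, fitted and evaluated as one object.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

print(f"hold-out accuracy: {model.score(X_test, y_test):.2f}")
```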

Posted 2 months ago

Apply

5.0 - 10.0 years

20 - 30 Lacs

Bengaluru

Work from Office

We're Hiring | Big Data Engineer (6-8 yrs) | Bangalore | Hybrid (3 days/week in office)

Location: Bangalore (hybrid, 3 days/week in office)
Experience: 6 to 8 years
Joiners: immediate to a maximum 2-week notice period only

Must-Have Skills:
- Strong hands-on experience in the Hadoop ecosystem, HDFS, Hive, and Spark with Scala
- Familiarity with Oozie and ScalaTest
- Expertise in performance tuning and debugging using the Spark UI and YARN logs
- Good understanding of CI/CD processes, unit testing, GitHub, Maven, and Nexus

To Apply: send your resume to vijay.s@xebia.com with the following details:
- Full name
- Total experience
- Current CTC
- Expected CTC
- Current location
- Preferred location
- Notice period / last working day (if serving)
- Primary skill set
- LinkedIn URL

Note: please apply only if you are an immediate joiner or have a maximum of 2 weeks' notice, and you haven't applied to Xebia recently or are currently not in process.

Know someone who fits this role? Share this with them!

#Hiring #BigDataJobs #SparkScala #HadoopJobs #XebiaHiring #ImmediateJoiners #BangaloreJobs #HybridRoles #DataEngineeringJobs #JoinUs #TechJobs

Posted 2 months ago

Apply