5.0 - 10.0 years
4 - 8 Lacs
Noida
Work from Office
We are looking for a skilled Data Software Engineer with 5 to 12 years of experience in Big Data and related technologies. The ideal candidate will have expertise in distributed computing principles, Apache Spark, and hands-on programming with Python.

Roles and Responsibility
- Design and implement Big Data solutions using Apache Spark and other relevant technologies.
- Develop and maintain large-scale data processing systems, including stream-processing systems.
- Collaborate with cross-functional teams to integrate data from multiple sources, such as RDBMS, ERP, and files.
- Optimize the performance of Spark jobs and troubleshoot issues.
- Lead a team efficiently and contribute to the development of Big Data solutions.
- Work with native cloud data services, such as AWS or Azure Databricks.

Job Requirements
- Expert-level understanding of distributed computing principles and Apache Spark.
- Hands-on programming experience with Python and proficiency with Hadoop v2, MapReduce, HDFS, and Sqoop.
- Experience building stream-processing systems using technologies like Apache Storm or Spark Streaming.
- Good understanding of Big Data querying tools, such as Hive and Impala.
- Knowledge of ETL techniques and frameworks, along with experience with NoSQL databases like HBase, Cassandra, and MongoDB.
- Ability to work in an Agile environment and lead a team efficiently.
- Strong understanding of SQL queries, joins, stored procedures, and relational schemas.
- Experience integrating data from multiple sources, including RDBMS (SQL Server, Oracle), ERP, and files.
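The MapReduce proficiency asked for above boils down to the map/shuffle/reduce model. As a hedged illustration only (plain Python with made-up sample data, not Hadoop itself), a word count in that model can be sketched as:

```python
from collections import defaultdict

def map_phase(lines):
    # Map: emit (word, 1) pairs, as a Hadoop mapper would
    for line in lines:
        for word in line.lower().split():
            yield word, 1

def shuffle(pairs):
    # Shuffle: group all values by key across mapper outputs
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Reduce: sum the counts emitted for each word
    return {word: sum(counts) for word, counts in grouped.items()}

lines = ["big data big pipelines", "data pipelines at scale"]
counts = reduce_phase(shuffle(map_phase(lines)))
```

In Hadoop or Spark the same three phases run distributed across a cluster; the sample input here is invented for demonstration.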
Posted 1 month ago
4.0 - 9.0 years
9 - 13 Lacs
Kolkata, Mumbai, New Delhi
Work from Office
Krazy Mantra Group of Companies is looking for a Big Data Engineer to join our dynamic team and embark on a rewarding career journey.
- Designing and implementing scalable data storage solutions, such as Hadoop and NoSQL databases.
- Developing and maintaining big data processing pipelines using tools such as Apache Spark and Apache Storm.
- Writing and testing data processing scripts using languages such as Python and Scala.
- Integrating big data solutions with other IT systems and data sources.
- Collaborating with data scientists and business stakeholders to understand data requirements and identify opportunities for data-driven decision making.
- Ensuring the security and privacy of sensitive data.
- Monitoring performance and optimizing big data systems to ensure they meet performance and availability requirements.
- Staying up-to-date with emerging technologies and trends in big data and data engineering.
- Mentoring junior team members and providing technical guidance as needed.
- Documenting and communicating technical designs, solutions, and best practices.
- Strong problem-solving and debugging skills.
- Excellent written and verbal communication skills.
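The data processing pipelines mentioned above follow an extract-transform-load shape. As a minimal sketch under assumed, made-up input (generator-based pure Python, standing in for a Spark or Storm pipeline), the pattern looks like:

```python
import json

def extract(raw_records):
    # Extract: parse raw JSON strings, skipping malformed input
    for raw in raw_records:
        try:
            yield json.loads(raw)
        except json.JSONDecodeError:
            continue

def transform(records):
    # Transform: normalise field values and drop incomplete rows
    for rec in records:
        if "user" in rec and "amount" in rec:
            yield {"user": rec["user"].strip().lower(),
                   "amount": float(rec["amount"])}

def load(records):
    # Load: here we just materialise; a real sink would write to a store
    return list(records)

raw = ['{"user": " Alice ", "amount": "10.5"}', 'not json', '{"user": "Bob"}']
loaded = load(transform(extract(raw)))
```

Chaining generators keeps each stage streaming and testable in isolation, which is the same property distributed pipeline frameworks provide at cluster scale.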
Posted 1 month ago
2.0 - 6.0 years
5 - 9 Lacs
Pune
Work from Office
Join us as an MI Reporting Engineer at Barclays, where you'll spearhead the evolution of our digital landscape, driving innovation and excellence. You'll harness cutting-edge technology to revolutionise our digital offerings, ensuring unparalleled customer experiences. As part of a team of developers, you will deliver the technology stack, using strong analytical and problem-solving skills to understand the business requirements and deliver quality solutions.

To be successful as an MI Reporting Engineer you should have experience with:
- Hands-on experience developing complex/medium/easy reports in Tableau, QlikView and SAP BO.
- Extracting, transforming and loading data from multiple sources such as Teradata and Hive into BI tools.
- Experience in Snowflake / AWS QuickSight is preferable.
- Creating performance-efficient data models and dashboards.
- Solid working knowledge of writing SQL queries in Teradata and Hive/Impala.
- Experience writing PySpark queries and exposure to AWS Athena.
- Attention to detail with strong analytical and problem-solving skills.
- Exceptional communication and interpersonal skills.
- Comfortable working in a corporate environment; someone who has business acumen and an innovative mindset.

Some other highly valued skills include:
- High-level understanding of ETL processes.
- Banking domain experience.
- A quantitative mindset, with a desire to work in a data-intensive environment.
- Familiarity with Agile delivery methodologies and project management techniques.

You may be assessed on key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking, and digital and technology, as well as job-specific technical skills. This role is based out of Pune.

Purpose of the role: To design and develop compelling visualizations that effectively communicate data insights to stakeholders across the bank, influencing decision-making and improving business outcomes.

Accountabilities:
- Performing exploratory data analysis and data cleansing to prepare data for visualization.
- Translation of complex data into clear, concise, and visually appealing charts, graphs, maps, and other data storytelling formats.
- Utilisation of best practices in data visualization principles and design aesthetics to ensure clarity, accuracy, and accessibility.
- Documentation of visualization methodologies and findings in clear and concise reports.
- Presentation of data insights and visualizations to stakeholders at all levels, including executives, business users, and data analysts.

Analyst Expectations:
- Perform prescribed activities in a timely manner and to a high standard, consistently driving continuous improvement.
- Requires in-depth technical knowledge and experience in the assigned area of expertise, with a thorough understanding of the underlying principles and concepts within that area.
- Lead and supervise a team, guiding and supporting professional development, allocating work requirements and coordinating team resources.
- If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L - Listen and be authentic, E - Energise and inspire, A - Align across the enterprise, D - Develop others. For an individual contributor, they develop technical expertise in the work area, acting as an advisor where appropriate.
- Will have an impact on the work of related teams within the area; partner with other functions and business areas.
- Take responsibility for the end results of a team's operational processing and activities; escalate breaches of policies/procedures appropriately.
- Take responsibility for embedding new policies/procedures adopted due to risk mitigation.
- Advise and influence decision making within own area of expertise; take ownership for managing risk and strengthening controls in relation to the work you own or contribute to.
- Deliver your work and areas of responsibility in line with relevant rules, regulations and codes of conduct.
- Maintain and continually build an understanding of how your own sub-function integrates with the function, alongside knowledge of the organisation's products, services and processes within the function.
- Demonstrate an understanding of how areas coordinate and contribute to the achievement of the objectives of the organisation's sub-function.
- Make evaluative judgements based on the analysis of factual information, paying attention to detail.
- Resolve problems by identifying and selecting solutions through the application of acquired technical experience, guided by precedents.
- Guide and persuade team members and communicate complex/sensitive information.
- Act as a contact point for stakeholders outside of the immediate function, while building a network of contacts outside the team and external to the organisation.

All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship, our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset, to Empower, Challenge and Drive, the operating manual for how we behave.
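The role above leans on writing SQL joins and aggregations against warehouse tables. As a hedged sketch with invented tables and data (using SQLite's in-memory engine purely because it is self-contained; the role's actual engines are Teradata and Hive/Impala), a typical join-plus-aggregate query looks like:

```python
import sqlite3

# Hypothetical schema: accounts and their balance postings
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE accounts (id INTEGER PRIMARY KEY, region TEXT);
    CREATE TABLE balances (account_id INTEGER, amount REAL);
    INSERT INTO accounts VALUES (1, 'EMEA'), (2, 'APAC');
    INSERT INTO balances VALUES (1, 100.0), (1, 50.0), (2, 75.0);
""")

# Join the two tables and aggregate per region
rows = conn.execute("""
    SELECT a.region, SUM(b.amount) AS total
    FROM accounts a
    JOIN balances b ON b.account_id = a.id
    GROUP BY a.region
    ORDER BY a.region
""").fetchall()
```

The same SQL shape carries over to HiveQL or Teradata with only dialect-level changes.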
Posted 1 month ago
3.0 - 6.0 years
5 - 9 Lacs
Chennai
Work from Office
We are looking for a skilled Hadoop Developer with 3 to 6 years of experience to join our team at IDESLABS PRIVATE LIMITED. The ideal candidate will have expertise in developing and implementing big data solutions using Hadoop technologies.

Roles and Responsibility
- Design, develop, and deploy scalable big data applications using Hadoop.
- Collaborate with cross-functional teams to identify business requirements and develop solutions.
- Develop and maintain large-scale data processing systems using Hadoop MapReduce.
- Troubleshoot and optimize performance issues in existing Hadoop applications.
- Participate in code reviews to ensure high-quality code standards.
- Stay updated with the latest trends and technologies in big data development.

Job Requirements
- Strong understanding of the Hadoop ecosystem, including HDFS, YARN, and Oozie.
- Experience with programming languages such as Java or Python.
- Knowledge of database management systems such as MySQL or NoSQL databases.
- Familiarity with agile development methodologies and version control systems like Git.
- Excellent problem-solving skills and attention to detail.
- Ability to work collaboratively in a team environment and communicate effectively with stakeholders.
Posted 1 month ago
7.0 - 12.0 years
4 - 8 Lacs
Bengaluru
Work from Office
About the Role
We are seeking a highly skilled Data Engineer with deep expertise in PySpark and the Cloudera Data Platform (CDP) to join our data engineering team. As a Data Engineer, you will be responsible for designing, developing, and maintaining scalable data pipelines that ensure high data quality and availability across the organization. This role requires a strong background in big data ecosystems, cloud-native tools, and advanced data processing techniques. The ideal candidate has hands-on experience with data ingestion, transformation, and optimization on the Cloudera Data Platform, along with a proven track record of implementing data engineering best practices. You will work closely with other data engineers to build solutions that drive impactful business insights.

Responsibilities
- Data Pipeline Development: Design, develop, and maintain highly scalable and optimized ETL pipelines using PySpark on the Cloudera Data Platform, ensuring data integrity and accuracy.
- Data Ingestion: Implement and manage data ingestion processes from a variety of sources (e.g., relational databases, APIs, file systems) to the data lake or data warehouse on CDP.
- Data Transformation and Processing: Use PySpark to process, cleanse, and transform large datasets into meaningful formats that support analytical needs and business requirements.
- Performance Optimization: Conduct performance tuning of PySpark code and Cloudera components, optimizing resource utilization and reducing the runtime of ETL processes.
- Data Quality and Validation: Implement data quality checks, monitoring, and validation routines to ensure data accuracy and reliability throughout the pipeline.
- Automation and Orchestration: Automate data workflows using tools like Apache Oozie, Airflow, or similar orchestration tools within the Cloudera ecosystem.

Education and Experience
- Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
- 3+ years of experience as a Data Engineer, with a strong focus on PySpark and the Cloudera Data Platform.

Technical Skills
- PySpark: Advanced proficiency in PySpark, including working with RDDs, DataFrames, and optimization techniques.
- Cloudera Data Platform: Strong experience with Cloudera Data Platform (CDP) components, including Cloudera Manager, Hive, Impala, HDFS, and HBase.
- Data Warehousing: Knowledge of data warehousing concepts, ETL best practices, and experience with SQL-based tools (e.g., Hive, Impala).
- Big Data Technologies: Familiarity with Hadoop, Kafka, and other distributed computing tools.
- Orchestration and Scheduling: Experience with Apache Oozie, Airflow, or similar orchestration frameworks.
- Scripting and Automation: Strong scripting skills in Linux.
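The data-quality-and-validation responsibility above usually means running named checks over each record and routing failures aside for review. A minimal, framework-free sketch (pure Python with invented rules and sample rows; a real pipeline would express the same checks in PySpark or a tool like Great Expectations):

```python
def validate(rows, rules):
    # Apply each named data-quality rule; split rows into clean/rejected,
    # keeping the list of failed rule names alongside each row
    clean, rejected = [], []
    for row in rows:
        failures = [name for name, check in rules.items() if not check(row)]
        (rejected if failures else clean).append((row, failures))
    return clean, rejected

# Hypothetical rules for an assumed payments feed
rules = {
    "amount_positive": lambda r: r.get("amount", 0) > 0,
    "id_present": lambda r: r.get("id") is not None,
}
rows = [{"id": 1, "amount": 9.5}, {"id": None, "amount": -2.0}]
clean, rejected = validate(rows, rules)
```

Keeping the failed rule names with each rejected row makes the monitoring/alerting half of the requirement straightforward to build on top.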
Posted 1 month ago
9.0 - 14.0 years
6 - 10 Lacs
Bengaluru
Work from Office
Educational: Bachelor of Engineering
Service Line: Data & Analytics Unit

Responsibilities
Build robust, performant, highly scalable and flexible data pipelines with a focus on time to market with quality.
- Act as an active team member to ensure high code quality (unit testing, regression tests) delivered on time and within budget.
- Document the delivered code/solution.
- Participate in the implementation of releases, following the change & release management processes.
- Provide support to the operations team in case of major incidents for which engineering knowledge is required.
- Participate in effort estimations.
- Provide solutions (bug fixes) for problem management.

Additional Responsibilities
- Good knowledge of software configuration management systems.
- Strong business acumen, strategy and cross-industry thought leadership.
- Awareness of the latest technologies and industry trends.
- Logical thinking and problem-solving skills, along with an ability to collaborate.
- Knowledge of two or three industry domains.
- Understanding of the financial processes for various types of projects and the various pricing models available.
- Client interfacing skills.
- Knowledge of SDLC and agile methodologies.
- Project and team management.

Technical and Professional Requirements
- You have experience with most of these technologies: HDFS, Ozone, Hive, Impala, Spark, Atlas, Ranger.
- Knowledge of GraphQL, Venafi (certificate management) and Collibra (data governance) is an asset.
- Experience in a telecommunications environment and real-time technologies with a focus on high availability and high-volume processing is an advantage: Kafka, Flink, Spark Streaming.
- You master programming languages such as Java and Python/PySpark as well as SQL, and are proficient in UNIX scripting.
- Data formats like JSON, Parquet, XML and REST APIs have no secrets for you.
- You have experience with CI/CD (GitLab/GitHub, Jenkins, Ansible, Nexus) for automated build & test.
- Knowledge of the Azure DevOps toolset is an asset. As the project is preparing a "move to Azure", the above will change slightly in the course of 2025; however, most of our current technological landscape remains a solid foundation for a role as EDH Data Engineer.

Preferred Skills: Technology-Big Data - Data Processing-Spark
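The real-time stack named above (Kafka, Flink, Spark Streaming) is centred on windowed aggregation of event streams. As an illustrative sketch only (pure Python over an invented event list, standing in for what Flink or Spark Streaming would do over a live Kafka topic), a tumbling-window count per key can be written as:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds):
    # Assign each (timestamp, key) event to a fixed-size, non-overlapping
    # window and count occurrences per key within each window
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        window_start = ts - (ts % window_seconds)
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in windows.items()}

# Hypothetical events: (epoch-second timestamp, event type)
events = [(0, "login"), (3, "login"), (7, "click"), (12, "login")]
result = tumbling_window_counts(events, window_seconds=5)
```

Stream engines add what this sketch omits: out-of-order handling via watermarks, state checkpointing, and incremental emission of window results.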
Posted 1 month ago
9.0 - 14.0 years
12 - 16 Lacs
Pune
Work from Office
Skills required: Strong SQL (minimum 6-7 years of experience), data warehouse, ETL.

The Data and Client Platform Tech project provides all data-related services to internal and external clients of the SST business. The Ingestion team is responsible for getting and ingesting data into the Datalake. This is a global team with development teams in Shanghai, Pune, Dublin and Tampa. The Ingestion team uses Big Data technologies like Impala, Hive, Spark and HDFS, and cloud technologies such as Snowflake for cloud data storage.

Responsibilities:
- Gain an understanding of the complex domain model and define the logical and physical data model for the Securities Services business.
- Constantly improve the ingestion, storage and performance processes by analyzing them and automating them wherever possible.
- Define standards and best practices for the team in the areas of code standards, unit testing, continuous integration, and release management.
- Improve the performance of queries from lake tables and views.
- Work with a wide variety of stakeholders (source systems, business sponsors, product owners, scrum masters, enterprise architects) and possess excellent communication skills to articulate challenging technical details to various classes of people.
- Work in Agile Scrum and complete all assigned tasks (JIRAs) as per sprint timelines and standards.

Qualifications
- 5-8 years of relevant experience in data development, ETL, data ingestion and performance optimization.
- Strong SQL skills are essential; experience writing complex queries spanning multiple tables is required.
- Knowledge of Big Data technologies (Impala, Hive, Spark) is nice to have.
- Working knowledge of performance tuning of database queries: understanding the inner workings of the query optimizer, query plans, indexes, partitions, etc.
- Experience in systems analysis and programming of software applications in SQL and other Big Data query languages.
- Working knowledge of data modelling and dimensional modelling tools and techniques.
- Knowledge of working with high-volume data ingestion and high-volume historic data processing is required.
- Exposure to scripting languages like shell scripting and Python is required.
- Working knowledge of consulting project management techniques and methods.
- Knowledge of working in Agile Scrum teams and processes.
- Experience in data quality, data governance, DataOps and the latest data management techniques is a plus.

Education: Bachelor's degree / University degree or equivalent experience
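The query-tuning requirement above (query plans, indexes) can be made concrete with any engine that exposes its plan. A hedged sketch using SQLite's `EXPLAIN QUERY PLAN` on an invented table, purely because it is self-contained; Impala and Hive expose the same idea through their own `EXPLAIN` statements:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (id INTEGER, ticker TEXT, qty INTEGER)")
conn.executemany("INSERT INTO trades VALUES (?, ?, ?)",
                 [(i, "T%d" % (i % 100), i) for i in range(1000)])

def plan(sql):
    # The last column of each EXPLAIN QUERY PLAN row describes the step,
    # e.g. a full table SCAN versus an index SEARCH
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT * FROM trades WHERE ticker = 'T7'"
before = plan(query)                                  # full scan
conn.execute("CREATE INDEX idx_ticker ON trades (ticker)")
after = plan(query)                                   # index search
```

Reading the plan before and after adding an index is the basic loop of the performance-tuning work the posting describes; partitioning in Hive/Impala plays the analogous pruning role at warehouse scale.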
Posted 1 month ago
4.0 - 6.0 years
6 - 8 Lacs
Bengaluru
Work from Office
Job Title: ESG Data/Sustainability Business Analyst
Corporate Title: Associate
Location: Bangalore, India

Role Description
The Sustainability Data and Technology Program is a bank-wide program to deliver a strategic solution for Environmental, Social and Governance data across Deutsche Bank. The Program is part of the Sustainability Strategy Key Deliverable. As a Business Analyst, you will be part of the Data Team. You will be responsible for reviewing business use cases from stakeholders, gathering & documenting requirements, defining high-level implementation steps and creating business user stories. You will work closely with the Product Owner and development teams and bring business and functional analysis skills into the development team to ensure that the implementation of requirements aligns with our business needs and technical quality standards.

What we'll offer you
- 100% reimbursement under childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Accident and term life insurance

Your key responsibilities
- Work with the business and technology stakeholders to define, agree and socialise requirements for ESG Data Sourcing and Transformation, needed for the consumer base within the bank.
- Work with architects and engineers to ensure that both functional and non-functional requirements can be realised in the design and delivery in a way which respects the architecture strategy.
- Analyse complex datasets to derive insights to support requirement definition by completing the data profiling of vendor data.
- Define & document business requirements for review by senior stakeholders, in JIRA and other documentation tools such as Confluence and Draw.io.
- Define acceptance criteria with stakeholders and support user acceptance testing to ensure quality product delivery, supporting defect management.
- Review user stories along with test cases based on appropriate interpretation of the business requirements.
- Liaise with business teams and development teams in Agile ceremonies such as product backlog refinements to review the user stories and prioritise the product backlog, supporting the requirements on their path to release in the production environment.
- Act as a point of contact for the development teams for any business requirement clarifications.
- Provide support to the functional analysts within the development teams to produce analysis artifacts.
- Design & specify data mappings to transform source system data into a format which can be consumed by other business areas within the bank.
- Support the design and conceptualization of new business solution options and articulate identified impacts and risks.
- Monitor and track issues, risks and dependencies on analysis and requirements work.

Your skills and experience
Mandatory skills
- 4+ years of business analyst experience in the banking industry across the full project life cycle, with broad domain knowledge and understanding of core business processes, systems and data flows.
- Experience of specifying ETL processes within data projects.
- Experience of a large system implementation project across multiple business units and multiple geographies; it is essential to be aware of the sort of issues that may arise with a central implementation across different locations.
- Strong knowledge of business analysis methods (e.g. best practices in Management and UAT).
- Demonstrates the maturity and persuasiveness required to engage in business dialogue and support stakeholders.
- Excellent analysis skills and good problem-solving skills.
- Ability to communicate and interpret stakeholders' needs and requirements.
- An understanding of systems delivery lifecycles and Agile delivery methodologies.
- A good appreciation of systems and data architectures.
- Strong discipline in data reconciliation, data integrity, controls and documentation.
- Understanding of controls around software development to manage business requirements.
- Ability to work in virtual teams and matrixed organizations.
- Good team player, facilitator-negotiator and networker; able to lead senior managers towards common goals and build consensus across a diverse group.
- Ability to share information and transfer knowledge and expertise to team members.
- Ability to commit to and prioritise work duties and tasks.
- Ability to work in a fast-paced environment with competing and ever-changing priorities, whilst maintaining a constant focus on delivery.
- Willingness to chip in and cover multiple roles when required, such as cover for project managers, assisting architecture, performing testing and writing up meeting minutes.
- Expertise in Microsoft Office applications (Word, Excel, Visio, PowerPoint).
- Proficient ability to query large datasets (e.g. SQL, Hue, Impala, Python) with a view to testing/analysing content and data profiling.

Desirable skills
- In-depth understanding of the aspects of ESG reporting.
- Knowledge of ESG data vendors.
Posted 1 month ago
8.0 - 13.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Educational: Bachelor of Engineering
Service Line: Strategic Technology Group

Responsibilities
Power Programmer is an important initiative within Global Delivery to develop a team of Full Stack Developers who will work on complex engineering projects, platforms and marketplaces for our clients using emerging technologies. They will be ahead of the technology curve and will be constantly enabled and trained to be polyglots. They are go-getters with a drive to solve end-customer challenges, and will spend most of their time designing and coding.
- End-to-end contribution to technology-oriented development projects.
- Providing solutions with minimum system requirements, in Agile mode.
- Collaborating with Power Programmers, the open source community and tech user groups.
- Custom development of new platforms & solutions.

Opportunities
- Work on large-scale digital platforms and marketplaces.
- Work on complex engineering projects using cloud-native architecture.
- Work with innovative Fortune 500 companies on cutting-edge technologies.
- Co-create and develop new products and platforms for our clients.
- Contribute to open source and continuously upskill in the latest technology areas.
- Incubate tech user groups.

Technical and Professional Requirements: Big Data - Spark, Scala, Hive, Kafka
Preferred Skills: Technology-Big Data-Hbase, Technology-Big Data-Sqoop, Technology-Functional Programming-Scala, Technology-Big Data - Data Processing-Spark-SparkSQL
Posted 1 month ago
5.0 - 10.0 years
14 - 17 Lacs
Pune
Work from Office
As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities like creating pipelines/workflows from source to target and implementing solutions that tackle the client's needs.

Your primary responsibilities include:
- Design, build, optimize and support new and existing data models and ETL processes based on our clients' business requirements.
- Build, deploy and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization.
- Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise
- Must have 5+ years of experience in Big Data: Hadoop, Spark, Scala, Python, HBase, Hive.
- Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git.
- Developed Python and PySpark programs for data analysis.
- Good working experience with Python to develop a custom framework for generating rules (like a rules engine).
- Developed Python code to gather data from HBase and designed solutions implemented using PySpark.
- Used Apache Spark DataFrames/RDDs to apply business transformations, and utilized HiveContext objects to perform read/write operations.

Preferred technical and professional experience
- Understanding of DevOps.
- Experience in building scalable end-to-end data ingestion and processing solutions.
- Experience with object-oriented and/or functional programming languages, such as Python, Java and Scala.
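The "custom framework for generating rules" mentioned above is essentially a rules engine: declarative (field, operator, value) specifications compiled into predicates and applied to records. A minimal, hypothetical sketch (pure Python with made-up field names; a production version would evaluate the same specs as PySpark filter expressions):

```python
def make_rule(field, op, value):
    # Build a predicate function from a declarative (field, op, value) triple
    ops = {
        "eq": lambda a, b: a == b,
        "gt": lambda a, b: a > b,
        "in": lambda a, b: a in b,
    }
    return lambda record: ops[op](record.get(field), value)

def apply_rules(records, rule_specs):
    # Keep only records that satisfy every generated rule
    rules = [make_rule(*spec) for spec in rule_specs]
    return [r for r in records if all(rule(r) for rule in rules)]

# Hypothetical rule specs and records for illustration
specs = [("country", "eq", "IN"), ("score", "gt", 50)]
records = [{"country": "IN", "score": 72},
           {"country": "US", "score": 90},
           {"country": "IN", "score": 10}]
passed = apply_rules(records, specs)
```

Keeping the rules as data rather than code is what lets such a framework "generate" them, e.g. from a config file or a control table.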
Posted 1 month ago
5.0 - 10.0 years
14 - 17 Lacs
Mumbai
Work from Office
As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities like creating pipelines/workflows from source to target and implementing solutions that tackle the client's needs.

Your primary responsibilities include:
- Design, build, optimize and support new and existing data models and ETL processes based on our clients' business requirements.
- Build, deploy and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization.
- Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise
- Must have 5+ years of experience in Big Data: Hadoop, Spark, Scala, Python, HBase, Hive.
- Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git.
- Developed Python and PySpark programs for data analysis.
- Good working experience with Python to develop a custom framework for generating rules (like a rules engine).
- Developed Python code to gather data from HBase and designed solutions implemented using PySpark.
- Used Apache Spark DataFrames/RDDs to apply business transformations, and utilized HiveContext objects to perform read/write operations.

Preferred technical and professional experience
- Understanding of DevOps.
- Experience in building scalable end-to-end data ingestion and processing solutions.
- Experience with object-oriented and/or functional programming languages, such as Python, Java and Scala.
Posted 1 month ago
8.0 - 13.0 years
6 - 10 Lacs
Hyderabad
Work from Office
Experience in SQL and understanding of ETL best practices. Good hands-on experience in ETL/Big Data development. Extensive hands-on experience in Scala. Experience in Spark/YARN, troubleshooting Spark, Linux, and Python. Setting up a Hadoop cluster; backup, recovery, and maintenance.
Posted 1 month ago
5.0 - 10.0 years
7 - 17 Lacs
Hyderabad
Work from Office
Immediate openings for Big Data Engineer/Developer - Pan India - Contract
Experience: 5+ years
Skills: Big Data Engineer/Developer
Location: Pan India
Notice Period: Immediate
Employment Type: Contract
Working Mode: Hybrid

Required skills: Spark-Scala, HQL, Hive, Control-M, Jenkins, Git. Technical analysis and, to some extent, business analysis (knowledge of banking products, credit cards and their transactions).
Posted 1 month ago
5.0 - 8.0 years
4 - 8 Lacs
Telangana
Work from Office
Education: Bachelor's degree in Computer Science, Engineering, or a related field; a Master's degree is preferred.
Experience: Minimum of 4+ years of experience in data engineering or a similar role. Strong programming skills in Python and advanced SQL. Strong experience in NumPy, Pandas, and DataFrames. Strong analytical and problem-solving skills. Excellent communication and collaboration abilities.
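The Pandas/DataFrame experience asked for above centres on split-apply-combine aggregation. As a hedged, dependency-free sketch with invented rows (the pure-Python equivalent of a Pandas `df.groupby("dept")["salary"].mean()`):

```python
from collections import defaultdict

def group_mean(rows, by, value):
    # Split rows by key, sum and count per group, then combine into means
    sums, counts = defaultdict(float), defaultdict(int)
    for row in rows:
        sums[row[by]] += row[value]
        counts[row[by]] += 1
    return {key: sums[key] / counts[key] for key in sums}

# Hypothetical sample data
rows = [
    {"dept": "eng", "salary": 100},
    {"dept": "eng", "salary": 120},
    {"dept": "ops", "salary": 90},
]
means = group_mean(rows, by="dept", value="salary")
```

Pandas and NumPy do the same work vectorised over columns, which is why they dominate for large in-memory datasets.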
Posted 1 month ago
5.0 - 9.0 years
6 - 9 Lacs
Bengaluru
Work from Office
Looking for a senior PySpark developer with 6+ years of hands-on experience.
- Build and manage large-scale data solutions using tools like PySpark, Hadoop, Hive, Python & SQL.
- Create workflows to process data using IBM TWS.
- Able to use PySpark to create different reports and handle large datasets.
- Use HQL/SQL/Hive to query data ad hoc, generate reports, and store data in HDFS.
- Able to deploy code using Bitbucket, PyCharm and TeamCity.
- Can manage people, communicate with several teams, and explain problems/solutions to the business team in a non-technical manner.
Primary skill: PySpark-Hadoop-Spark - One to Three Years, Developer / Software Engineer
Posted 1 month ago
6.0 - 8.0 years
8 - 10 Lacs
Mumbai
Work from Office
Design and implement data architecture and models for Big Data solutions using MapR and Hadoop ecosystems. You will optimize data storage, ensure data scalability, and manage complex data workflows. Expertise in Big Data, Hadoop, and MapR architecture is required for this position.
Posted 1 month ago
4.0 - 6.0 years
6 - 8 Lacs
Chennai
Work from Office
Design and implement Big Data solutions using Hadoop and MapR ecosystem. You will work with data processing frameworks like Hive, Pig, and MapReduce to manage and analyze large data sets. Expertise in Hadoop and MapR is required.
Posted 1 month ago
5.0 - 8.0 years
7 - 10 Lacs
Chennai
Work from Office
Design, implement, and optimize Big Data solutions using Hadoop technologies. You will work on data ingestion, processing, and storage, ensuring efficient data pipelines. Strong expertise in Hadoop, HDFS, and MapReduce is essential for this role.
Posted 1 month ago
4.0 - 6.0 years
6 - 8 Lacs
Mumbai
Work from Office
Develops data processing solutions using Scala and PySpark.
Posted 1 month ago
6.0 - 8.0 years
8 - 10 Lacs
Mumbai
Work from Office
Design and implement big data solutions using Hadoop ecosystem tools like MapR. Develop data models, optimize data storage, and ensure seamless integration of big data technologies into enterprise systems.
Posted 1 month ago
6.0 - 11.0 years
10 - 14 Lacs
Hyderabad, Pune, Chennai
Work from Office
Job type: Contract to hire.
- 10+ years of software development experience building large-scale distributed data processing systems/applications, data engineering, or large-scale internet systems.
- At least 4 years of experience developing/leading Big Data solutions at enterprise scale, with at least one end-to-end implementation.
- Strong experience in programming languages: Java/J2EE/Scala.
- Good experience in Spark/Hadoop/HDFS architecture, YARN, Confluent Kafka, HBase, Hive, Impala and NoSQL databases.
- Experience with batch processing and AutoSys job scheduling and monitoring.
- Performance analysis, troubleshooting and resolution (this includes familiarity with and investigation of Cloudera/Hadoop logs).
- Work with Cloudera on open issues that would result in cluster configuration changes, and implement these as needed.
- Strong experience with databases such as SQL, Hive, Elasticsearch, HBase, etc.
- Knowledge of Hadoop security, data management and governance.
Primary skills: Java/Scala, ETL, Spark, Hadoop, Hive, Impala, Sqoop, HBase, Confluent Kafka, Oracle, Linux, Git, Jenkins CI/CD
Posted 1 month ago
4.0 - 9.0 years
4 - 7 Lacs
Bengaluru
Work from Office
Immediate job opening for Python+SQL (C2H, Pan India).
Skill: Python + SQL
Job description: Strong programming skills in Python and advanced SQL. Strong experience in NumPy, Pandas, and DataFrames. Strong analytical and problem-solving skills. Excellent communication and collaboration abilities.
Posted 1 month ago
6.0 - 8.0 years
25 - 30 Lacs
Bengaluru
Work from Office
6+ years of experience in information technology, with a minimum of 3-5 years managing and administering Hadoop/Cloudera environments. Cloudera CDP (Cloudera Data Platform), Cloudera Manager, and related tools. Hadoop ecosystem components (HDFS, YARN, Hive, HBase, Spark, Impala, etc.). Linux system administration, with experience in scripting languages (Python, Bash, etc.) and configuration management tools (Ansible, Puppet, etc.). Security tools (Kerberos, Ranger, Sentry), Docker, Kubernetes, Jenkins. Cloudera Certified Administrator for Apache Hadoop (CCAH) or similar certification. Cluster management, optimization, best-practice implementation, collaboration, and support.
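The scripting side of cluster administration often amounts to small automated checks. A minimal, hypothetical sketch: the node names and usage figures below are invented, and in practice they would come from `hdfs dfsadmin -report` output or the Cloudera Manager API:

```python
def usage_alerts(node_usage, threshold=0.80):
    """Return the nodes whose disk-usage fraction exceeds the threshold.

    node_usage: {hostname: used_fraction}, e.g. parsed from
    `hdfs dfsadmin -report` or a monitoring API (assumption here).
    """
    return sorted(host for host, used in node_usage.items() if used > threshold)

alerts = usage_alerts({"dn1": 0.55, "dn2": 0.91, "dn3": 0.83})
```

A check like this is typically wired into cron or a monitoring agent so that capacity problems surface before YARN containers start failing.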
Posted 1 month ago
5.0 - 9.0 years
15 - 19 Lacs
Bengaluru
Work from Office
Project description: During the 2008 financial crisis, many big banks failed or faced trouble due to liquidity issues. Lack of liquidity can kill a financial institution overnight, which is why it is critical to constantly monitor liquidity risk and properly maintain collateral. We are looking for a number of talented developers to join our team in Pune, which is building a liquidity risk and collateral management platform for one of the biggest investment banks in the world. The platform is a set of front-end tools and back-end engines. It helps the bank increase efficiency and scalability, reduce operational risk, and eliminate the majority of manual interventions in processing margin calls. Responsibilities: The candidate will develop new functionality for the Liquidity Risk platform, working closely with other teams across the globe. Skills. Must have: Big Data experience (6+ years); Java/Python, J2EE, Spark, Hive; SQL databases; UNIX shell; strong experience in Apache Hadoop, Spark, Hive, Impala, YARN, Talend, Hue; Big Data reporting, querying, and analysis. Nice to have: Spark calculators based on business logic/rules; basic performance tuning and troubleshooting knowledge; experience with all aspects of the SDLC; experience with complex deployment infrastructures; knowledge of software architecture, design, and testing; data-flow automation (Apache NiFi, Airflow, etc.); understanding of the difference between OOP and functional design approaches; understanding of event-driven architecture; Spring, Maven, Git, uDeploy. Other: Languages: English B2 (Upper Intermediate). Seniority: Senior.
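The margin-call processing the platform automates can be sketched in deliberately simplified form. The haircut and threshold parameters below are illustrative assumptions, not the bank's actual methodology:

```python
def margin_call_amount(exposure, collateral_value, haircut=0.05, threshold=0.0):
    """Simplified margin-call calculation (illustrative only).

    Posted collateral is discounted by a haircut; a call is raised when
    the discounted collateral falls short of the exposure by more than
    the agreed threshold.
    """
    effective_collateral = collateral_value * (1 - haircut)
    shortfall = exposure - effective_collateral
    return max(shortfall - threshold, 0.0)

call = margin_call_amount(exposure=1_000_000, collateral_value=900_000)
```

In the real platform a calculation like this would run as a Spark job over millions of positions per day, which is where the Big Data skill set above comes in.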
Posted 1 month ago
5.0 - 8.0 years
5 - 8 Lacs
Hyderabad
Work from Office
Must-have skills: Azure Databricks, Python and PySpark, Spark. Expert-level understanding of distributed computing principles. Expert-level knowledge of and experience in Apache Spark. Hands-on experience with Azure Databricks, Data Factory, Data Lake Store/Blob Storage, and SQL DB. Experience creating Big Data pipelines with Azure components. Hands-on programming with Python. Proficiency with Hadoop v2, MapReduce, HDFS, and Sqoop. Experience building stream-processing systems using technologies such as Apache Storm or Spark Streaming. Experience with messaging systems such as Kafka or RabbitMQ. Good understanding of Big Data querying tools such as Hive and Impala. Experience integrating data from multiple sources such as RDBMS (SQL Server, Oracle), ERP, and files. Good understanding of SQL queries, joins, stored procedures, and relational schemas. Experience with NoSQL databases such as HBase, Cassandra, and MongoDB. Knowledge of ETL techniques and frameworks. Performance tuning of Spark jobs. Experience designing and implementing Big Data solutions.
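The joins-and-aggregation requirement that recurs across these postings can be demonstrated with an in-memory SQLite stand-in for the RDBMS sources; the table names and rows are invented for illustration:

```python
import sqlite3

# In-memory stand-in for the SQL Server/Oracle sources named above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER, name TEXT);
    CREATE TABLE orders (id INTEGER, customer_id INTEGER, total REAL);
    INSERT INTO customers VALUES (1, 'Acme'), (2, 'Globex');
    INSERT INTO orders VALUES (10, 1, 250.0), (11, 1, 100.0), (12, 2, 75.0);
""")

# An inner join plus aggregation, the bread and butter of ETL queries.
rows = conn.execute("""
    SELECT c.name, SUM(o.total)
    FROM orders o JOIN customers c ON o.customer_id = c.id
    GROUP BY c.name
    ORDER BY c.name
""").fetchall()
```

The same statement runs essentially unchanged as Spark SQL against a Databricks table, which is why strong plain-SQL fundamentals transfer directly to these Big Data roles.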
Posted 1 month ago