3.0 - 5.0 years
12 - 16 Lacs
Kochi
Work from Office
As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS cloud data platform.
Responsibilities:
Build data pipelines to ingest, process, and transform data from files, streams, and databases. Process the data with Spark, Python, PySpark, Scala, and Hive, HBase, or other NoSQL databases on cloud data platforms (AWS) or HDFS (see the sketch below).
Develop efficient software code for multiple use cases built on the platform, leveraging the Spark framework with Python or Scala and Big Data technologies.
Develop streaming pipelines.
Work with Hadoop / AWS ecosystem components (Apache Spark, Kafka, cloud computing services, etc.) to implement scalable solutions that meet ever-increasing data volumes.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise:
Total 3-5+ years of experience in Data Management (DW, DL, data platform, lakehouse) and data engineering skills.
Minimum 4+ years of experience in Big Data technologies, with extensive data engineering experience in Spark with Python or Scala.
Minimum 3 years of experience on cloud data platforms on AWS.
Experience with AWS EMR / AWS Glue / Databricks, AWS Redshift, and DynamoDB.
Good to excellent SQL skills.
Preferred technical and professional experience:
Certification in AWS, and Databricks or Cloudera Spark certified developers.
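As an illustration only (not part of the posting), here is a minimal PySpark sketch of the kind of batch pipeline described above; the bucket, paths, and column names are hypothetical placeholders:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-batch-etl").getOrCreate()

# Ingest: raw CSV files landed on S3 (path and columns are illustrative)
raw = spark.read.option("header", "true").csv("s3://example-bucket/raw/orders/")

# Transform: cast types and drop invalid rows
clean = (raw
         .withColumn("amount", F.col("amount").cast("double"))
         .filter(F.col("amount") > 0))

# Aggregate to a curated daily summary
daily = (clean.groupBy("order_date")
         .agg(F.sum("amount").alias("total_amount"),
              F.count("*").alias("order_count")))

# Load: write partitioned Parquet back to the platform
(daily.write
 .mode("overwrite")
 .partitionBy("order_date")
 .parquet("s3://example-bucket/curated/daily_orders/"))
```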
Posted 3 weeks ago
2.0 - 5.0 years
14 - 17 Lacs
Pune
Work from Office
As a Big Data Engineer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in choosing the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.
In this role, your responsibilities may include:
Developing, maintaining, evaluating, and testing big data solutions.
Data engineering activities such as creating source-to-target pipelines/workflows and implementing solutions that address client needs.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise:
Big Data development: Hadoop, Hive, Spark, PySpark, and strong SQL.
Ability to incorporate a variety of statistical and machine learning techniques.
Basic understanding of cloud platforms (AWS, Azure, etc.).
Ability to use programming languages such as Java, Python, or Scala to build pipelines that extract and transform data from a repository to a data consumer (see the sketch below).
Ability to use Extract, Transform, and Load (ETL) tools and/or data integration or federation tools to prepare and transform data as needed.
Ability to use leading-edge tools such as Linux, SQL, Python, Spark, Hadoop, and Java.
Preferred technical and professional experience:
Basic understanding of or experience with predictive/prescriptive modeling.
You thrive on teamwork and have excellent verbal and written communication skills.
Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions.
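A hedged sketch of what "extract and transform data from a repository to a data consumer" can look like in PySpark; the JDBC URL, table, credentials, and output path are placeholders, and the matching JDBC driver must be on the Spark classpath:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("repo-to-consumer").getOrCreate()

# Extract from a relational repository over JDBC (details are placeholders)
src = (spark.read.format("jdbc")
       .option("url", "jdbc:postgresql://db-host:5432/sales")
       .option("dbtable", "public.transactions")
       .option("user", "etl_user")
       .option("password", "***")
       .load())

# Transform with Spark SQL
src.createOrReplaceTempView("transactions")
summary = spark.sql("""
    SELECT region, SUM(amount) AS revenue
    FROM transactions
    GROUP BY region
""")

# Load the result where the downstream consumer expects it
summary.write.mode("overwrite").parquet("/data/consumer/revenue_by_region")
```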
Posted 3 weeks ago
4.0 - 9.0 years
12 - 16 Lacs
Kochi
Work from Office
As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS cloud data platform.
Responsibilities:
Build data pipelines to ingest, process, and transform data from files, streams, and databases. Process the data with Spark, Python, PySpark, Scala, and Hive, HBase, or other NoSQL databases on cloud data platforms (AWS) or HDFS.
Develop efficient software code for multiple use cases built on the platform, leveraging the Spark framework with Python or Scala and Big Data technologies.
Develop streaming pipelines (see the streaming sketch below).
Work with Hadoop / AWS ecosystem components (Apache Spark, Kafka, cloud computing services, etc.) to implement scalable solutions that meet ever-increasing data volumes.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise:
Minimum 4+ years of experience in Big Data technologies, with extensive data engineering experience in Spark with Python or Scala.
Minimum 3 years of experience on cloud data platforms on AWS.
Experience with AWS EMR / AWS Glue / Databricks, AWS Redshift, and DynamoDB.
Good to excellent SQL skills.
Exposure to streaming solutions and message brokers like Kafka.
Preferred technical and professional experience:
Certification in AWS, and Databricks or Cloudera Spark certified developers.
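Since this role calls out streaming pipelines, here is a minimal, assumption-laden Spark Structured Streaming sketch reading from Kafka; the broker, topic, schema, and S3 paths are hypothetical, and the job needs the spark-sql-kafka package available:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka-stream").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
])

# Read the topic as an unbounded streaming DataFrame
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "payments")
          .load())

# Kafka values arrive as bytes; parse the JSON payload
parsed = (events
          .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# Sink with checkpointing so the stream can recover after failures
query = (parsed.writeStream
         .format("parquet")
         .option("path", "s3://example-bucket/streams/payments/")
         .option("checkpointLocation", "s3://example-bucket/checkpoints/payments/")
         .start())

query.awaitTermination()
```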
Posted 3 weeks ago
3.0 - 7.0 years
10 - 14 Lacs
Bengaluru
Work from Office
As a Software Developer, you'll participate in many aspects of the software development lifecycle, such as design, code implementation, testing, and support. You will create software that enables your clients' hybrid-cloud and AI journeys.
Your primary responsibilities include:
Comprehensive feature development and issue resolution: work on end-to-end feature development and solve challenges faced during implementation.
Stakeholder collaboration and issue resolution: collaborate with key internal and external stakeholders to understand problems and issues with the product and its features, and resolve them within the defined SLAs.
Continuous learning and technology integration: eagerly learn new technologies and apply them in feature development.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise:
Primary skills: Core Java, Spring Boot, Java 2/EE, microservices.
Hadoop ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.).
Spark.
Good to have: Python.
Preferred technical and professional experience: None
Posted 3 weeks ago
8.0 - 13.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Educational qualification: Bachelor of Engineering
Service line: Strategic Technology Group
Responsibilities:
Power Programmer is an important initiative within Global Delivery to develop a team of Full Stack Developers who will work on complex engineering projects, platforms, and marketplaces for our clients using emerging technologies. They will stay ahead of the technology curve and be continually enabled and trained to be polyglots. They are go-getters with a drive to solve end-customer challenges, and will spend most of their time designing and coding. The role involves:
End-to-end contribution to technology-oriented development projects.
Providing solutions with minimum system requirements, in Agile mode.
Collaborating with other Power Programmers, the open-source community, and tech user groups.
Custom development of new platforms and solutions.
Working on large-scale digital platforms and marketplaces.
Working on complex engineering projects using cloud-native architecture.
Working with innovative Fortune 500 companies on cutting-edge technologies.
Co-creating and developing new products and platforms for our clients.
Contributing to open source and continuously upskilling in the latest technology areas.
Incubating tech user groups.
Technical and Professional: Big Data - Spark, Scala, Hive, Kafka
Preferred Skills:
Technology-Big Data-Hbase
Technology-Big Data-Sqoop
Technology-Java-Apache-Scala
Technology-Functional Programming-Scala
Technology-Big Data - Data Processing-Map Reduce
Technology-Big Data - Data Processing-Spark
Posted 3 weeks ago
4.0 - 8.0 years
8 - 12 Lacs
Bengaluru
Work from Office
Educational qualification: Bachelor of Engineering
Service line: Strategic Technology Group
Responsibilities:
Power Programmer is an important initiative within Global Delivery to develop a team of Full Stack Developers who will work on complex engineering projects, platforms, and marketplaces for our clients using emerging technologies. They will stay ahead of the technology curve and be continually enabled and trained to be polyglots. They are go-getters with a drive to solve end-customer challenges, and will spend most of their time designing and coding. The role involves:
End-to-end contribution to technology-oriented development projects.
Providing solutions with minimum system requirements, in Agile mode.
Collaborating with other Power Programmers, the open-source community, and tech user groups.
Custom development of new platforms and solutions.
Working on large-scale digital platforms and marketplaces.
Working on complex engineering projects using cloud-native architecture.
Working with innovative Fortune 500 companies on cutting-edge technologies.
Co-creating and developing new products and platforms for our clients.
Contributing to open source and continuously upskilling in the latest technology areas.
Incubating tech user groups.
Technical and Professional: Python + Lambda + AWS + PySpark (see the sketch below)
Preferred Skills:
Technology-Big Data - Data Processing-Spark
Technology-Machine Learning-Python
Technology-Infrastructure Security-Reverse Malware Engineering-PANDA
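Given the "Python + Lambda + AWS + PySpark" stack this posting names, a hedged sketch of a Lambda handler that kicks off a hypothetical PySpark job on AWS Glue when a file lands in S3; the job name and argument keys are placeholders:

```python
import boto3

glue = boto3.client("glue")

def handler(event, context):
    """Triggered by an S3 put event; starts a PySpark Glue job for the new object."""
    record = event["Records"][0]["s3"]
    run = glue.start_job_run(
        JobName="example-pyspark-etl",  # hypothetical Glue job name
        Arguments={
            "--input_path": f"s3://{record['bucket']['name']}/{record['object']['key']}",
        },
    )
    return {"JobRunId": run["JobRunId"]}
```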
Posted 3 weeks ago
5.0 - 8.0 years
8 - 13 Lacs
Bengaluru
Work from Office
Educational qualification: Bachelor of Engineering
Service line: Strategic Technology Group
Responsibilities:
Power Programmer is an important initiative within Global Delivery to develop a team of Full Stack Developers who will work on complex engineering projects, platforms, and marketplaces for our clients using emerging technologies. They will stay ahead of the technology curve and be continually enabled and trained to be polyglots. They are go-getters with a drive to solve end-customer challenges, and will spend most of their time designing and coding. The role involves:
End-to-end contribution to technology-oriented development projects.
Providing solutions with minimum system requirements, in Agile mode.
Collaborating with other Power Programmers, the open-source community, and tech user groups.
Custom development of new platforms and solutions.
Working on large-scale digital platforms and marketplaces.
Working on complex engineering projects using cloud-native architecture.
Working with innovative Fortune 500 companies on cutting-edge technologies.
Co-creating and developing new products and platforms for our clients.
Contributing to open source and continuously upskilling in the latest technology areas.
Incubating tech user groups.
Technical and Professional: Big Data - Spark, Scala, Hive, Kafka
Preferred Skills:
Technology-Big Data-Big Data - ALL
Technology-Big Data - Hadoop-Hadoop
Technology-Big Data-Hbase
Technology-Big Data-Sqoop
Technology-Java-Apache-Scala
Technology-Functional Programming-Scala
Technology-IOT Platform-Custom IOT Platform - Big Data Processing Analytics
Posted 3 weeks ago
3.0 - 5.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Educational qualification: Bachelor of Engineering
Service line: Data & Analytics Unit
Responsibilities:
A day in the life of an Infoscion: as part of the Infosys delivery team, your primary role is to interface with the client for quality assurance and issue resolution, ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture, and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, and perform code reviews and unit test plan reviews. You will lead and guide your teams in developing optimized, high-quality code deliverables, continual knowledge management, and adherence to organizational guidelines and processes. You will be a key contributor to building efficient programs/systems. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!
Technical and Professional:
Technology-Functional Programming-Scala
Technology-Java-Apache-Scala
Preferred Skills:
Technology-Java-Apache-Scala
Technology-Functional Programming-Scala
Posted 3 weeks ago
5.0 - 8.0 years
7 - 10 Lacs
Mumbai, Hyderabad, Chennai
Work from Office
Your Role
Should have worked extensively on metadata, rules, and member lists in HFM. VBScript knowledge is mandatory. Understand and communicate the consequences of changes made.
Should have worked on monthly/quarterly/yearly validations.
Should have worked on ICP accounts, journals, and intercompany reports.
Should have worked on data forms and data grids.
Should be able to work on FDMEE mappings and be fluent with FDMEE.
Should have worked on Financial Reporting Studio.
Your Profile
Performing UAT with the business on CRs.
Able to resolve business users' HFM queries, if any.
Agile process knowledge is an added advantage.
What you'll love about working here
You can shape your career with us. We offer a range of career paths and internal opportunities within the Capgemini group. You will also get personalized career guidance from our leaders.
You will get comprehensive wellness benefits, including health checks, telemedicine, insurance with top-ups, elder care, partner coverage, and new-parent support via flexible work.
You will have the opportunity to learn on one of the industry's largest digital learning platforms, with access to 250,000+ courses and numerous certifications.
We're committed to ensuring that people of all backgrounds feel encouraged and have a sense of belonging at Capgemini. You are valued for who you are.
Every Monday, kick off the week with a musical performance by our in-house band, The Rubber Band. Also get to participate in internal sports events, yoga challenges, or marathons.
At Capgemini, you can work on cutting-edge projects in tech and engineering with industry leaders or create solutions to overcome societal and environmental challenges.
About Capgemini
Capgemini is a global business and technology transformation partner, helping organizations accelerate their dual transition to a digital and sustainable world while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over-55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions, leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, generative AI, cloud, and data, combined with its deep industry expertise and partner ecosystem.
Locations: Hyderabad, Chennai, Mumbai, Bengaluru
Posted 3 weeks ago
3.0 - 5.0 years
5 - 7 Lacs
Mumbai, New Delhi, Bengaluru
Work from Office
We are seeking a skilled Big Data Developer with 3+ years of experience to develop, maintain, and optimize large-scale data pipelines using frameworks like Spark, PySpark, and Airflow. The role involves working with SQL, Impala, Hive, and PL/SQL for advanced data transformations and analytics, designing scalable data storage systems, and integrating structured and unstructured data using tools like Sqoop. The ideal candidate will collaborate with cross-functional teams to implement data warehousing strategies and leverage BI tools for insights. Proficiency in Python programming, workflow orchestration with Airflow (see the DAG sketch below), and Unix/Linux environments is essential.
Locations: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
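Purely as a hedged illustration of "workflow orchestration with Airflow", a minimal DAG that ingests with Sqoop and then runs a Spark job; the connection IDs, paths, and table names are invented, and the Apache Spark provider package is assumed to be installed:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="daily_orders_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Ingest a relational table into HDFS (connection string is a placeholder)
    ingest = BashOperator(
        task_id="sqoop_ingest",
        bash_command=(
            "sqoop import --connect jdbc:mysql://db-host/sales "
            "--table orders --target-dir /data/raw/orders"
        ),
    )

    # Transform the ingested data with a PySpark job
    transform = SparkSubmitOperator(
        task_id="spark_transform",
        application="/opt/jobs/transform.py",
        conn_id="spark_default",
    )

    ingest >> transform
```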
Posted 3 weeks ago
4.0 - 9.0 years
10 - 12 Lacs
Bengaluru, Doddakannell, Karnataka
Work from Office
We are seeking a highly skilled Data Engineer with expertise in ETL techniques, programming, and big data technologies. The candidate will play a critical role in designing, developing, and maintaining robust data pipelines, ensuring data accuracy, consistency, and accessibility. This role involves collaboration with cross-functional teams to enrich and maintain a central data repository for advanced analytics and machine learning. The ideal candidate should have experience with cloud-based data platforms, data modeling, and data governance processes.
Location: Bengaluru (Doddakannell, Sarjapur Road), Karnataka
Posted 3 weeks ago
3.0 - 6.0 years
25 - 30 Lacs
Chennai
Work from Office
Zalaris is looking for a Senior Data Engineer to join our dynamic team and embark on a rewarding career journey:
Liaising with coworkers and clients to elucidate the requirements for each task.
Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed.
Reformulating existing frameworks to optimize their functioning.
Testing such structures to ensure that they are fit for use.
Preparing raw data for manipulation by data scientists.
Detecting and correcting errors in your work.
Ensuring that your work remains backed up and readily accessible to relevant coworkers.
Remaining up to date with industry standards and technological advancements that will improve the quality of your outputs.
Posted 3 weeks ago
3.0 - 6.0 years
25 - 30 Lacs
Pune
Work from Office
Diverse Lynx is looking for a Data Engineer to join our dynamic team and embark on a rewarding career journey:
Liaising with coworkers and clients to elucidate the requirements for each task.
Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed.
Reformulating existing frameworks to optimize their functioning.
Testing such structures to ensure that they are fit for use.
Preparing raw data for manipulation by data scientists.
Detecting and correcting errors in your work.
Ensuring that your work remains backed up and readily accessible to relevant coworkers.
Remaining up to date with industry standards and technological advancements that will improve the quality of your outputs.
Posted 3 weeks ago
2.0 - 5.0 years
6 - 10 Lacs
Hyderabad
Work from Office
We are looking for an experienced Software Engineer with experience in AWS cloud services and distributed systems. You should have a foundation in software development and experience using cloud and Big Data platforms to build scalable and efficient solutions. You will report to a Senior Manager and work from the Hyderabad office two days a week.
Responsibilities:
Collaborate with teams to understand requirements and design software solutions that use AWS cloud services and Big Data technologies.
Develop and maintain scalable and reliable software applications using programming languages such as Java, Python, or Scala.
Use AWS services including EC2, S3, Lambda, Athena, RDS, DynamoDB, EMR, and others to build cloud-native applications and microservices.
Design and develop data processing pipelines using Big Data frameworks like Hadoop, Spark, Kafka, Presto, and Hive.
Implement monitoring, logging, and alerting solutions to ensure system reliability, availability, and performance.
Collaborate with DevOps teams to automate deployment processes and ensure smooth integration of software components.
Experience and skills:
Bachelor's degree in Computer Science, Engineering, or a related field.
2-5 years of experience developing software applications as a Software Engineer.
Proficiency in programming languages such as Java, Python, or Scala, and familiarity with software development methodologies and best practices.
Hands-on experience building Big Data applications with AWS cloud services and infrastructure, including compute, storage, networking, and security.
Ability to develop custom operators and sensors to extend Airflow functionality and support specific use cases (see the sketch below).
Experience developing batch applications using Spark with Scala or PySpark.
Familiarity with Big Data frameworks such as Hadoop, Spark, and Kafka, and related tools like Hive, HBase, and Presto.
About Experian / Additional information:
Our uniqueness is that we truly celebrate yours. Experian's culture and people are important differentiators. We take our people agenda very seriously and focus on what truly matters: DEI, work/life balance, development, authenticity, engagement, collaboration, wellness, reward & recognition, volunteering... the list goes on. Experian's strong people-first approach is award-winning: Great Place To Work in 24 countries, FORTUNE Best Companies to Work For, and Glassdoor Best Places to Work (globally, 4.4 stars), to name a few. Check out Experian Life on social media or our careers site to understand why.
Benefits:
Experian cares for employees' work-life balance, health, safety, and wellbeing. In support of this endeavor, we offer best-in-class family well-being benefits, enhanced medical benefits, and paid time off.
Experian Careers - Creating a better tomorrow together
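The posting mentions developing custom operators and sensors to extend Airflow; a minimal, hypothetical example of such an operator (the class name, bucket handling, and failure policy are assumptions, not Experian code):

```python
import boto3
from airflow.models.baseoperator import BaseOperator


class S3PartitionCheckOperator(BaseOperator):
    """Fail the task if an expected S3 prefix contains no objects."""

    def __init__(self, bucket: str, prefix: str, **kwargs):
        super().__init__(**kwargs)
        self.bucket = bucket
        self.prefix = prefix

    def execute(self, context):
        s3 = boto3.client("s3")
        resp = s3.list_objects_v2(Bucket=self.bucket, Prefix=self.prefix)
        count = resp.get("KeyCount", 0)
        if count == 0:
            raise ValueError(
                f"No objects found under s3://{self.bucket}/{self.prefix}"
            )
        self.log.info("Found %d objects under %s", count, self.prefix)
        return count
```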
Posted 3 weeks ago
3.0 - 5.0 years
5 - 9 Lacs
New Delhi, Ahmedabad, Bengaluru
Work from Office
We are seeking a skilled Big Data Developer with 3+ years of experience to develop, maintain, and optimize large-scale data pipelines using frameworks like Spark, PySpark, and Airflow. The role involves working with SQL, Impala, Hive, and PL/SQL for advanced data transformations and analytics, designing scalable data storage systems, and integrating structured and unstructured data using tools like Sqoop. The ideal candidate will collaborate with cross-functional teams to implement data warehousing strategies and leverage BI tools for insights. Proficiency in Python programming, workflow orchestration with Airflow, and Unix/Linux environments is essential.
Locations: Remote - Delhi / NCR, Bangalore/Bengaluru, Hyderabad/Secunderabad, Chennai, Pune, Kolkata, Ahmedabad, Mumbai
Posted 3 weeks ago
6.0 - 11.0 years
32 - 40 Lacs
Hyderabad, Pune, Chennai
Hybrid
Data Software Engineer - Spark, Python, and AWS, Kafka, Azure Databricks, or GCP
Job description:
1. 5-12 years of experience in Big Data and data-related technologies
2. Expert-level understanding of distributed computing principles
3. Expert-level knowledge of and experience in Apache Spark
4. Hands-on programming with Python
5. Proficiency with Hadoop v2, MapReduce, HDFS, and Sqoop
6. Experience building stream-processing systems using technologies such as Apache Storm or Spark Streaming
7. Experience with messaging systems such as Kafka or RabbitMQ
8. Good understanding of Big Data querying tools such as Hive and Impala
9. Experience integrating data from multiple sources such as RDBMS (SQL Server, Oracle), ERP systems, and files
10. Good understanding of SQL queries, joins, stored procedures, and relational schemas
11. Experience with NoSQL databases such as HBase, Cassandra, and MongoDB
12. Knowledge of ETL techniques and frameworks
13. Performance tuning of Spark jobs (see the sketch below)
14. Experience with native cloud data services: AWS, Azure Databricks, or GCP
15. Ability to lead a team efficiently
16. Experience designing and implementing Big Data solutions
17. Practitioner of Agile methodology
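On point 13, a small hedged PySpark sketch of two common Spark tuning levers (shuffle-partition sizing and an explicit broadcast join); all paths and values are illustrative and depend on cluster size and data volume:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Example tuning value only; right-size to your cluster and data
spark = (SparkSession.builder
         .appName("tuned-join")
         .config("spark.sql.shuffle.partitions", "400")
         .getOrCreate())

facts = spark.read.parquet("/data/facts")   # large fact table
dims = spark.read.parquet("/data/dims")     # small dimension table

# Broadcasting the small side avoids shuffling the large table
joined = facts.join(F.broadcast(dims), "dim_id")

# Cache only when the result feeds several downstream actions
joined.cache()
print(joined.count())
```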
Posted 3 weeks ago
3.0 - 5.0 years
5 - 7 Lacs
Hyderabad
Work from Office
YOUR IMPACT
Are you passionate about developing mission-critical, high-quality software solutions, using cutting-edge technology, in a dynamic environment? We are Compliance Engineering, a global team of more than 300 engineers and scientists who work on the most complex, mission-critical problems. We:
build and operate a suite of platforms and applications that prevent, detect, and mitigate regulatory and reputational risk across the firm.
have access to the latest technology and to massive amounts of structured and unstructured data.
leverage modern frameworks to build responsive and intuitive UX/UI and Big Data applications.
Compliance Engineering is looking to fill several big data software engineering roles. Your first deliverable and success criterion will be the deployment, in 2025, of new complex data pipelines and surveillance models to detect inappropriate trading activity.
HOW YOU WILL FULFILL YOUR POTENTIAL
As a member of our team, you will:
partner globally with sponsors, users, and engineering colleagues across multiple divisions to create end-to-end solutions,
learn from experts,
leverage various technologies, including Java, Spark, Hadoop, Flink, MapReduce, HBase, JSON, Protobuf, Presto, Elasticsearch, Kafka, and Kubernetes,
be able to innovate and incubate new ideas,
have an opportunity to work on a broad range of problems, including negotiating data contracts, capturing data quality metrics (see the sketch below), processing large-scale data, and building surveillance detection models,
be involved in the full life cycle: defining, designing, implementing, testing, deploying, and maintaining software systems across our products.
QUALIFICATIONS
A successful candidate will possess the following attributes:
A Bachelor's or Master's degree in Computer Science, Computer Engineering, or a similar field of study.
Expertise in Java, as well as proficiency with databases and data manipulation.
Experience in end-to-end solutions, automated testing, and SDLC concepts.
The ability (and tenacity) to clearly express ideas and arguments in meetings and on paper.
Experience in some of the following is desired and can set you apart from other candidates:
developing in large-scale systems, such as MapReduce on Hadoop/HBase,
data analysis using tools such as SQL, Spark SQL, and Zeppelin/Jupyter,
API design, such as creating interconnected services,
knowledge of the financial industry and compliance or risk functions,
ability to influence stakeholders.
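The role mentions capturing data quality metrics over large-scale data; although the team's stack is Java-centric, here is a brief, hypothetical PySpark analogue of what a completeness/validity check on a trading dataset might look like (the input path and columns are invented):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-metrics").getOrCreate()
trades = spark.read.parquet("/data/trades")  # illustrative input

# Simple completeness and validity metrics for one pipeline run
metrics = trades.agg(
    F.count("*").alias("row_count"),
    F.sum(F.col("trade_id").isNull().cast("int")).alias("null_trade_ids"),
    F.sum((F.col("quantity") <= 0).cast("int")).alias("nonpositive_qty"),
).collect()[0]

print(metrics.asDict())
```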
Posted 3 weeks ago
3.0 - 10.0 years
7 - 11 Lacs
Gurugram
Work from Office
Who are we?
Amdocs helps those who build the future to make it amazing. With our market-leading portfolio of software products and services, we unlock our customers' innovative potential, empowering them to provide next-generation communication and media experiences for both individual end users and enterprise customers. Our employees around the globe are here to accelerate service providers' migration to the cloud, enable them to differentiate in the 5G era, and digitalize and automate their operations. Listed on the NASDAQ Global Select Market, Amdocs had revenue of $5.00 billion in fiscal 2024. For more information, visit www.amdocs.com.
In one sentence:
The DB administrator handles physical and applicative database administration activities, supporting the end-to-end life cycle of the product from a database perspective.
What will your job look like?
You will support the following levels: physical, covering the physical and technical aspects (e.g., storage, security, networking); and application, handling all application-related issues (e.g., queries, users, embedded SQLs).
Ensure database resources are sized accurately and a design strategy is developed to keep the database at a healthy size.
Ensure availability and performance of multi-database and application environments with very large volumes and sizes.
Perform routine DBA tasks like database maintenance, backups, recovery, tablespace management, and upgrades.
Execute periodic health checks for databases and recommend changes to be executed in the production environment to ensure efficient performance (see the sketch below).
Interact and work with multiple infrastructure and IT teams as part of environment setup, maintenance, and support.
Work closely with developers, assisting them with database structure design according to business needs (e.g., indexes, constraints, integrity).
All you need is:
6-10 years of experience with Oracle databases.
3-4 years of experience with PostgreSQL.
3-4 years of experience with HBase / Hadoop.
Oracle database architecture knowledge and administration; thorough understanding of Oracle 12c and 19c database features and upgrades.
Extensive hands-on interaction with large database management systems.
Real Application Clusters (RAC) and ASM/storage areas.
Backup and recovery: RMAN, Data Guard, and knowledge of various restore and recovery scenarios.
Performance tuning: parallel query and query tuning.
Partitioning; understanding of views and materialized views.
Database security.
GoldenGate and replication.
Enterprise Manager.
Installing and maintaining the Oracle database.
Why you will love this job:
Drive growth by constantly looking for opportunities to bring more value to new and existing customers, calling upon your superb communication skills and the ability to clearly translate the business benefits that our service offerings provide.
You will be a key member of a global, dynamic, and highly collaborative team with various possibilities for personal and professional development.
Join us in our expanding organization, with ever-growing opportunities for personal growth and one of the highest employee engagement scores in Amdocs!
We provide stellar benefits ranging from health insurance to paid time off, sick leave, and parental leave!
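As a hedged sketch of the "periodic health checks" duty, a small Python script using the python-oracledb driver to report tablespace usage; the credentials and DSN are placeholders, and the query assumes access to the DBA_TABLESPACE_USAGE_METRICS view:

```python
import oracledb

# Connection details are placeholders for a monitoring account
conn = oracledb.connect(user="monitor", password="***", dsn="db-host/ORCLPDB1")

with conn.cursor() as cur:
    # Requires SELECT privilege on the DBA_* views
    cur.execute("""
        SELECT tablespace_name, ROUND(used_percent, 1)
        FROM dba_tablespace_usage_metrics
        ORDER BY used_percent DESC
    """)
    for name, pct in cur:
        print(f"{name}: {pct}% used")

conn.close()
```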
Posted 3 weeks ago
5.0 - 10.0 years
10 - 20 Lacs
Noida, Hyderabad, Greater Noida
Work from Office
Streaming data - technical skills requirements:
Experience: 5+ years.
Solid hands-on and solution-architecting experience in Big Data technologies (AWS preferred).
Hands-on experience with AWS DynamoDB, EKS, Kafka, Kinesis, Glue, and EMR (a small Kinesis producer sketch follows).
Hands-on experience with a programming language like Scala with Spark.
Good command of and working experience with Hadoop MapReduce, HDFS, Hive, HBase, and/or NoSQL databases.
Hands-on working experience with a data engineering/analytics platform (Hortonworks, Cloudera, MapR, or AWS), AWS preferred.
Hands-on experience with data ingestion tools: Apache NiFi, Apache Airflow, Sqoop, and Oozie.
Hands-on working experience with data processing at scale in event-driven systems with message queues (Kafka, Flink, Spark Streaming).
Hands-on working experience with AWS services like EMR, Kinesis, S3, CloudFormation, Glue, API Gateway, and Lake Formation.
Hands-on working experience with AWS Athena.
Experience building data pipelines for structured and unstructured data, real-time and batch, and synchronous and asynchronous events, using MQ, Kafka, and stream processing.
Mandatory skills: Spark, Scala, AWS, Hadoop
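Since Kinesis features prominently here, a minimal hedged producer sketch with boto3; the stream name, partition key, and payload shape are invented:

```python
import json

import boto3

kinesis = boto3.client("kinesis")

def publish_event(event: dict) -> None:
    """Publish one JSON event to a (hypothetical) Kinesis stream."""
    kinesis.put_record(
        StreamName="example-events",
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=event["device_id"],  # keeps one device's events ordered
    )

publish_event({"device_id": "sensor-42", "temp_c": 21.5})
```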
Posted 4 weeks ago
4.0 - 9.0 years
7 - 8 Lacs
Bengaluru
Work from Office
Hiring for Snowflake Developer for Bangalore/Chennai.
The individual must have 4+ years of diversified experience developing applications using the Java language, with 1+ years of experience in Big Data technologies such as Spark, Kafka / Spark Streaming, HBase, Hive, Oozie, Knox, Hadoop (Hortonworks), and the related ecosystem.
Mandatory skills: Core Java, Spark, Kafka, Hive, SQL.
Relevant experience in Big Data.
Strong development experience, particularly in Spark, Scala, and Hive.
Experience in Hadoop development.
Experience with data loading tools like Sqoop.
Knowledge of workflow schedulers like Oozie and Airflow.
Proven understanding of Hadoop, HBase, and Sqoop.
Experience with AWS or Azure.
Good-to-have skills: Spark, Scala, Oozie, Hive, shell scripting, Jenkins, Ansible, GitHub, NiFi, Elastic, Kibana, Grafana, Kafka.
Experience with Kafka (see the consumer sketch below).
Good experience with the Talend ETL tool.
Experience creating and consuming RESTful web services.
Knowledge of building stream-processing systems using solutions such as Spark Streaming and Kafka.
Knowledge of the principles and components of Big Data processing and analytics is highly desired.
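For the Kafka requirement, a short hedged consumer sketch using the confluent-kafka Python client; broker, group, and topic names are placeholders:

```python
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "broker:9092",   # placeholder broker
    "group.id": "etl-consumers",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["orders"])  # placeholder topic

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None:
            continue
        if msg.error():
            print(f"Consumer error: {msg.error()}")
            continue
        print(msg.value().decode("utf-8"))
finally:
    consumer.close()
```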
Posted 4 weeks ago
3.0 - 5.0 years
5 - 7 Lacs
Bengaluru
Work from Office
Big Data Developer - 2 resources
Extensive experience in Hadoop, Hive, HBase, and Spark.
Hands-on development experience in Java and in Spark with Scala.
Strong Java programming concepts and a clear understanding of design patterns.
Experienced in implementing data munging, transformation, and processing solutions using Spark.
Note: (a) Location: Bangalore (until the COVID situation improves, work from where you are). (b) Minimum of 3 to 5 years of work experience across multiple projects.
Posted 4 weeks ago
5.0 - 7.0 years
7 - 9 Lacs
Mumbai
Work from Office
As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the Azure cloud data platform.
Responsibilities:
Build data pipelines to ingest, process, and transform data from files, streams, and databases. Process the data with Spark, Python, PySpark, and Hive, HBase, or other NoSQL databases on the Azure cloud data platform or HDFS (see the sketch below).
Develop efficient software code for multiple use cases built on the platform, leveraging the Spark framework with Python or Scala and Big Data technologies.
Develop streaming pipelines.
Work with Hadoop / Azure ecosystem components (Apache Spark, Kafka, cloud computing services, etc.) to implement scalable solutions that meet ever-increasing data volumes.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise:
Total 5-7+ years of experience in Data Management (DW, DL, data platform, lakehouse) and data engineering skills.
Minimum 4+ years of experience in Big Data technologies, with extensive data engineering experience in Spark with Python or Scala.
Minimum 3 years of experience on cloud data platforms on Azure.
Experience with Databricks / Azure HDInsight / Azure Data Factory, Synapse, and SQL Server DB.
Exposure to streaming solutions and message brokers like Kafka.
Experience with Unix/Linux commands and basic work experience in shell scripting.
Preferred technical and professional experience:
Certification in Azure, and Databricks or Cloudera Spark certified developers.
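For the Azure flavor of this pipeline, a hedged PySpark sketch reading from an ADLS Gen2 container; the storage account, container, and table names are placeholders, and writing with saveAsTable assumes a metastore-backed environment such as Databricks:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("adls-ingest").getOrCreate()

# abfss://<container>@<account>.dfs.core.windows.net/<path> (all placeholders)
events = spark.read.parquet(
    "abfss://raw@examplestorageacct.dfs.core.windows.net/events/"
)

# A simple curated aggregate, registered as a managed table
counts = events.groupBy("event_type").count()
counts.write.mode("overwrite").saveAsTable("curated.event_counts")
```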
Posted 4 weeks ago
5.0 - 8.0 years
7 - 10 Lacs
Mumbai
Work from Office
As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the Azure cloud data platform.
Responsibilities:
Build data pipelines to ingest, process, and transform data from files, streams, and databases. Process the data with Spark, Python, PySpark, and Hive, HBase, or other NoSQL databases on the Azure cloud data platform or HDFS.
Develop efficient software code for multiple use cases built on the platform, leveraging the Spark framework with Python or Scala and Big Data technologies.
Develop streaming pipelines.
Work with Hadoop / Azure ecosystem components (Apache Spark, Kafka, cloud computing services, etc.) to implement scalable solutions that meet ever-increasing data volumes.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise:
Total 5-8 years of experience in Data Management (DW, DL, data platform, lakehouse) and data engineering skills.
Minimum 4+ years of experience in Big Data technologies, with extensive data engineering experience in Spark with Python or Scala.
Minimum 3 years of experience on cloud data platforms on Azure.
Experience with Databricks / Azure HDInsight / Azure Data Factory, Synapse, and SQL Server DB.
Good to excellent SQL skills.
Preferred technical and professional experience:
Certification in Azure, and Databricks or Cloudera Spark certified developers.
Experience with Databricks / Azure HDInsight / Azure Data Factory, Synapse, and SQL Server DB.
Knowledge or experience of Snowflake will be an added advantage.
Posted 4 weeks ago
4.0 - 9.0 years
6 - 11 Lacs
Kochi
Work from Office
As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS cloud data platform.
Responsibilities:
Build data pipelines to ingest, process, and transform data from files, streams, and databases. Process the data with Spark, Python, PySpark, Scala, and Hive, HBase, or other NoSQL databases on cloud data platforms (AWS) or HDFS.
Develop efficient software code for multiple use cases built on the platform, leveraging the Spark framework with Python or Scala and Big Data technologies.
Develop streaming pipelines.
Work with Hadoop / AWS ecosystem components (Apache Spark, Kafka, cloud computing services, etc.) to implement scalable solutions that meet ever-increasing data volumes.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise:
Minimum 4+ years of experience in Big Data technologies, with extensive data engineering experience in Spark with Python or Scala.
Minimum 3 years of experience on cloud data platforms on AWS.
Experience with AWS EMR / AWS Glue / Databricks, AWS Redshift, and DynamoDB.
Good to excellent SQL skills.
Exposure to streaming solutions and message brokers like Kafka.
Preferred technical and professional experience:
Certification in AWS, and Databricks or Cloudera Spark certified developers.
Posted 4 weeks ago
5.0 - 10.0 years
22 - 27 Lacs
Bengaluru
Work from Office
Create solution outlines and macro designs describing end-to-end product implementation in data platforms, including system integration, data ingestion, data processing, the serving layer, design patterns, and platform architecture principles.
Contribute to pre-sales and sales support through RfP responses, solution architecture, planning, and estimation.
Contribute to reusable component / asset / accelerator development to support capability development.
Participate in customer presentations as a platform architect / subject matter expert on Big Data, Azure Cloud, and related technologies.
Participate in customer PoCs to deliver the outcomes.
Participate in delivery reviews / product reviews and quality assurance, and act as a design authority.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise:
Experience designing data products that provide descriptive, prescriptive, and predictive analytics to end users or other systems.
Experience in data engineering and architecting data platforms.
Experience architecting and implementing data platforms on the Azure cloud platform. Experience on Azure is mandatory (ADLS Gen1/Gen2, Data Factory, Databricks, Synapse Analytics, Azure SQL, Cosmos DB, Event Hubs, Snowflake), plus Azure Purview, Microsoft Fabric, Kubernetes, Terraform, and Airflow.
Experience with the Big Data stack (Hadoop ecosystem: Hive, HBase, Kafka, Spark, Scala/PySpark, Python, etc.) with Cloudera or Hortonworks.
Preferred technical and professional experience:
Experience architecting complex data platforms on the Azure cloud platform and on-prem.
Experience with and exposure to implementing Data Fabric and Data Mesh concepts and solutions such as Microsoft Fabric, Starburst, Denodo, IBM Data Virtualization, Talend, or Tibco Data Fabric.
Exposure to data cataloging and governance solutions like Collibra, Alation, Watson Knowledge Catalog, Databricks Unity Catalog, Apache Atlas, and the Snowflake data glossary.
Posted 4 weeks ago