
11 Spark Streaming Jobs

JobPe aggregates listings so you can find and compare openings in one place, but applications are submitted directly on the original job portal.

4.0 - 8.0 years

0 Lacs

Maharashtra

On-site

Role Overview: As an AWS Senior Developer at PwC - AC, you will collaborate with the Offshore Manager and the Onsite Business Analyst to understand the requirements and take charge of implementing end-to-end cloud data engineering solutions such as an Enterprise Data Lake and Data Hub in AWS. The role calls for strong proficiency in AWS cloud technology, excellent planning and organizational skills, and the ability to work as a cloud developer/lead on an agile team delivering automated cloud solutions.

Key Responsibilities:
- Proficiency in Azure Data Services, Databricks, and Snowflake
- Deep understanding of traditional and modern data architecture and processing concepts
- Proficiency in Azure ADLS, Databricks, Data Flows, HDInsight, and Azure Analysis Services
- Hands-on experience with Snowflake utilities, SnowSQL, Snowpipe, and ETL data pipelines using Python/Java
- Building stream-processing systems with solutions such as Storm or Spark Streaming
- Designing and implementing scalable ETL/ELT pipelines using Databricks and Apache Spark
- Optimizing data workflows and ensuring efficient data processing
- Understanding big data use cases and Lambda/Kappa design patterns
- Implementing Big Data solutions using the Microsoft Data Platform and Azure Data Services
- Exposure to open-source technologies such as Apache Spark, Hadoop, NoSQL, Kafka, and Solr/Elasticsearch
- Driving adoption and rollout of Power BI dashboards for finance stakeholders
- Well-versed in quality processes and their implementation
- Experience with application DevOps tools such as Git, CI/CD frameworks, Jenkins, or GitLab
- Guiding the assessment and implementation of finance data marketplaces
- Good communication and presentation skills

Qualifications Required:
- BE / B.Tech / MCA / M.Sc / M.E / M.Tech / MBA
- AWS Architecture certification desirable
- Experience building stream-processing systems with Storm or Spark Streaming
- Experience with Big Data ML toolkits such as Mahout, SparkML, or H2O
- Knowledge of Python
- Experience in offshore/onsite engagements
- Experience with AWS services such as Step Functions and Lambda
- Good project management skills, with consulting experience in complex program delivery
- Good to have: knowledge of cloud platforms (AWS, GCP, Informatica Cloud, Oracle Cloud) and cloud data warehouses such as Snowflake and DBT

Additional Details:
- Travel Requirements: Travel to client locations may be required as per project needs.
- Line of Service: Advisory
- Horizontal: Technology Consulting
- Designation: Senior Associate
- Location: Anywhere in India

Apply now if you believe you can contribute to PwC's innovative and collaborative environment and be part of a globally recognized firm dedicated to excellence and diversity.
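For context on the stream-processing skill this listing highlights, here is a minimal PySpark Structured Streaming sketch that reads JSON events from Kafka and lands them in a data lake. It assumes Spark's Kafka connector package (spark-sql-kafka) is on the classpath; the broker address, topic, message schema, and S3 paths are hypothetical placeholders, not part of the posting.

```python
# A minimal sketch of a Spark Structured Streaming job reading from Kafka.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

spark = (SparkSession.builder
         .appName("orders-stream")  # hypothetical app name
         .getOrCreate())

# Expected shape of each Kafka message value (an assumption for illustration).
schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
       .option("subscribe", "orders")                     # hypothetical topic
       .load())

# Kafka delivers the value as bytes; cast to string and parse the JSON payload.
parsed = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(from_json(col("json"), schema).alias("e"))
          .select("e.*"))

# Land micro-batches as Parquet in the data lake (paths are placeholders).
query = (parsed.writeStream
         .format("parquet")
         .option("path", "s3a://example-lake/orders/")
         .option("checkpointLocation", "s3a://example-lake/_checkpoints/orders/")
         .outputMode("append")
         .start())
query.awaitTermination()
```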

Posted 3 days ago

Apply

5.0 - 9.0 years

0 Lacs

Nagpur, Maharashtra

On-site

The position is a full-time job with rotational shifts, based in Nagpur, Pune, or Bangalore. We are looking to fill 4 positions with candidates who have 5 to 8 years of experience.

As an AWS Data Engineer, you will lead development activities for the data engineering team. You will collaborate with other teams such as application management and product delivery, working closely with technical leads, product managers, and support teams. Your role will involve providing guidance to the development, support, and product delivery teams. Additionally, you will lead the implementation of tools and technologies to drive cost-efficient architecture and infrastructure.

As an Azure Data Engineer, your responsibilities will include creating and maintaining optimal data pipelines, assembling large, complex data sets that meet business requirements, and identifying opportunities for process improvements and automation. You will develop data tools for analytics and data science teams to optimize product performance and build analytics tools that deliver actionable insights into business metrics. Collaboration with stakeholders from various teams will also be essential to address data-related technical issues and support data infrastructure needs.

The ideal candidate for the AWS Data Engineer position should have experience with AWS services such as S3, Glue, SNS, SQS, Lambda, Redshift, and RDS. Proficiency in programming, especially Python, is required, along with strong skills in designing complex SQL queries and optimizing data retrieval. Knowledge of Spark, PySpark, Hadoop, Hive, and Spark SQL is also essential.

For the Azure Data Engineer role, candidates should have experience with Azure cloud services and with developing Big Data applications using Spark, Hive, Sqoop, Kafka, and MapReduce. Familiarity with stream-processing systems such as Spark Streaming and Storm will be advantageous.
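As an illustration of the AWS skills listed above (S3, Redshift, Python), here is a minimal sketch that loads staged S3 files into Redshift through boto3's Redshift Data API. The cluster, database, user, IAM role, bucket, and table names are all hypothetical.

```python
# A minimal sketch of an S3-to-Redshift load using the Redshift Data API.
import boto3

client = boto3.client("redshift-data", region_name="ap-south-1")

# COPY pulls staged Parquet files from S3 into the target table.
copy_sql = """
    COPY analytics.page_views
    FROM 's3://example-bucket/page_views/2024/05/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-load'
    FORMAT AS PARQUET;
"""

resp = client.execute_statement(
    ClusterIdentifier="example-cluster",
    Database="analytics",
    DbUser="etl_user",
    Sql=copy_sql,
)
print("statement id:", resp["Id"])
```

The Data API runs the COPY asynchronously; in a real pipeline you would poll `describe_statement` with the returned Id before treating the load as complete.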

Posted 6 days ago

Apply

8.0 - 12.0 years

0 Lacs

Karnataka

On-site

At PwC, the focus in data and analytics engineering is on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. You play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will concentrate on designing and building data infrastructure and systems that enable efficient data processing and analysis. Your responsibilities include developing and implementing data pipelines, data integration, and data transformation solutions.

As an AWS Architect / Manager at PwC - AC, you will interact with the Offshore Manager/Onsite Business Analyst to understand the requirements and will be responsible for end-to-end implementation of cloud data engineering solutions such as an Enterprise Data Lake and Data Hub in AWS. Strong experience in AWS cloud technology is required, along with planning and organization skills. You will work as a cloud architect/lead on an agile team, provide automated cloud solutions, and routinely monitor systems to ensure that all business goals are met as per the business requirements.

**Position Requirements:**

**Must Have:**
- Experience in architecting and delivering highly scalable, distributed, cloud-based enterprise data solutions
- Strong expertise in the end-to-end implementation of cloud data engineering solutions such as Enterprise Data Lake and Data Hub in AWS
- Hands-on experience with Snowflake utilities, SnowSQL, Snowpipe, ETL data pipelines, and Big Data modeling techniques using Python/Java
- Design of scalable data architectures with Snowflake, integrating cloud technologies (AWS, Azure, GCP) and ETL/ELT tools such as DBT
- Guiding teams in proper data modeling (star and snowflake schemas), transformation, security, and performance optimization
- Experience loading disparate data sets and translating complex functional and technical requirements into detailed designs
- Deploying Snowflake features such as data sharing, events, and lakehouse patterns
- Experience with data security, data access controls, and their design
- Understanding of relational as well as NoSQL data stores, methods, and approaches (star and snowflake schemas, dimensional modeling)
- Good knowledge of AWS, Azure, or GCP data storage and management technologies such as S3, Blob/ADLS, and Google Cloud Storage
- Proficiency in Lambda and Kappa architectures
- Strong AWS hands-on expertise with a programming background, preferably Python/Scala
- Knowledge of Big Data frameworks and related technologies, with experience in Hadoop and Spark
- Strong experience with AWS compute services such as EMR, Glue, and SageMaker, and storage services such as S3, Redshift, and DynamoDB
- Experience with AWS streaming services such as Kinesis, SQS, and MSK
- Troubleshooting and performance-tuning experience across the Spark framework: Spark Core, Spark SQL, and Spark Streaming
- Experience with flow tools such as Airflow, NiFi, or Luigi
- Knowledge of application DevOps tools (Git, CI/CD frameworks), with experience in Jenkins or GitLab and rich experience in source code management tools such as CodePipeline, CodeBuild, and CodeCommit
- Experience with AWS CloudWatch, AWS CloudTrail, AWS Account Config, and AWS Config Rules
- Understanding of cloud data migration processes, methods, and the project lifecycle
- Business/domain knowledge in Financial Services, Healthcare, Consumer Markets, Industrial Products, Telecommunication, Media and Technology, or Deal Advisory, along with technical expertise
- Experience leading technical teams, guiding and mentoring team members
- Analytical and problem-solving skills
- Communication and presentation skills
- Understanding of data modeling and data architecture

**Desired Knowledge/Skills:**
- Experience in building stream-processing systems using solutions such as Storm or Spark Streaming
- Experience with Big Data ML toolkits such as Mahout, SparkML, or H2O
- Knowledge of Python
- AWS Architecture certification desirable
- Experience in offshore/onsite engagements
- Experience with AWS services such as Step Functions and Lambda
- Project management skills, with consulting experience in complex program delivery

**Professional and Educational Background:** BE/B.Tech/MCA/M.Sc/M.E/M.Tech/MBA

**Minimum Years of Experience Required:** Candidates with 8-12 years of hands-on experience
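Given the emphasis on Snowflake utilities in the requirements above, here is a minimal sketch of a COPY INTO load issued from Python using the snowflake-connector-python package (one possible tool; SnowSQL or Snowpipe could fill the same role). The account, credentials, stage, and table names are hypothetical.

```python
# A minimal sketch of loading staged files into Snowflake from Python.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",   # hypothetical account identifier
    user="ETL_USER",
    password="***",              # use a secrets manager in practice
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="RAW",
)

try:
    cur = conn.cursor()
    # COPY INTO pulls files from a named external stage into the target table.
    cur.execute("""
        COPY INTO RAW.ORDERS
        FROM @ORDERS_STAGE
        FILE_FORMAT = (TYPE = PARQUET)
        MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
    """)
    print(cur.fetchall())  # per-file load results
finally:
    conn.close()
```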

Posted 4 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

Karnataka

On-site

As part of the data and analytics engineering team at PwC, your focus will be on utilizing advanced technologies and techniques to create robust data solutions for clients. Your role will involve transforming raw data into actionable insights, enabling informed decision-making, and contributing to business growth. Specifically in data engineering at PwC, you will be responsible for designing and constructing data infrastructure and systems that facilitate efficient data processing and analysis. This includes the development and implementation of data pipelines, data integration, and data transformation solutions.

At PwC - AC, we are seeking an Azure Manager specializing in Data & AI, with a strong background in managing end-to-end implementations of Azure Databricks within large-scale Data & AI programs. In this role, you will be involved in architecting, designing, and deploying scalable and secure solutions that meet business requirements, encompassing ETL, data integration, and migration. Collaboration with cross-functional, geographically dispersed teams and clients will be key to understanding strategic needs and translating them into effective technology solutions. Your responsibilities will span technical project scoping, delivery planning, team leadership, and ensuring the timely execution of high-quality solutions. Utilizing big data technologies, you will create scalable, fault-tolerant components, engage stakeholders, overcome obstacles, and stay abreast of emerging technologies to enhance client ROI.

Candidates applying for this role should possess 8-12 years of hands-on experience and meet the following position requirements:

- Proficiency in designing, architecting, and implementing scalable Azure data analytics solutions utilizing Azure Databricks
- Expertise in Azure Databricks, including Spark architecture and optimization
- Strong grasp of Azure cloud computing and big data technologies
- Experience in traditional and modern data architecture and processing concepts, encompassing relational databases, data warehousing, big data, NoSQL, and business analytics
- Proficiency in Azure ADLS, Databricks, Data Flows, HDInsight, and Azure Analysis Services
- Ability to build stream-processing systems using solutions such as Storm or Spark Streaming
- Practical knowledge of designing and building near-real-time and batch data pipelines, with expertise in SQL and data modeling within an Agile development process
- Experience in the architecture, design, implementation, and support of complex application architectures
- Hands-on experience implementing Big Data solutions using the Microsoft Data Platform and Azure Data Services
- Familiarity with working in a DevOps environment using tools such as Chef, Puppet, or Terraform
- Strong analytical and troubleshooting skills, along with proficiency in quality processes and implementation
- Excellent communication skills and business/domain knowledge in Financial Services, Healthcare, Consumer Markets, Industrial Products, Telecommunication, Media and Technology, or Deal Advisory
- Familiarity with application DevOps tools such as Git, CI/CD frameworks, Jenkins, or GitLab
- Good understanding of data modeling and data architecture

Certification in Data Engineering on Microsoft Azure (DP-200/201/203) is required.

Additional Information:
- Travel Requirements: Travel to client locations may be necessary based on project needs.
- Line of Service: Advisory
- Horizontal: Technology Consulting
- Designation: Manager
- Location: Bangalore, India

In addition to the above, the following skills are considered advantageous:
- Cloud expertise in AWS, GCP, Informatica Cloud, and Oracle Cloud
- Knowledge of cloud data warehouse technologies such as Snowflake and Databricks
- Certifications in Azure Databricks
- Familiarity with open-source technologies such as Apache Spark, Hadoop, NoSQL, Kafka, and Solr/Elasticsearch
- Data engineering skills in Java, Python, PySpark, and R
- Data visualization proficiency in Tableau and Qlik

Accepted education qualifications include BE/B.Tech/MCA/M.Sc/M.E/M.Tech/MBA.
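To illustrate the kind of Azure Databricks pipeline work described above, here is a minimal batch ETL sketch: read raw JSON from ADLS Gen2 and write a curated Delta table. It assumes it runs in a Databricks notebook where `spark` is predefined and Delta Lake is available; the storage account, container, columns, and table names are hypothetical.

```python
# A minimal sketch of a batch ETL step on Azure Databricks.
from pyspark.sql.functions import col, to_date

# Read raw JSON events from an ADLS Gen2 container (path is a placeholder).
raw = spark.read.json("abfss://raw@examplelake.dfs.core.windows.net/events/")

# Light curation: derive a partition date and drop malformed rows.
curated = (raw
           .withColumn("event_date", to_date(col("event_ts")))
           .filter(col("event_type").isNotNull()))

# Persist as a partitioned Delta table registered in the metastore.
(curated.write
 .format("delta")
 .mode("overwrite")
 .partitionBy("event_date")
 .saveAsTable("analytics.events_curated"))
```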

Posted 4 weeks ago

Apply

5.0 - 12.0 years

0 Lacs

Hyderabad, Telangana

On-site

You have a great opportunity to join as a Data Software Engineer with 5-12 years of experience in Big Data and related technologies. We are looking for candidates with an expert-level understanding of distributed computing principles and hands-on experience in Apache Spark, along with proficiency in Python. You should also have experience with technologies such as Hadoop, MapReduce, HDFS, Sqoop, Apache Storm, Spark Streaming, Kafka, Hive, and Impala, and with integrating data from various sources such as RDBMS, ERP systems, and files. Additionally, knowledge of NoSQL databases, ETL techniques, SQL queries, joins, stored procedures, relational schemas, and performance tuning of Spark jobs is required.

Moreover, you must have experience with native cloud data services such as Azure Databricks and the ability to lead a team efficiently. Familiarity with Agile methodology and with designing and implementing Big Data solutions would be an added advantage.

This full-time position is based in Hyderabad and requires candidates who are available for face-to-face interactions. If you meet these requirements and are passionate about working with cutting-edge technologies in the field of Big Data, we would love to hear from you.
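Since the listing calls out performance tuning of Spark jobs, here is a minimal sketch of two common tuning moves: broadcasting a small dimension table to avoid shuffling the large side of a join, and right-sizing shuffle partitions. Paths, column names, and the partition count are hypothetical.

```python
# A minimal sketch of common Spark performance-tuning techniques.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = (SparkSession.builder
         .appName("tuning-demo")
         .config("spark.sql.shuffle.partitions", "200")  # tune to cluster size
         .getOrCreate())

facts = spark.read.parquet("s3a://example-lake/facts/")      # large table
dims = spark.read.parquet("s3a://example-lake/dim_region/")  # small table

# A broadcast join ships the small side to every executor, so the large
# fact table is never shuffled across the network.
joined = facts.join(broadcast(dims), on="region_id", how="left")

# Cache only when the result is reused by several downstream actions.
joined.cache()
joined.count()
```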

Posted 1 month ago

Apply

4.0 - 8.0 years

0 Lacs

Karnataka

On-site

As a Senior AWS Data Engineer - Cloud Data Platform at Teamware Solutions, a division of Quantum Leap Consulting Pvt. Ltd, located in Bangalore, you will be responsible for end-to-end implementation of cloud data engineering solutions such as an Enterprise Data Lake and Data Hub in AWS. Working onsite in an office environment five days a week, you will collaborate with the Offshore Manager and Onsite Business Analyst to understand the requirements and deliver scalable, distributed, cloud-based enterprise data solutions.

You should have a strong background in AWS cloud technology, with 4-8 years of hands-on experience. Proficiency in architecting and delivering highly scalable solutions is a must, along with expertise in cloud data engineering solutions, Lambda or Kappa architectures, data management concepts, and data modeling. You should be proficient in AWS services such as EMR, Glue, S3, Redshift, and DynamoDB, and have experience with Big Data frameworks such as Hadoop and Spark. Additionally, you must have hands-on experience with AWS compute and storage services, AWS streaming services, troubleshooting and performance tuning in the Spark framework, and knowledge of application DevOps tools such as Git and CI/CD frameworks. Familiarity with AWS CloudWatch, CloudTrail, Account Config, Config Rules, security, key management, and data migration processes, together with strong analytical skills, is required. Good communication and presentation skills are essential for this role.

Desired skills include experience in building stream-processing systems, Big Data ML toolkits, Python, offshore/onsite engagements, flow tools such as Airflow, NiFi, or Luigi, and AWS services such as Step Functions and Lambda. A professional background of BE/B.Tech/MCA/M.Sc/M.E/M.Tech/MBA is preferred, and an AWS Certified Data Engineer certification is recommended.

If you are interested in this position and meet the qualifications mentioned above, please send your resume to netra.s@twsol.com.
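As an illustration of the flow tools mentioned in the desired skills (Airflow, NiFi, Luigi), here is a minimal Apache Airflow DAG sketch, assuming Airflow 2.4 or later. The task logic, IDs, and schedule are hypothetical placeholders.

```python
# A minimal sketch of a two-step orchestration DAG in Apache Airflow.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull source files")   # placeholder for the real extract step


def load():
    print("load into the lake")  # placeholder for the real load step


with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_load = PythonOperator(task_id="load", python_callable=load)
    # The >> operator declares the dependency: extract runs before load.
    t_extract >> t_load
```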

Posted 1 month ago

Apply

5.0 - 12.0 years

0 Lacs

Coimbatore, Tamil Nadu

On-site

As a Data Software Engineer at KG Invicta Services Pvt Ltd, you will leverage your 5-12 years of experience in Big Data and related technologies to drive impactful solutions. Your expertise in distributed computing principles and Apache Spark, coupled with hands-on programming skills in Python, will be instrumental in designing and implementing efficient Big Data solutions.

You will demonstrate proficiency in a variety of tools and technologies, including Hadoop v2, MapReduce, HDFS, Sqoop, Apache Storm, Spark Streaming, Kafka, RabbitMQ, Hive, Impala, and NoSQL databases such as HBase, Cassandra, and MongoDB. Your ability to integrate data from diverse sources such as RDBMS, ERP systems, and files, along with knowledge of ETL techniques and frameworks, will ensure seamless data processing and analysis. Performance tuning of Spark jobs, familiarity with cloud data services such as AWS and Azure Databricks, and the capability to lead a team effectively will be key aspects of your role. Your expertise in SQL queries, joins, stored procedures, and relational schemas will contribute to the optimization of data querying processes.

Your experience with Agile methodology and a deep understanding of Big Data querying tools will enable you to contribute significantly to the development and enhancement of stream-processing systems. You will collaborate with cross-functional teams to deliver high-quality solutions that meet business requirements. If you are passionate about leveraging data to drive innovation and possess a strong foundation in Spark, Python, and cloud technologies, we invite you to join our team as a Data Software Engineer. This is a full-time position with a day shift schedule, and the work location is in person.

Category: ML/AI Engineers, Data Scientist, Software Engineer, Data Engineer
Expertise: Python (5 years), AWS (3 years), Apache Spark (5 years), PySpark (3 years), GCP (3 years), Azure (3 years), Apache Kafka (3 years)
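To illustrate the RDBMS integration skill described above, here is a minimal sketch that ingests a table from a relational source over JDBC with Spark and lands it in a Hive-managed table. It assumes the appropriate JDBC driver jar and a Hive metastore are available; the connection details and table names are hypothetical.

```python
# A minimal sketch of integrating an RDBMS source with Spark over JDBC.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("rdbms-ingest")
         .enableHiveSupport()   # assumes a Hive metastore is configured
         .getOrCreate())

orders = (spark.read.format("jdbc")
          .option("url", "jdbc:postgresql://db.example.com:5432/erp")
          .option("dbtable", "public.orders")
          .option("user", "etl_user")
          .option("password", "***")    # use a secrets manager in practice
          .option("fetchsize", "10000") # stream rows instead of one big read
          .load())

# Land the snapshot in a Hive-managed staging table.
orders.write.mode("overwrite").saveAsTable("staging.orders")
```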

Posted 1 month ago

Apply

5.0 - 12.0 years

0 Lacs

Coimbatore, Tamil Nadu

On-site

As a Data Software Engineer, you will draw on your 5-12 years of experience in Big Data and related technologies to contribute to the success of projects in Chennai and Coimbatore in a hybrid work mode. You should possess an expert-level understanding of distributed computing principles and strong knowledge of Apache Spark, with hands-on programming skills in Python.

Your role will involve working with technologies such as Hadoop v2, MapReduce, HDFS, Sqoop, Apache Storm, and Spark Streaming to build stream-processing systems. You should have a good grasp of Big Data querying tools such as Hive and Impala, as well as experience in integrating data from various sources including RDBMS, ERP systems, and files. Experience with NoSQL databases such as HBase, Cassandra, and MongoDB, along with knowledge of ETL techniques and frameworks, will be essential for this role.

You will be tasked with performance tuning of Spark jobs, working with Azure Databricks, and leading a team efficiently. Additionally, your expertise in designing and implementing Big Data solutions, along with a strong understanding of SQL queries, joins, stored procedures, and relational schemas, will be crucial. As a practitioner of Agile methodology, you will play a key role in the successful delivery of data-driven projects.

Posted 1 month ago

Apply

5.0 - 12.0 years

0 Lacs

Coimbatore, Tamil Nadu

On-site

You should have 5-12 years of experience in Big Data and related technologies. Your expertise should include a deep understanding of distributed computing principles and strong knowledge of Apache Spark. Proficiency in Python programming is required, along with experience using technologies such as Hadoop v2, MapReduce, HDFS, Sqoop, Apache Storm, and Spark Streaming for building stream-processing systems.

You should have a good understanding of Big Data querying tools such as Hive and Impala, as well as experience in integrating data from various sources such as RDBMS, ERP systems, and files. Knowledge of SQL queries, joins, stored procedures, and relational schemas is essential. Experience with NoSQL databases such as HBase, Cassandra, and MongoDB, along with ETL techniques and frameworks, is also expected.

The role requires performance tuning of Spark jobs, experience with Azure Databricks, and the ability to lead a team efficiently. Designing and implementing Big Data solutions and following Agile methodology are key aspects of this position.

Posted 1 month ago

Apply

5.0 - 12.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You should have 5-12 years of experience in Big Data and related technologies, with expertise in distributed computing principles. Your skills should include an expert-level understanding of Apache Spark and hands-on programming with Python. Proficiency in Hadoop v2, MapReduce, HDFS, and Sqoop is required. Experience in building stream-processing systems using technologies such as Apache Storm or Spark Streaming, as well as working with messaging systems such as Kafka or RabbitMQ, will be beneficial.

A good understanding of Big Data querying tools such as Hive and Impala, along with integration of data from multiple sources including RDBMS, ERP systems, and files, is necessary. You should possess knowledge of SQL queries, joins, stored procedures, and relational schemas. Experience with NoSQL databases such as HBase, Cassandra, and MongoDB, along with ETL techniques and frameworks, is expected. Performance tuning of Spark jobs and familiarity with native cloud data services such as AWS or Azure Databricks are essential.

The role requires the ability to lead a team efficiently, design and implement Big Data solutions, and work as a practitioner of Agile methodology. This position falls under the Data Engineer category and is also suitable for ML/AI engineers, data scientists, and software engineers.
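For the messaging-system experience this listing mentions (Kafka or RabbitMQ), here is a minimal consumer sketch using the kafka-python package (one possible client library). The broker, topic, group, and payload fields are hypothetical.

```python
# A minimal sketch of consuming JSON messages from a Kafka topic.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders",                                  # hypothetical topic
    bootstrap_servers=["broker:9092"],         # hypothetical broker
    group_id="etl-consumers",
    auto_offset_reset="earliest",
    # Decode each raw message value into a Python dict.
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

# Iterating the consumer blocks and yields messages as they arrive.
for message in consumer:
    event = message.value
    print(message.partition, message.offset, event.get("order_id"))
```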

Posted 1 month ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

You will be part of our data engineering team in Gurgaon, Noida, or Pune, contributing to the development and management of large enterprise data and analytics platforms. Your role will involve collaborating with the data engineering and data science teams to implement scalable data lakes, data ingestion platforms, machine learning analytics platforms, and more.

With at least 3 years of industry experience, you will be responsible for creating end-to-end data solutions and optimal data processing pipelines for handling large volumes of diverse data types. Proficiency in Python, including knowledge of design patterns and strong design skills, is essential. You should have expertise in working with PySpark DataFrames and Pandas DataFrames and in developing efficient data manipulation tasks. Experience in building RESTful web services and API platforms, SQL and NoSQL databases, and stream-processing systems such as Spark Streaming and Kafka will be crucial. You will collaborate with the data science and infrastructure teams to deploy machine learning solutions in production environments.

It would be advantageous if you have experience with testing libraries such as pytest, along with knowledge of Docker, Kubernetes, model versioning with MLflow, and microservices libraries. Familiarity with machine learning algorithms and libraries is a plus. Our ideal candidate is proactive, independent, and enjoys problem-solving. Continuous learning is encouraged, as we are a rapidly growing company. Being a team player and an effective communicator is essential for success in this role.
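Since the listing mentions testing libraries like pytest, here is a minimal sketch of unit-testing a Pandas transformation; the transform itself is a hypothetical example. Running `pytest` in the project directory discovers and executes the test.

```python
# A minimal sketch of unit-testing a Pandas transformation with pytest.
import pandas as pd


def add_total(df: pd.DataFrame) -> pd.DataFrame:
    """Add a line-total column computed from quantity and unit price."""
    out = df.copy()  # avoid mutating the caller's frame
    out["total"] = out["quantity"] * out["unit_price"]
    return out


def test_add_total():
    df = pd.DataFrame({"quantity": [2, 3], "unit_price": [5.0, 1.5]})
    result = add_total(df)
    assert result["total"].tolist() == [10.0, 4.5]
```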

Posted 1 month ago

Apply