8.0 - 12.0 years
0 Lacs
bangalore, karnataka
On-site
Role Overview:
You will be responsible for architecting and delivering highly scalable, distributed, cloud-based enterprise data solutions. Your role will involve designing scalable data architectures with Snowflake, integrating cloud technologies such as AWS, Azure, and GCP, and ETL/ELT tools like DBT. Additionally, you will guide teams in proper data modeling, transformation, security, and performance optimization.

Key Responsibilities:
- Architect and deliver highly scalable, distributed, cloud-based enterprise data solutions
- Design scalable data architectures with Snowflake and integrate cloud technologies like AWS, Azure, and GCP, and ETL/ELT tools such as DBT
- Guide teams in proper data modeling (star and snowflake schemas), transformation, security, and performance optimization
- Load data from disparate data sets and translate complex functional and technical requirements into detailed designs
- Deploy Snowflake features such as data sharing, events, and lakehouse patterns
- Implement data security and data access controls and design
- Understand relational and NoSQL data stores, methods, and approaches (star and snowflake schemas, dimensional modeling)
- Utilize AWS, Azure, or GCP data storage and management technologies such as S3, Blob/ADLS, and Google Cloud Storage
- Implement Lambda and Kappa architectures
- Utilize Big Data frameworks and related technologies, with mandatory experience in Hadoop and Spark
- Utilize AWS compute services such as EMR, Glue, and SageMaker, as well as storage services such as S3, Redshift, and DynamoDB
- Work with AWS streaming services such as Kinesis, SQS, and MSK
- Troubleshoot and perform performance tuning in the Spark framework - Spark Core, Spark SQL, and Spark Streaming
- Work with workflow tools such as Airflow, NiFi, or Luigi
- Apply knowledge of application DevOps tools (Git, CI/CD frameworks), with experience in Jenkins or GitLab and rich experience in source code management tools such as CodePipeline, CodeBuild, and CodeCommit
- Experience with AWS CloudWatch, AWS CloudTrail, AWS Config, and AWS Config Rules

Qualifications Required:
- 8-12 years of relevant experience
- Hands-on experience with Snowflake utilities, SnowSQL, Snowpipe, ETL data pipelines, and Big Data modeling techniques using Python/Java
- Strong expertise in the end-to-end implementation of cloud data engineering solutions such as an Enterprise Data Lake or Data Hub in AWS
- Proficiency in AWS, Databricks, and Snowflake data warehousing, including SQL and Snowpipe
- Experience in data security, data access controls, and design
- Strong AWS hands-on expertise with a programming background, preferably Python/Scala
- Good knowledge of Big Data frameworks and related technologies, with mandatory experience in Hadoop and Spark
- Good experience with AWS compute services such as EMR, Glue, and SageMaker, and storage services such as S3, Redshift, and DynamoDB
- Experience with AWS streaming services such as Kinesis, SQS, and MSK
- Troubleshooting and performance tuning experience in the Spark framework - Spark Core, Spark SQL, and Spark Streaming
- Experience with one of the workflow tools such as Airflow, NiFi, or Luigi
- Good knowledge of application DevOps tools (Git, CI/CD frameworks), with experience in Jenkins or GitLab and rich experience in source code management tools such as CodePipeline, CodeBuild, and CodeCommit
- Experience with AWS CloudWatch, AWS CloudTrail, AWS Config, and AWS Config Rules

Kindly share your profile at dhamma.b.bhawsagar@pwc.com if you are interested in this opportunity.
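As an illustration of the kind of Snowflake loading work this role describes, here is a minimal sketch using the snowflake-connector-python package; the account, credentials, stage, and table names are hypothetical placeholders, not details from the posting.

```python
# Minimal sketch: stage a local extract and bulk-load it into a Snowflake
# dimension table. Connection details, stage, and table names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",      # hypothetical account identifier
    user="etl_user",
    password="...",            # use a secrets manager in practice
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="STAGING",
)
cur = conn.cursor()
try:
    # Upload the file to an internal stage, then bulk-copy it into the table.
    cur.execute("PUT file:///tmp/customers.csv @etl_stage AUTO_COMPRESS=TRUE")
    cur.execute(
        "COPY INTO dim_customer "
        "FROM @etl_stage/customers.csv.gz "
        "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)"
    )
finally:
    cur.close()
    conn.close()
```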
Posted 3 days ago
5.0 - 7.0 years
0 Lacs
hyderabad, telangana, india
On-site
Job Description
We are looking for a passionate Data Engineer to join our agile team. You will be based in Hyderabad on a hybrid work schedule and report to the Director of Engineering. You will help build high-quality solutions that meet the highest technical standards and deliver value to our customers. The ideal candidate will have 5+ years of experience in the software development lifecycle, a strong understanding of business needs, and a sense of ownership for the products and services we deliver.
- Collaborate with an agile team to develop quality solutions within deadlines.
- Support and enhance the full product lifecycle through effective collaboration.
- Review proposals, evaluate alternatives, provide estimates, and make recommendations.
- Serve as an expert on applications and provide technical support.
- Revise, update, refactor, and debug both new and existing codebases.
- Support the development of other team members.

Qualifications
- Degree, HND, or HNC in a software development discipline, or equivalent commercial experience in developing applications deployed on AWS.
- Minimum of 5 years of experience.
- Expertise with the full development lifecycle.
- Able to explain solutions to both technical and non-technical audiences.
- Ability to write clean, scalable code, with a focus on design patterns and best practices.
- Proficiency with application lifecycle management tools (e.g., Git, Jira, Confluence).
- Familiarity with CI/CD pipeline tools.
- Expertise in Scala development and the Spark framework.
- Familiarity with AWS technologies.
- Commitment to staying updated with the latest terminology, concepts, and best practices.

Desired Skills:
- Understanding of Agile methodologies.
- AWS Developer Certification.
- Expertise in Python and OpenShift/Kubernetes.
- Proficiency with automated testing tools (e.g., ScalaTest).
- Knowledge of RESTful and microservice architectures.
- Expertise in Terraform or CloudFormation.
- Familiarity with SonarQube and Veracode.

Additional Information
Our uniqueness is that we celebrate yours. Experian's culture and people are important differentiators. We take our people agenda very seriously and focus on what matters: DEI, work/life balance, development, authenticity, collaboration, wellness, reward & recognition, volunteering... the list goes on. Experian's people-first approach is award-winning: World's Best Workplaces 2024 (Fortune Global Top 25), Great Place To Work in 24 countries, and Glassdoor Best Places to Work 2024, to name a few. Check out Experian Life on social media or our Careers Site and Glassdoor to understand why. Experian is proud to be an Equal Opportunity and Affirmative Action employer. Innovation is a critical part of Experian's DNA and practices, and our diverse workforce drives our success. Everyone can succeed at Experian and bring their whole self to work, irrespective of their gender, ethnicity, religion, color, sexuality, physical ability or age. If you have a disability or special need that requires accommodation, please let us know at the earliest opportunity.

Benefits
Experian cares about employees' work-life balance, health, safety and wellbeing. In support of this endeavor, we offer best-in-class family well-being benefits, enhanced medical benefits and paid time off.

Experian Careers - Creating a better tomorrow together
Posted 2 weeks ago
0.0 years
0 Lacs
hyderabad, telangana, india
On-site
Job description
Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Sr. Associate Director, Software Engineering.
- Provide the technical expertise for the Risk Data Platform and the various software components that supplement it, for its transformation and uplift.
- Implement standards around development, DevSecOps, orchestration, segregation and containerization.
- Act as a technical expert on the design and implementation of technology solutions to meet the needs of the Data & Enterprise Reporting function on a tactical and strategic basis.
- Be accountable for ensuring compliance of the products and services with mandatory and regulatory requirements, control objectives in the risk and control framework, and technical currency (in line with published standards and guidelines) and, with the architecture function, implementation of the business imperatives.
- Work with the IT communities of practice to maximize automation, increase efficiency and ensure that best practice and the latest tools, techniques and processes have been adopted.

Requirements
To be successful in this role, you should meet the following requirements:
- Must have experience in CI/CD - Ansible/Jenkins
- Must have experience in operating a container orchestration cluster (Kubernetes, Docker)
- Proficient knowledge of integrating the Spark framework and Delta Lake
- Must have knowledge of working on distributed compute platforms such as Spark, Hadoop, or Trino
- Must have experience in Python/PySpark
- Must have knowledge of code review, code optimization and enforcing best-in-class coding standards
- Must have experience with multi-tenant applications/platforms
- Must have knowledge of access management, segregation of duties, change management processes and DevSecOps
- Preferred knowledge of the Apache ecosystem, e.g. Spark and Airflow
- Preferred experience in a database (Postgres)
- Experience with UNIX and the Spark UI
- Experience with ZooKeeper or similar orchestration
- Experience in using CI/CD automation tools (Git, Jenkins) and configuration deployment tools (Puppet/Chef/Ansible)
- Significant experience with Linux operating system environments
- Proficient understanding of the code versioning tool Git
- Understanding of accessibility and security compliance
- Knowledge of user authentication and authorization between multiple systems, servers, and environments
- Strong unit testing, integration testing and debugging skills
- Excellent problem-solving, log analysis and troubleshooting skills using Splunk and Acceldata
- Experience with infrastructure scripting solutions such as Python/shell scripting
- Experience with the scheduling tool Control-M
- Experience with the log monitoring tool Splunk
- Experience with HashiCorp Vault
- Expertise in Python coding

You'll achieve more when you join HSBC.
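As a rough illustration of the Spark-plus-Delta Lake integration this role calls for, here is a minimal PySpark upsert into a Delta table using the delta-spark Python API; the table path and column names are hypothetical, not taken from the posting.

```python
# Minimal sketch of a PySpark upsert (merge) into a Delta Lake table.
# Assumes the delta-spark package is installed; paths and columns are placeholders.
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = (
    SparkSession.builder
    .appName("risk-data-upsert")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

updates = spark.read.parquet("/data/incoming/positions/")    # new daily extract
target = DeltaTable.forPath(spark, "/data/delta/positions")  # existing Delta table

(
    target.alias("t")
    .merge(updates.alias("u"), "t.position_id = u.position_id")
    .whenMatchedUpdateAll()      # refresh rows that already exist
    .whenNotMatchedInsertAll()   # insert rows seen for the first time
    .execute()
)
```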
www.hsbc.com/careers HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by - HSBC Software Development India
Posted 2 weeks ago
4.0 - 10.0 years
15 - 20 Lacs
Bengaluru, Karnataka, India
On-site
Greetings from Maneva!

Job Description
Job Title: PySpark/Scala Developer
Location: Bangalore
Experience: 4-10 years
Notice: Immediate to 30 days

Requirements:
- Excellent knowledge of Spark; the professional must have a thorough understanding of the Spark framework, performance tuning, etc.
- Excellent knowledge of, and at least 4+ years of hands-on experience in, Scala and PySpark
- Excellent knowledge of the Hadoop ecosystem; knowledge of Hive is mandatory
- Strong Unix and shell scripting skills
- Excellent interpersonal skills and, for experienced candidates, excellent leadership skills
- Good knowledge of any of the CSPs such as Azure, AWS or GCP is mandatory; certifications on Azure will be an additional plus

If you are excited to grab this opportunity, please apply directly or share your CV at [HIDDEN TEXT] and [HIDDEN TEXT]
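For context on the Spark-on-Hadoop skills listed above, here is a small hedged sketch of a PySpark session with Hive support querying a partitioned Hive table and writing an aggregate back; the database, table, and column names are made up for illustration.

```python
# Minimal sketch: PySpark with Hive support, reading a partitioned Hive table
# and writing an aggregated result back. Database/table names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("sales-aggregation")
    .enableHiveSupport()          # lets Spark SQL read Hive metastore tables
    .getOrCreate()
)

# Push the partition filter down so only one day's data is scanned.
daily = spark.sql(
    "SELECT store_id, amount FROM sales_db.transactions WHERE ds = '2024-01-31'"
)

summary = (
    daily.groupBy("store_id")
    .agg(F.sum("amount").alias("total_amount"), F.count("*").alias("txn_count"))
    .repartition(8)               # keep the output file count modest
)

summary.write.mode("overwrite").saveAsTable("sales_db.daily_store_summary")
```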
Posted 1 month ago
8.0 - 10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
The Minimum Qualifications
Education: Bachelor's or Master's degree in Computer Science, Information Systems or a related field.

Experience
- 8+ years of experience with data analytics, data modeling, and database design.
- 3+ years of coding and scripting (Python, Java, Scala) and design experience.
- 3+ years of experience with the Spark framework.
- 5+ years of experience with ELT methodologies and tools.
- 5+ years of mastery in designing, developing, tuning and troubleshooting SQL.
- Knowledge of Informatica PowerCenter and Informatica IDMC.
- Knowledge of distributed, column-oriented technology used to build high-performance databases such as Vertica and Snowflake.
- Strong data analysis skills for extracting insights from financial data.
- Proficiency in reporting tools (e.g., Power BI, Tableau).
Posted 1 month ago
2.0 - 6.0 years
0 Lacs
hyderabad, telangana
On-site
As a PySpark Developer at Viraaj HR Solutions, you will be responsible for developing and maintaining scalable PySpark applications for data processing. Your role will involve collaborating with data engineers to design and implement ETL pipelines for large datasets. Additionally, you will perform data analysis and build data models using PySpark to derive insights. It will be your responsibility to ensure data quality and integrity by implementing data cleansing routines and leveraging SQL to query databases effectively. You will also create comprehensive data reports and visualizations for stakeholders, optimize existing data processing jobs for performance and efficiency, and implement new features and enhancements as required by project specifications. Participation in code reviews to ensure adherence to best practices, troubleshooting technical issues with team members, and maintaining documentation of data processes and system configurations will be part of your daily tasks.

To excel in this role, you should possess a Bachelor's degree in Computer Science, Information Technology, or a related field, along with proven experience as a PySpark Developer or in a similar role. Strong programming skills in PySpark and Python, a solid understanding of the Spark framework and its APIs, and proficiency in SQL for managing and querying databases are essential qualifications. Experience with ETL tools and processes, knowledge of data visualization techniques and tools, and familiarity with cloud platforms such as AWS and Azure are also required.

Your problem-solving and analytical skills, along with excellent communication skills (both verbal and written), will be crucial for success in this role. You should be able to work effectively in a team environment, adapt to new technologies and methodologies, and have experience in Agile and Scrum methodologies. Prior experience in data processing on large datasets and an understanding of data governance and compliance standards will be beneficial.

Key Skills: agile methodologies, data analysis, team collaboration, Python, Scrum, PySpark, data visualization, problem-solving, ETL tools, Python scripting, Apache Spark, Spark framework, cloud platforms (AWS, Azure), SQL, cloud technologies, data processing.
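As a hedged illustration of the data cleansing routines this posting refers to, the sketch below deduplicates and standardizes a customer dataset in PySpark; the bucket paths and column names are hypothetical.

```python
# Minimal sketch of a PySpark data-cleansing routine: drop duplicates,
# normalize text fields, and fill missing values. Paths/columns are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("customer-cleansing").getOrCreate()

raw = spark.read.option("header", True).csv("s3://example-bucket/raw/customers/")

clean = (
    raw.dropDuplicates(["customer_id"])                       # one row per customer
    .withColumn("email", F.lower(F.trim(F.col("email"))))     # normalize emails
    .withColumn("country", F.upper(F.col("country")))
    .filter(F.col("customer_id").isNotNull())                 # drop unusable rows
    .fillna({"loyalty_tier": "STANDARD"})                     # default missing tier
)

clean.write.mode("overwrite").parquet("s3://example-bucket/curated/customers/")
```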
Posted 1 month ago
6.0 - 10.0 years
0 Lacs
delhi
On-site
The client, a leading MNC, specializes in technology consulting and digital solutions for global enterprises. With a vast workforce of over 145,000 professionals across 90+ countries, they cater to 1100+ clients in various industries. The company offers a comprehensive range of services including consulting, IT solutions, enterprise applications, business processes, engineering, network services, customer experience, AI & analytics, and cloud infrastructure services. Notably, they have been recognized for their commitment to sustainability with the Terra Carta Seal, showcasing their dedication to building a climate and nature-positive future.

As a Data Engineer with a minimum of 6 years of experience, you will be responsible for constructing and managing data pipelines. The ideal candidate should possess expertise in Databricks, AWS/Azure, and data storage technologies such as databases and distributed file systems. Familiarity with the Spark framework is essential, and prior experience in the retail sector would be advantageous.

Key Responsibilities:
- Design, develop, and maintain scalable ETL pipelines for processing large data volumes from diverse sources.
- Implement and oversee data integration solutions utilizing tools like Databricks, Snowflake, and other relevant technologies.
- Develop and optimize data models and schemas to support analytical and reporting requirements.
- Write efficient and sustainable Python code for data processing and transformations.
- Utilize Apache Spark for distributed data processing and large-scale analytics.
- Translate business needs into technical solutions.
- Ensure data quality and integrity through rigorous unit testing.
- Collaborate with cross-functional teams to integrate data pipelines with other systems.

Technical Requirements:
- Proficiency in Databricks for data integration and processing.
- Experience with ETL tools and processes.
- Strong Python programming skills with Apache Spark, emphasizing data processing and automation.
- Solid SQL skills and familiarity with relational databases.
- Understanding of data warehousing concepts and best practices.
- Exposure to cloud platforms such as AWS and Azure.
- Hands-on troubleshooting ability and problem-solving skills for complex data issues.
- Practical experience with Snowflake.
Posted 1 month ago
6.0 - 10.0 years
0 Lacs
navi mumbai, maharashtra
On-site
You should have 6-8 years of experience with a deep understanding of the Spark framework, along with hands-on experience in Spark SQL and PySpark. Your expertise should include Python programming and familiarity with common Python libraries. Strong analytical skills are essential, especially in database management, including writing complex queries, query optimization, debugging, user-defined functions, views, and indexes. Your problem-solving abilities will be crucial in designing, implementing, and maintaining efficient data models and pipelines. Experience with Big Data technologies is a must, while familiarity with any ETL tool would be advantageous.

As part of your responsibilities, you will work on projects to deliver, review, and design PySpark and Spark SQL-based data engineering analytics solutions. Your tasks will involve writing clean, efficient, reusable, testable, and scalable Python logic for analytical solutions. Emphasis will be on building solutions for data cleaning, data scraping, and exploratory data analysis, ensuring compatibility with any BI tool. Collaboration with data analysts/BI developers to provide clean and processed data will be essential.

You will design data processing pipelines using ETL techniques, develop and deliver complex requirements to achieve business goals, and work with unstructured, structured, and semi-structured data and their respective databases. Effective coordination with internal engineering and development teams to understand requirements and develop solutions is critical. Communication with stakeholders to grasp business logic and provide optimal data engineering solutions will also be part of your role. It is important to adhere to best coding practices and standards throughout your work.
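As a hedged sketch of the reusable, testable PySpark/Spark SQL logic described here, the snippet below factors a transformation into a plain function that can be exercised with a local SparkSession; the table and column names are illustrative only.

```python
# Minimal sketch: a reusable PySpark transformation written as a plain function
# so it can be unit-tested with a local SparkSession. Names are placeholders.
from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F


def top_customers(orders: DataFrame, min_total: float) -> DataFrame:
    """Return customers whose lifetime order value meets or exceeds min_total."""
    return (
        orders.groupBy("customer_id")
        .agg(F.sum("order_value").alias("lifetime_value"))
        .filter(F.col("lifetime_value") >= min_total)
    )


if __name__ == "__main__":
    spark = SparkSession.builder.master("local[2]").appName("demo").getOrCreate()
    sample = spark.createDataFrame(
        [("c1", 120.0), ("c1", 80.0), ("c2", 40.0)],
        ["customer_id", "order_value"],
    )
    top_customers(sample, min_total=150.0).show()  # expect only c1 (200.0)
    spark.stop()
```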
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
chennai, tamil nadu
On-site
As an experienced professional with 3-5 years in the field, you will be responsible for handling various technical tasks related to Azure Data Factory, Talend/SSIS, MSSQL, Azure, and MySQL. Your expertise in Azure Data Factory will be crucial in this role. Your primary responsibilities will include demonstrating advanced knowledge of Azure SQL DB & Synapse Analytics, Power BI, SSIS, SSRS, T-SQL, and Logic Apps. Your ability to analyze and comprehend complex data sets will play a key role in your daily tasks.

Proficiency in Azure Data Lake and other Azure services such as Analysis Services, SQL Databases, Azure DevOps, and CI/CD will be essential for success in this role. Additionally, a solid understanding of master data management, data warehousing, and business intelligence architecture will be required. You will be expected to have experience in data modeling and database design, with a strong grasp of SQL Server best practices.

Effective communication skills, both verbal and written, will be necessary for interacting with stakeholders at all levels. A clear understanding of the data warehouse lifecycle will be beneficial, as you will be involved in preparing design documents, unit test plans, and code review reports. Experience working in an Agile environment, particularly with methodologies like Scrum, Lean, or Kanban, will be advantageous. Knowledge of big data technologies such as the Spark framework, NoSQL, Azure Databricks, and the Hadoop ecosystem (Hive, Impala, HDFS) would be a valuable asset in this role.
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
NTT DATA is seeking a Java Technical Consultant to join their team in Bangalore, Karnataka (IN-KA), India. As a Java Technical Consultant, you will be responsible for demonstrating proficiency in Java, including a solid understanding of its ecosystems. You will also be expected to have sound knowledge of Object-Oriented Programming (OOP) patterns and concepts, familiarity with different design and architectural patterns, and the ability to write reusable Java libraries. Additionally, you should possess expertise in Java concurrency patterns, a basic understanding of the MVC (Model-View-Controller) pattern, JDBC (Java Database Connectivity), and RESTful web services. Experience in working with popular web application frameworks like Play and Spark is preferred, as well as relevant knowledge of Java GUI frameworks like Swing, SWT, and AWT according to project requirements.

The ideal candidate will have the ability to write clean, readable Java code, basic know-how of the class loading mechanism in Java, experience in handling external and embedded databases, and an understanding of the basic design principles behind scalable applications. You should also be skilled at creating database schemas that characterize and support business processes, knowledgeable about the JVM (Java Virtual Machine) and its drawbacks, weaknesses, and workarounds, and proficient in implementing automated testing platforms and unit tests.

Moreover, you are expected to have in-depth knowledge of code versioning tools like Git, an understanding of build tools such as Ant, Maven, and Gradle, expertise in continuous integration, and familiarity with JavaServer Pages (JSP) and servlets, web frameworks like Struts and Spring, service-oriented architecture, web technologies like HTML, JavaScript, CSS, and jQuery, and markup languages such as XML and JSON. Other required skills for this role include knowledge of abstract classes and interfaces, constructors, lists, maps, sets, file I/O and serialization, exceptions, generics, Java keywords like static, volatile, synchronized, and transient, multithreading, and synchronization. Banking experience is a must for this position.

NTT DATA is a global innovator of business and technology services, serving 75% of the Fortune Global 100. As a Global Top Employer, NTT DATA has diverse experts in more than 50 countries and a robust partner ecosystem. Their services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation, and management of applications, infrastructure, and connectivity. NTT DATA is committed to helping clients innovate, optimize, and transform for long-term success and is one of the leading providers of digital and AI infrastructure worldwide. Visit us at us.nttdata.com.
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
chennai, tamil nadu
On-site
As an experienced professional with 3-5 years of experience, you will be responsible for working with a range of technical skills including Azure Data Factory, Talend/SSIS, MSSQL, Azure, and MySQL. Your primary focus will be on Azure Data Factory, where you will utilize your expertise to handle complex data analysis tasks effectively.

In this role, you will demonstrate advanced knowledge of Azure SQL DB & Synapse Analytics, Power BI, SSIS, SSRS, T-SQL, and Logic Apps. It is essential that you possess a solid understanding of Azure Data Lake and Azure services such as Analysis Services, SQL Databases, Azure DevOps, and CI/CD processes.

Furthermore, your responsibilities will include mastering data management, data warehousing, and business intelligence architecture. You will be required to apply your experience in data modeling and database design, ensuring compliance with SQL Server best practices. Effective communication is key in this role, as you will engage with stakeholders at various levels. You will contribute to the preparation of design documents, unit test plans, and code review reports. Experience in an Agile environment, specifically with Scrum, Lean, or Kanban methodologies, will be advantageous. Additionally, familiarity with Big Data technologies such as the Spark framework, NoSQL databases, Azure Databricks, and the Hadoop ecosystem (Hive, Impala, HDFS) will be beneficial for this position.
Posted 2 months ago
6.0 - 7.0 years
6 - 7 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Introduction
A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio. In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your role and responsibilities
As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the Azure Cloud Data Platform.

Responsibilities:
- Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases.
- Process the data with Spark, Python, PySpark and Hive, HBase or other NoSQL databases on the Azure Cloud Data Platform or HDFS.
- Experienced in developing efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and Big Data technologies, for various use cases built on the platform.
- Experience in developing streaming pipelines.
- Experience working with Hadoop / Azure ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka, and cloud computing services.

Required technical and professional expertise
- Total 6-7+ years of experience in Data Management (DW, DL, Data Platform, Lakehouse) and Data Engineering skills.
- Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark with Python or Scala.
- Minimum 3 years of experience on Cloud Data Platforms on Azure.
- Experience in Databricks / Azure HDInsight / Azure Data Factory, Synapse, SQL Server DB.
- Good to excellent SQL skills.

Preferred technical and professional experience
- Certification in Azure, and Databricks or Cloudera Spark certified developers.
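To illustrate the kind of streaming pipeline this role mentions, here is a hedged PySpark Structured Streaming sketch that reads from a Kafka topic and appends Parquet files to cloud storage; the broker address, topic, schema, and storage paths are hypothetical.

```python
# Minimal sketch: a PySpark Structured Streaming job that reads JSON events
# from Kafka and appends them to cloud storage. Requires the spark-sql-kafka
# connector package on the classpath; brokers/topic/paths are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("events-ingest").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("amount", DoubleType()),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "payment-events")
    .load()
)

# Kafka delivers bytes; decode the value column and parse the JSON payload.
events = (
    raw.select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream.format("parquet")
    .option("path", "abfss://lake@account.dfs.core.windows.net/raw/payments/")
    .option("checkpointLocation",
            "abfss://lake@account.dfs.core.windows.net/checkpoints/payments/")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```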
Posted 3 months ago
5.0 - 12.0 years
5 - 6 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Description
We are seeking an experienced AWS Glue Engineer to join our team in India. The ideal candidate will have a strong background in ETL processes and AWS services, with the ability to design and implement efficient data pipelines.

Responsibilities
- Design, develop, and maintain ETL processes using AWS Glue.
- Collaborate with data architects and data scientists to optimize data pipelines.
- Implement data transformation processes to ensure data integrity and accessibility.
- Monitor and troubleshoot ETL jobs to ensure performance and reliability.
- Work with AWS services such as S3, Redshift, and RDS to support data workflows.

Skills and Qualifications
- 5-12 years of experience in data engineering or ETL development.
- Strong proficiency in AWS Glue and AWS ecosystem services.
- Experience with Python or Scala for scripting and data transformation.
- Knowledge of data modeling and database design principles.
- Familiarity with data warehousing concepts and tools.
- Understanding of data governance and security best practices.
- Experience with version control systems like Git.
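As a hedged sketch of a typical Glue ETL job along the lines described above, the script below uses the awsglue PySpark API to read a Data Catalog table, filter it, and write Parquet to S3; the database, table, and bucket names are placeholders rather than details from this posting.

```python
# Minimal sketch of an AWS Glue PySpark job: read a Data Catalog table,
# drop rows with null keys, and write Parquet to S3. Names/paths are placeholders.
import sys
from awsglue.transforms import Filter
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the source table registered in the Glue Data Catalog.
orders = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"
)

# Keep only rows that carry a usable primary key.
valid_orders = Filter.apply(frame=orders, f=lambda row: row["order_id"] is not None)

glue_context.write_dynamic_frame.from_options(
    frame=valid_orders,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/orders/"},
    format="parquet",
)
job.commit()
```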
Posted 3 months ago
5.0 - 7.0 years
0 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your role and responsibilities
As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.
- Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases.
- Process the data with Spark, Python, PySpark, Scala, and Hive, HBase or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS.
- Experienced in developing efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and Big Data technologies, for various use cases built on the platform.
- Experience in developing streaming pipelines.
- Experience working with Hadoop / AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka, and cloud computing services.

Required education
Bachelor's Degree

Preferred education
Master's Degree

Required technical and professional expertise
- Total 5-7+ years of experience in Data Management (DW, DL, Data Platform, Lakehouse) and Data Engineering skills.
- Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark with Python or Scala.
- Minimum 3 years of experience on Cloud Data Platforms on AWS.
- Exposure to streaming solutions and message brokers like Kafka.
- Experience in AWS EMR / AWS Glue / Databricks, AWS Redshift, DynamoDB.
- Good to excellent SQL skills.

Preferred technical and professional experience
- Certification in AWS, and Databricks or Cloudera Spark certified developers.
- AWS S3, Redshift, and EMR for data storage and distributed processing.
- AWS Lambda, AWS Step Functions, and AWS Glue to build serverless, event-driven data workflows and orchestrate ETL processes.
Posted 3 months ago