
921 Sqoop Jobs - Page 14

JobPe aggregates results for easy access, but you apply directly on the original job portal.

2.0 - 7.0 years

4 - 8 Lacs

Hyderabad

Work from Office

Educational Qualification: Bachelor of Engineering. Service Line: Data & Analytics Unit. Responsibilities: A day in the life of an Infoscion - As part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance, issue resolution and ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables, continual knowledge management and adherence to organizational guidelines and processes. You would be a key contributor to building efficient programs/systems, and if you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you! Additional Responsibilities: Knowledge of more than one technology; basics of architecture and design fundamentals; knowledge of testing tools; knowledge of agile methodologies; understanding of project life cycle activities on development and maintenance projects; understanding of one or more estimation methodologies; knowledge of quality processes; basics of the business domain to understand business requirements; analytical abilities, strong technical skills, good communication skills; good understanding of the technology and domain; ability to demonstrate a sound understanding of software quality assurance principles, SOLID design principles and modelling methods; awareness of the latest technologies and trends; excellent problem-solving, analytical and debugging skills. Technical and Professional: Primary skills: PySpark, Spark, Python. Preferred Skills: Technology-Analytics - Packages-Python - Big Data; Technology-Big Data - Data Processing-Spark.
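For a sense of the day-to-day PySpark work behind the primary skills listed here, a minimal sketch of a cleanse-aggregate-persist job follows; the file paths and column names are hypothetical:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Hypothetical input: raw orders with order_id, amount, country columns.
orders = spark.read.csv("/data/raw/orders.csv", header=True, inferSchema=True)

# Typical delivery-team task: cleanse, aggregate, and persist an optimized output.
summary = (
    orders
    .filter(F.col("amount") > 0)              # drop invalid rows
    .groupBy("country")
    .agg(F.sum("amount").alias("total_amount"),
         F.count("order_id").alias("order_count"))
)

summary.write.mode("overwrite").parquet("/data/curated/order_summary")
```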

Posted 1 month ago

Apply

8.0 - 13.0 years

12 - 16 Lacs

Bengaluru

Work from Office

Educational Qualification: Bachelor of Engineering. Service Line: Data & Analytics Unit. Responsibilities: Build robust, performant, highly scalable and flexible data pipelines with a focus on time to market with quality. Act as an active team member to ensure high code quality (unit testing, regression tests), delivered on time and within budget. Document the delivered code/solution. Participate in the implementation of releases, following the change and release management processes. Provide support to the operations team in case of major incidents for which engineering knowledge is required. Participate in effort estimation. Provide solutions (bug fixes) for problem management. Additional Responsibilities: Good knowledge of software configuration management systems; strong business acumen, strategy and cross-industry thought leadership; awareness of the latest technologies and industry trends; logical thinking and problem-solving skills along with an ability to collaborate; knowledge of two or three industry domains; understanding of the financial processes for various types of projects and the various pricing models available; client interfacing skills; knowledge of SDLC and agile methodologies; project and team management. Technical and Professional: You have experience with most of these technologies: PySpark, AWS, Databricks, Spark, HDFS, Python. Preferred Skills: Technology-Analytics - Packages-Python - Big Data; Technology-Big Data - Data Processing-Spark.
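Since this role stresses unit and regression testing of pipeline code, here is a minimal sketch of a testable PySpark transformation with a pytest case; the function and column names are hypothetical:

```python
import pytest
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.window import Window

def dedupe_latest(df):
    """Keep the most recent record per key - a typical pipeline step."""
    w = Window.partitionBy("id").orderBy(F.col("updated_at").desc())
    return (df.withColumn("rn", F.row_number().over(w))
              .filter("rn = 1").drop("rn"))

@pytest.fixture(scope="module")
def spark():
    # Local session so the regression test runs without a cluster.
    return SparkSession.builder.master("local[2]").appName("tests").getOrCreate()

def test_dedupe_latest_keeps_newest(spark):
    df = spark.createDataFrame(
        [(1, "2024-01-01"), (1, "2024-02-01"), (2, "2024-01-15")],
        ["id", "updated_at"],
    )
    out = dedupe_latest(df).collect()
    assert {(r.id, r.updated_at) for r in out} == {(1, "2024-02-01"), (2, "2024-01-15")}
```

Keeping the transformation as a pure function of a DataFrame is what makes this style of unit test cheap to write and fast to run.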

Posted 1 month ago

Apply

5.0 - 8.0 years

4 - 8 Lacs

Hyderabad

Work from Office

Educational Qualification: Bachelor of Engineering. Service Line: Data & Analytics Unit. Responsibilities: A day in the life of an Infoscion - As part of the Infosys consulting team, your primary role would be to get to the heart of customer issues, diagnose problem areas, design innovative solutions and facilitate deployment resulting in client delight. You will develop a proposal by owning parts of the proposal document and by giving inputs in solution design based on areas of expertise. You will plan the activities of configuration, configure the product as per the design, conduct conference room pilots and assist in resolving any queries related to requirements and solution design. You will conduct solution/product demonstrations and POC/Proof of Technology workshops, and prepare effort estimates which suit the customer's budgetary requirements and are in line with the organization's financial guidelines. You will actively lead small projects and contribute to unit-level and organizational initiatives, with the objective of providing high-quality, value-adding solutions to customers. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you! Additional Responsibilities: Ability to develop value-creating strategies and models that enable clients to innovate, drive growth and increase their business profitability; good knowledge of software configuration management systems; awareness of the latest technologies and industry trends; logical thinking and problem-solving skills along with an ability to collaborate; understanding of the financial processes for various types of projects and the various pricing models available; ability to assess current processes, identify improvement areas and suggest technology solutions; knowledge of one or two industry domains; client interfacing skills; project and team management. Technical and Professional: Primary skills: Technology-Big Data - Data Processing-Spark. Preferred Skills: Technology-Big Data - Data Processing-Spark.

Posted 1 month ago

Apply

3.0 - 5.0 years

3 - 7 Lacs

Bengaluru

Work from Office

Educational Qualification: Bachelor of Engineering. Service Line: Data & Analytics Unit. Responsibilities: A day in the life of an Infoscion - As part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance, issue resolution and ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables, continual knowledge management and adherence to organizational guidelines and processes. You would be a key contributor to building efficient programs/systems, and if you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you! Technical and Professional: Primary skills: Technology-Big Data - Data Processing-MapReduce. Preferred Skills: Technology-Big Data - Data Processing-MapReduce.
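Since the primary skill here is MapReduce, a minimal illustration is the classic word count, written as Hadoop Streaming mapper and reducer scripts in Python (a sketch; input/output paths and the streaming jar location vary by cluster):

```python
#!/usr/bin/env python3
# mapper.py - emit "word<TAB>1" for every word read from stdin
import sys

for line in sys.stdin:
    for word in line.strip().split():
        print(f"{word}\t1")
```

```python
#!/usr/bin/env python3
# reducer.py - sum the counts; Hadoop Streaming delivers keys already sorted
import sys

current, total = None, 0
for line in sys.stdin:
    word, count = line.rsplit("\t", 1)
    if word != current:
        if current is not None:
            print(f"{current}\t{total}")
        current, total = word, 0
    total += int(count)
if current is not None:
    print(f"{current}\t{total}")
```

A job like this is typically submitted with the distribution's hadoop-streaming jar, passing mapper.py and reducer.py via -mapper/-reducer along with -input and -output HDFS paths.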

Posted 1 month ago

Apply

5.0 - 8.0 years

6 - 10 Lacs

Bengaluru

Work from Office

Educational Qualification: Bachelor of Engineering, BCA, BTech, MTech, MSc, MCA. Service Line: Strategic Technology Group. Responsibilities: A day in the life of an Infoscion - As part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance, issue resolution and ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables, continual knowledge management and adherence to organizational guidelines and processes. You would be a key contributor to building efficient programs/systems, and if you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you! Additional Responsibilities: Knowledge of more than one technology; basics of architecture and design fundamentals; knowledge of testing tools; knowledge of agile methodologies; understanding of project life cycle activities on development and maintenance projects; understanding of one or more estimation methodologies; knowledge of quality processes; basics of the business domain to understand business requirements; analytical abilities, strong technical skills, good communication skills; good understanding of the technology and domain; ability to demonstrate a sound understanding of software quality assurance principles, SOLID design principles and modelling methods; awareness of the latest technologies and trends; excellent problem-solving, analytical and debugging skills. Technical and Professional: Primary skills: Technology-Functional Programming-Scala - Big Data. Preferred Skills: Technology-Functional Programming-Scala.

Posted 1 month ago

Apply

2.0 - 7.0 years

5 - 9 Lacs

Pune

Work from Office

Educational Qualification: Bachelor of Engineering. Service Line: Data & Analytics Unit. Responsibilities: A day in the life of an Infoscion - As part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance, issue resolution and ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables, continual knowledge management and adherence to organizational guidelines and processes. You would be a key contributor to building efficient programs/systems, and if you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you! Additional Responsibilities: Knowledge of more than one technology; basics of architecture and design fundamentals; knowledge of testing tools; knowledge of agile methodologies; understanding of project life cycle activities on development and maintenance projects; understanding of one or more estimation methodologies; knowledge of quality processes; basics of the business domain to understand business requirements; analytical abilities, strong technical skills, good communication skills; good understanding of the technology and domain; ability to demonstrate a sound understanding of software quality assurance principles, SOLID design principles and modelling methods; awareness of the latest technologies and trends; excellent problem-solving, analytical and debugging skills. Technical and Professional: Primary skills: Hadoop, Hive, HDFS. Preferred Skills: Technology-Big Data - Hadoop-Hadoop.
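For the Hadoop/Hive/HDFS skills named above, here is a minimal PySpark sketch of working with a partitioned Hive table; the database, table and partition values are hypothetical:

```python
from pyspark.sql import SparkSession

# enableHiveSupport() lets Spark use the Hive metastore directly.
spark = (SparkSession.builder
         .appName("hive-example")
         .enableHiveSupport()
         .getOrCreate())

spark.sql("CREATE DATABASE IF NOT EXISTS sales")

# Hypothetical partitioned table backed by ORC files on HDFS.
spark.sql("""
    CREATE TABLE IF NOT EXISTS sales.daily_orders (
        order_id BIGINT, amount DOUBLE
    )
    PARTITIONED BY (order_date STRING)
    STORED AS ORC
""")

# Filtering on the partition column prunes to a single HDFS directory.
df = spark.sql("""
    SELECT order_date, SUM(amount) AS revenue
    FROM sales.daily_orders
    WHERE order_date = '2024-06-01'
    GROUP BY order_date
""")
df.show()
```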

Posted 1 month ago

Apply

5.0 - 9.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Educational Qualification: Bachelor of Engineering, BCA, BSc, MCA, MTech, MSc. Service Line: Data & Analytics Unit. Responsibilities: 1. 5-8 years' experience in Azure (hands-on experience in Azure Databricks and Azure Data Factory). 2. Good knowledge of SQL and PySpark. 3. Should have knowledge of the Medallion architecture pattern. 4. Knowledge of Integration Runtime. 5. Knowledge of the different ways of scheduling jobs via ADF (event/schedule etc.). 6. Should have knowledge of AAS and Cubes. 7. Able to create, manage and optimize cube processing. 8. Good communication skills. 9. Experience in leading a team. Additional Responsibilities: Good knowledge of software configuration management systems; strong business acumen, strategy and cross-industry thought leadership; awareness of the latest technologies and industry trends; logical thinking and problem-solving skills along with an ability to collaborate; knowledge of two or three industry domains; understanding of the financial processes for various types of projects and the various pricing models available; client interfacing skills; knowledge of SDLC and agile methodologies; project and team management. Preferred Skills: Technology-Big Data - Data Processing-Spark.
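The Medallion architecture mentioned in point 3 is a layered refinement pattern (bronze → silver → gold). Below is a minimal Databricks-style sketch, assuming an active `spark` session and hypothetical mount paths and columns:

```python
from pyspark.sql import functions as F

# Bronze: land raw data as-is (append-only, schema preserved).
raw = spark.read.json("/mnt/landing/events/")
raw.write.format("delta").mode("append").save("/mnt/bronze/events")

# Silver: cleanse and conform - dedupe, fix types, apply quality filters.
bronze = spark.read.format("delta").load("/mnt/bronze/events")
silver = (bronze.dropDuplicates(["event_id"])
                .withColumn("event_ts", F.to_timestamp("event_ts"))
                .filter(F.col("event_id").isNotNull()))
silver.write.format("delta").mode("overwrite").save("/mnt/silver/events")

# Gold: business-level aggregates, ready for cubes and BI.
gold = silver.groupBy(F.to_date("event_ts").alias("day")).count()
gold.write.format("delta").mode("overwrite").save("/mnt/gold/daily_event_counts")
```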

Posted 1 month ago

Apply

6.0 years

0 Lacs

Gurgaon

Remote

Job Description - About this role: Data Engineer – Client Experience Platform. When BlackRock was started in 1988, its founders envisioned a company that combined the best of financial services with cutting-edge technology. They imagined a business that would provide financial services to clients as well as technology services to other financial firms. The result of their vision is Aladdin, our industry-leading, end-to-end investment management platform. With assets valued at over USD $10 trillion managed on Aladdin, our technology empowers millions of investors to save for retirement, pay for college, buy a home and improve their financial wellbeing. What differentiates us at Aladdin is that data is central to our operations. The capability to consume, store, analyze, and derive insights from data is now a vital aspect of what makes us successful. The Client Experience Platform team is focused on developing integrated experiences across desktop, web, and mobile channels for clients, regulators, engineers, sales, and service professionals. The mission of this team is to create the best-in-class experience across the horizontal client journey. We are looking for talented Software Engineers who will architect and build the Client Data Platform on the Cloud, which will manage and house a 360-degree view of our CRM, Sales and Service ecosystem. The candidate will focus on building scalable data ingestion pipelines and conforming and transforming the data to support analytical use cases, consisting of data of the highest quality for all users of the platform, notably Global Client Business, US Wealth Advisory, Executive Teams and Data Scientists. Software engineers at BlackRock get to experience working at one of the most recognized financial companies in the world while being part of a software development team responsible for next-generation technologies and solutions. Our engineers design and build large-scale data storage, computation and distribution systems. Description: As a Data Engineer, you will improve BlackRock's ability to enhance our retail sales distribution capabilities and services suite by creating, expanding and optimizing our data and data pipeline architecture. You will create and operationalize data pipelines to enable squads to deliver high-quality, data-driven products. You will be accountable for managing high-quality datasets exposed for internal and external consumption by downstream users and applications. Top technical/programming skills: Python, Java or Scala, the Hadoop suite, cloud data platforms (preferably Snowflake) and SQL. Experience working with flat files (e.g., CSV, TSV, Excel) and database/API sources is a must, both to ingest data and to create transformations. Given the highly execution-focused nature of the work, the ideal candidate will roll up their sleeves to ensure that their projects meet deadlines and will always look for ways to optimize processes in future cycles. The successful candidate will be highly motivated to create, optimize, or redesign data pipelines to support our next generation of products and data initiatives. You will be a builder and an owner of your work product. Responsibilities: Lead in the creation and maintenance of optimized data pipeline architectures on large and complex data sets. Assemble large, complex data sets that meet business requirements. Act as lead to identify, design, and implement internal process improvements and relay them to the relevant technology organization.
Support customers to assist in data-related technical issues and support their data infrastructure needs. Automate manual ingest processes and optimize data delivery subject to service level agreements; work with infrastructure on re-design for greater scalability. Keep data separated and segregated according to relevant data policies. Work with data scientists to develop data-ready tools that support their jobs. Stay up to date with the latest tech trends in the big-data space and recommend them as needed. Identify, investigate, and resolve data discrepancies by finding the root cause of issues; collaborate with partners across various multi-functional teams to prevent future occurrences. Qualifications: Overall 6+ years of hands-on experience in computer/software engineering, with the majority in big data engineering. 5+ years of strong Python or Scala programming skills (core Python and PySpark), including hands-on experience creating and supporting UDFs and modules like pytest. 5+ years of experience building and optimizing 'big data' pipelines, architectures, and data sets. Familiarity with data pipeline and workflow management tools (e.g., Airflow, dbt, Kafka). 5+ years of hands-on experience developing on Spark in a production environment; expertise in parallel execution, deciding resources and the different modes of executing jobs is required. 5+ years of experience using Hive (on Spark), YARN (logs, DAG flow diagrams), Sqoop. Proficiency in bucketing, partitioning, tuning and handling different file formats (ORC, Parquet & Avro). 5+ years of experience using Transact-SQL (e.g., MS SQL Server, MySQL), NoSQL and GraphQL. Strong experience implementing solutions on Snowflake. Experience in deployment, maintenance, and administration tasks related to cloud (Azure preferred), OpenStack, Docker, Kafka and Kubernetes. Experience working with global teams across different time zones. Plus: experience with Machine Learning and Artificial Intelligence. Plus: experience with Generative Artificial Intelligence. Our benefits: To help you stay energized, engaged and inspired, we offer a wide range of benefits including a strong retirement plan, tuition reimbursement, comprehensive healthcare, support for working parents and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about. Our hybrid work model: BlackRock's hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock. About BlackRock: At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children's educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress.
This mission would not be possible without our smartest investment – the one we make in our employees. It’s why we’re dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive. For additional information on BlackRock, please visit @blackrock | Twitter: @blackrock | LinkedIn: www.linkedin.com/company/blackrock BlackRock is proud to be an Equal Opportunity Employer. We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation and other protected attributes at law. Job Requisition # R255355
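This posting explicitly asks for bucketing, partitioning and columnar file formats (ORC/Parquet/Avro). A minimal PySpark sketch of a partitioned, bucketed table write follows; the table and column names are hypothetical. Note that Spark requires saveAsTable (not a plain path write) for bucketed output:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("layout-demo").enableHiveSupport().getOrCreate()

trades = spark.read.parquet("/data/staging/trades")  # hypothetical input

# Partition by a low-cardinality column so queries prune whole directories;
# bucket by a join key so joins on account_id can avoid a full shuffle.
(trades.write
    .partitionBy("trade_date")
    .bucketBy(32, "account_id")
    .sortBy("account_id")
    .format("parquet")          # ORC or Avro are one-word swaps here
    .mode("overwrite")
    .saveAsTable("analytics.trades_bucketed"))
```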

Posted 1 month ago

Apply

5.0 - 7.0 years

5 - 5 Lacs

Kochi, Hyderabad, Thiruvananthapuram

Work from Office

Key Responsibilities - Develop & Deliver: Build applications/features/components as per design specifications, ensuring high-quality code adhering to coding standards and project timelines. Testing & Debugging: Write, review, and execute unit test cases; debug code; validate results with users; and support defect analysis and mitigation. Technical Decision Making: Select optimal technical solutions, including reuse or creation of components, to enhance efficiency, cost-effectiveness, and quality. Documentation & Configuration: Create and review design documents, templates, checklists, and configuration management plans; ensure team compliance. Domain Expertise: Understand the customer's business domain deeply to advise developers and identify opportunities for value addition; obtain relevant certifications. Project & Release Management: Manage delivery of modules/user stories, estimate efforts, coordinate releases, and ensure adherence to engineering processes and timelines. Team Leadership: Set goals (FAST), provide feedback, mentor team members, maintain motivation, and manage people-related issues effectively. Customer Interaction: Clarify requirements, present design options, conduct demos, and build customer confidence through timely, quality deliverables. Technology Stack: Expertise in Big Data technologies (PySpark, Scala), plus preferred skills in AWS services (EMR, S3, Glue, Airflow, RDS, DynamoDB), CI/CD tools (Jenkins), relational & NoSQL databases, microservices, and containerization (Docker, Kubernetes). Soft Skills & Collaboration: Communicate clearly, work under pressure, handle dependencies and risks, collaborate with cross-functional teams, and proactively seek and offer help. Required Skills: Big Data, PySpark, Scala. Additional Comments - Must-Have Skills: Big Data (PySpark + Java/Scala). Preferred Skills: AWS (EMR, S3, Glue, Airflow, RDS, DynamoDB, or similar); CI/CD (Jenkins or another); relational database experience (any); NoSQL database experience (any); microservices, domain services, API gateways or similar; containers (Docker, K8s, or similar).
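Airflow appears in the preferred-skills list above. As a sketch of how such a pipeline might be orchestrated (Airflow 2.x syntax; the DAG id, schedule, bucket and spark-submit command are hypothetical):

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="nightly_trades_etl",
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",      # 02:00 daily; older Airflow uses schedule_interval
    catchup=False,
) as dag:
    run_spark_job = BashOperator(
        task_id="spark_submit_etl",
        bash_command=(
            "spark-submit --deploy-mode cluster "
            "s3://my-bucket/jobs/trades_etl.py --run-date {{ ds }}"
        ),
    )
```

The {{ ds }} template macro passes the logical run date to the job, which is what makes backfills reproducible.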

Posted 1 month ago

Apply

10.0 - 15.0 years

30 - 35 Lacs

Hyderabad

Work from Office

Define, design, and build an optimal data pipeline architecture to collect data from a variety of sources, and cleanse and organize data in SQL & NoSQL destinations (ELT & ETL processes). Define and build business use case-specific data models that can be consumed by Data Scientists and Data Analysts to conduct discovery and drive business insights and patterns. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies. Build and deploy analytical models and tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics. Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs. Define, design, and build executive dashboards and report catalogs to serve decision-making and insight generation needs. Provide inputs to help keep data separated and secure across data centers - on-prem, private, and public cloud environments. Create data tools for analytics and data science team members that assist them in building and optimizing our product into an innovative industry leader. Work with data and analytics experts to strive for greater functionality in our data systems. Implement scheduled data load processes and maintain and manage the data pipelines. Troubleshoot, investigate, and fix failed data pipelines and prepare RCAs. Experience with a mix of the following Data Engineering technologies: Python, Spark, Snowflake, Databricks, Hadoop (CDH), Hive, Sqoop, Oozie; SQL - Postgres, MySQL, MS SQL Server; Azure - ADF, Synapse Analytics, SQL Server, ADLS Gen2; AWS - Redshift, EMR cluster, S3. Experience with a mix of the following Data Analytics and Visualization toolsets: SQL, Power BI, Tableau, Looker, Python, R; Python libraries - Pandas, Scikit-learn, Seaborn, Matplotlib, TF, statsmodels, PySpark, Spark SQL; R, SAS, Julia, SPSS; Azure - Synapse Analytics, Azure ML Studio, Azure AutoML.
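Several listings on this page, including this one, name Sqoop for RDBMS ingestion. A minimal sketch of a scheduled ingest step driving a standard Sqoop import from Python follows; the JDBC URL, credentials file, table and target directory are placeholders:

```python
import subprocess

# Classic Sqoop ingestion from an RDBMS into HDFS. The flags are standard
# Sqoop 1 options; the connection details below are hypothetical.
subprocess.run([
    "sqoop", "import",
    "--connect", "jdbc:mysql://db-host:3306/sales",
    "--username", "etl_user",
    "--password-file", "/user/etl/.db_password",   # keep secrets off the CLI
    "--table", "orders",
    "--target-dir", "/data/raw/orders",
    "--num-mappers", "4",                          # parallel import splits
    "--as-parquetfile",
], check=True)                                     # fail the pipeline on error
```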

Posted 1 month ago

Apply

7.0 years

25 - 30 Lacs

India

On-site

Only a solid grounding in computer engineering, Unix, data structures and algorithms will enable you to meet this challenge. 7+ years of experience architecting, developing, releasing, and maintaining large-scale big data platforms on AWS or GCP. Understanding of how big data tech and NoSQL stores like MongoDB, HBase/HDFS, and Elasticsearch synergize to power applications in analytics, AI and knowledge graphs. Understanding of how data processing models, data location patterns, disk I/O, network I/O and shuffling affect large-scale text processing - feature extraction, searching, etc. Expertise with a variety of data processing systems, including streaming, event, and batch (Spark, Hadoop/MapReduce). 5+ years of proficiency in configuring and deploying applications on Linux-based systems. 5+ years of experience with Spark - especially PySpark for transforming large non-structured text data and creating highly optimized pipelines. Experience with RDBMS, ETL techniques and frameworks (Sqoop, Flume) and big data querying tools (Pig, Hive). A stickler for world-class best practices, uncompromising on the quality of engineering; understands standards and reference architectures, and is deeply grounded in the Unix philosophy with an appreciation of big data design patterns, orthogonal code design and functional computation models. Skills: Apache Hadoop, PySpark, Python, design patterns, data structures and algorithms.
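For the large-scale text processing and feature extraction this role describes, here is a minimal PySpark sketch using Spark ML's hashing-based featurization, which avoids building a global vocabulary (and the shuffle that implies); the corpus path is hypothetical:

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import Tokenizer, HashingTF

spark = SparkSession.builder.appName("text-features").getOrCreate()

docs = spark.read.text("/data/corpus/")          # one document per line
docs = docs.withColumnRenamed("value", "text")

# Tokenize, then hash tokens into fixed-width sparse feature vectors.
tokens = Tokenizer(inputCol="text", outputCol="words").transform(docs)
features = HashingTF(inputCol="words", outputCol="features",
                     numFeatures=1 << 18).transform(tokens)
features.select("features").show(3, truncate=80)
```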

Posted 1 month ago

Apply

4.0 - 8.0 years

11 - 16 Lacs

Mumbai, Chennai, Bengaluru

Work from Office

Your Role: Should have worked extensively on Metadata, Rules & Member Lists in HFM. VB scripting knowledge is mandatory. Understand and communicate the consequences of changes made. Should have worked on monthly/quarterly/yearly validations. Should have worked on ICP accounts, journals and intercompany reports. Should have worked on Data Forms & Data Grids. Should be able to work on FDMEE mappings. Should be fluent with FDMEE knowledge. Should have worked on Financial Reporting Studio. Your Profile: Performing UAT with the business on CRs. Should be able to resolve business queries about HFM (if any). Agile process knowledge will be an added advantage. What you'll love about working here: You can shape your career with us. We offer a range of career paths and internal opportunities within the Capgemini group. You will also get personalized career guidance from our leaders. You will get comprehensive wellness benefits including health checks, telemedicine, insurance with top-ups, elder care, partner coverage and new-parent support via flexible work. You will have the opportunity to learn on one of the industry's largest digital learning platforms, with access to 250,000+ courses and numerous certifications. We're committed to ensuring that people of all backgrounds feel encouraged and have a sense of belonging at Capgemini. You are valued for who you are, and you can bring your original self to work. Every Monday, kick off the week with a musical performance by our in-house band, The Rubber Band. Also get to participate in internal sports events, yoga challenges, or marathons. At Capgemini, you can work on cutting-edge projects in tech and engineering with industry leaders or create solutions to overcome societal and environmental challenges. Location: Bengaluru, Chennai, Mumbai, Pune.

Posted 1 month ago

Apply

1.0 - 2.0 years

3 - 6 Lacs

Hyderabad

Work from Office

Define, design, and build an optimal data pipeline architecture to collect data from a variety of sources, and cleanse and organize data in SQL & NoSQL destinations (ELT & ETL processes). Define and build business use case-specific data models that can be consumed by Data Scientists and Data Analysts to conduct discovery and drive business insights and patterns. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies. Build and deploy analytical models and tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics. Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs. Define, design, and build executive dashboards and report catalogs to serve decision-making and insight generation needs. Provide inputs to help keep data separated and secure across data centers - on-prem, private, and public cloud environments. Create data tools for analytics and data science team members that assist them in building and optimizing our product into an innovative industry leader. Work with data and analytics experts to strive for greater functionality in our data systems. Implement scheduled data load processes and maintain and manage the data pipelines. Troubleshoot, investigate, and fix failed data pipelines and prepare RCAs. Experience with a mix of the following Data Engineering technologies: Python, Spark, Snowflake, Databricks, Hadoop (CDH), Hive, Sqoop, Oozie; SQL - Postgres, MySQL, MS SQL Server; Azure - ADF, Synapse Analytics, SQL Server, ADLS Gen2; AWS - Redshift, EMR cluster, S3. Experience with a mix of the following Data Analytics and Visualization toolsets: SQL, Power BI, Tableau, Looker, Python, R; Python libraries - Pandas, Scikit-learn, Seaborn, Matplotlib, TF, statsmodels, PySpark, Spark SQL; R, SAS, Julia, SPSS; Azure - Synapse Analytics, Azure ML Studio, Azure AutoML.

Posted 1 month ago

Apply

5.0 - 10.0 years

9 - 13 Lacs

Bengaluru

Work from Office

Project Role: Data Platform Engineer. Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models. Must-have skills: Python (Programming Language). Good-to-have skills: PySpark. Minimum 5 year(s) of experience is required. Educational Qualification: Mandatory 15 years full-time qualification. Summary: As a Data Platform Engineer, you will be responsible for assisting with the blueprint and design of the data platform components. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models, and utilizing Python and PySpark for data processing and analysis. Roles & Responsibilities: Assist with the blueprint and design of the data platform components. Collaborate with Integration Architects and Data Architects to ensure cohesive integration between systems and data models. Utilize Python and PySpark for data processing and analysis. Develop and maintain data pipelines and ETL processes. Troubleshoot and optimize data platform performance. Professional & Technical Skills: Must-have skills: proficiency in Python (Programming Language). Good-to-have skills: experience with PySpark. Experience in developing and maintaining data pipelines and ETL processes. Strong understanding of data modeling and database design principles. Experience with data warehousing and big data technologies. Familiarity with cloud-based data platforms such as AWS or Azure. Additional Information: The candidate should have a minimum of 5 years of experience in Python (Programming Language). The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering impactful data-driven solutions. This position is based at our Bengaluru office.

Posted 1 month ago

Apply

15.0 - 20.0 years

10 - 14 Lacs

Noida

Work from Office

Project Role: Application Lead. Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact. Must-have skills: AWS Big Data. Good-to-have skills: NA. Minimum 5 year(s) of experience is required. Educational Qualification: 15 years full-time education. Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure project milestones are met, facilitating discussions to address challenges, and guiding your team in implementing effective solutions. You will also engage in strategic planning sessions to align project goals with organizational objectives, ensuring that all stakeholders are informed and involved in the development process. Your role will be pivotal in driving innovation and efficiency within the application development lifecycle, fostering a collaborative environment that encourages creativity and problem-solving. Roles & Responsibilities: Expected to be an SME. Collaborate with and manage the team to perform. Responsible for team decisions. Engage with multiple teams and contribute to key decisions. Provide solutions to problems for their immediate team and across multiple teams. Facilitate knowledge-sharing sessions to enhance team capabilities. Monitor project progress and implement necessary adjustments to meet deadlines. Professional & Technical Skills: Must-have skills: proficiency in AWS Big Data. Strong understanding of data processing frameworks such as Apache Hadoop and Apache Spark. Experience with cloud services and architecture, particularly in AWS environments. Familiarity with data warehousing solutions and ETL processes. Ability to design and implement scalable data pipelines. Additional Information: The candidate should have a minimum of 5 years of experience in AWS Big Data. This position is based at our Noida office. A 15-year full-time education is required.

Posted 1 month ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Bengaluru

Work from Office

About the Team: When 5% of Indian households shop with us, it's important to build resilient systems to manage millions of orders every day. We've done this with zero downtime! Sounds impossible? Well, that's the kind of engineering muscle that has helped Meesho become the e-commerce giant that it is today. We value speed over perfection and see failures as opportunities to become better. We've taken steps to inculcate a strong Founder's Mindset across our engineering teams, making us grow and move fast. We place special emphasis on the continuous growth of each team member - and we do this with regular 1-1s and open communication. As a Database Engineer II, you will be part of a team of self-starters who thrive on teamwork and constructive feedback. We know how to party as hard as we work! If we aren't building unparalleled tech solutions, you can find us debating the plot points of our favorite books and games or even gossiping over chai. So, if a day filled with building impactful solutions with a fun team sounds appealing to you, join us. About the Role: As a Database Engineer II, you'll proactively establish and implement the best NoSQL database engineering practices. You'll have opportunities to work on different NoSQL technologies at large scale. You'll also work closely with other engineering teams and establish seamless collaborations within the organization. Being proficient in emerging technologies and the ability to work successfully with a team are key to success in this role. What you will do: Manage, maintain and monitor a multitude of relational/NoSQL database clusters, ensuring obligations to SLAs. Manage both in-house and SaaS solutions in the public cloud (or 3rd party). Diagnose, mitigate and communicate database-related issues to relevant stakeholders. Design and implement best practices for planning, provisioning, tuning, upgrading and decommissioning of database clusters. Understand the cost optimization aspects of such tools/software and implement cost control mechanisms along with continuous improvement. Advise and support product, engineering and operations teams. Maintain general backup/recovery/DR of data solutions. Work with the engineering and operations teams to automate new approaches for scalability, reliability and performance. Perform R&D on new features and innovative solutions. Participate in on-call rotations. What you will need: 5+ years' experience in provisioning and managing relational/NoSQL databases. Proficiency in two or more of: MySQL, PostgreSQL, Bigtable, Elasticsearch, MongoDB, Redis, ScyllaDB. Proficiency in the Python programming language. Experience with deployment orchestration, automation, and security configuration management (Jenkins, Terraform, Ansible). Hands-on experience with Amazon Web Services (AWS)/Google Cloud Platform (GCP). Comfortable working in Linux/Unix environments. Knowledge of the TCP/IP stack, load balancers, networking. Proven ability to drive projects to completion. A degree in computer science, software engineering, information technology or related fields will be an advantage.
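As one concrete flavor of the Python-driven database monitoring this role involves, here is a hypothetical replica-lag check using the redis-py client; the host, port and threshold are assumptions:

```python
import redis

# Hypothetical health check: alert when a replica lags too far behind its master.
MAX_LAG_BYTES = 1024 * 1024

r = redis.Redis(host="replica-1.internal", port=6379)
info = r.info(section="replication")

if info.get("role") == "slave":
    # Replicas report both their own offset and the master's known offset.
    lag = info["master_repl_offset"] - info["slave_repl_offset"]
    if lag > MAX_LAG_BYTES:
        print(f"ALERT: replica lagging by {lag} bytes")
```

In practice a check like this would feed an alerting pipeline rather than print, but the INFO-replication pattern is the core of it.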

Posted 1 month ago

Apply

5.0 - 10.0 years

22 - 27 Lacs

Pune, Bengaluru

Work from Office

Build ETL jobs using Fivetran and dbt for our internal projects and for customers that use various platforms like Azure, Salesforce and AWS technologies. Build out data lineage artifacts to ensure all current and future systems are properly documented. Required candidate profile: Experience with strong SQL query/development skills. Develop ETL routines that manipulate and transfer large volumes of data and perform quality checks. Experience in the healthcare industry with PHI/PII.

Posted 1 month ago

Apply

10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Architect and Senior Architect - Data Governance. Chennai, Bangalore, Hyderabad. Who we are: Tiger Analytics is a global leader in Data, AI, and Analytics, helping Fortune 500 companies solve their most complex business challenges. We offer full-stack AI and analytics services & solutions to empower businesses to achieve real outcomes and value at scale. We are on a mission to push the boundaries of what AI and analytics can do to help enterprises navigate uncertainty and move forward decisively. Our purpose is to provide certainty to shape a better tomorrow. Our team of 5000+ technologists and consultants are based in the US, Canada, the UK, India, Singapore and Australia, working closely with clients across CPG, Retail, Insurance, BFS, Manufacturing, Life Sciences, and Healthcare. Many of our team leaders rank in Top 10 and 40 Under 40 lists, exemplifying our dedication to innovation and excellence. In recognition of its exceptional workplace culture, industry impact, and leadership in AI strategy, Tiger Analytics has received multiple prestigious accolades in 2025, including the 3AI Pinnacle Award, India's Best Workplaces (2024-2025), WOW Workplaces of 2025, and the Leading GenAI Service Provider title at the GenAI Conclave 2025. The firm was also celebrated as a High-Performance Culture Curator at Darwin Unboxed 2025 and honored with the Minsky Award for Excellence in AI Strategy Consulting. Job Description: As a Data Governance Architect, your work is a combination of hands-on contribution, customer engagement, and technical team management. Overall, you'll design, architect, deploy, and maintain big data-based data governance solutions. More specifically, this will involve: technical management across the full life cycle of big data-based data governance projects, from requirement gathering and analysis to platform selection, design of the architecture, and deployment; scaling the solution in a cloud-based infrastructure; collaborating with business consultants, data scientists, engineers, and developers to develop data solutions; exploring new technologies for creative business problem-solving; and leading and mentoring a team of data governance engineers. What do we expect? 10+ years of technical experience, with 5+ years in the Hadoop ecosystem and 3+ years in data governance solutions. Hands-on experience with data governance solutions, with a good understanding of: data catalog; business glossary; business, technical and operational metadata; data quality; data profiling; data lineage. Expertise and qualification - hands-on experience with the following technologies: Hadoop ecosystem (HDFS, Hive, Sqoop, Kafka, ELK Stack, etc.); Spark, Scala, Python, and core/advanced Java; relevant AWS/GCP components required to build big data solutions. Good to know: Databricks, Snowflake. Familiarity with: designing/building large cloud-computing infrastructure solutions (in AWS/GCP); data lake design and implementation; the full life cycle of a Hadoop solution; distributed computing and parallel processing environments; HDFS administration, configuration management, monitoring, debugging, and performance tuning. You are important to us, let's stay connected! Every individual comes with a different set of skills and qualities, so even if you don't tick all the boxes for the role today, we urge you to apply, as there might be a suitable/unique role for you tomorrow. We are an equal-opportunity employer.
Our diverse and inclusive culture and values guide us to listen, trust, respect, and encourage people to grow the way they desire. Note: The designation will be commensurate with expertise and experience. Compensation packages are among the best in the industry. Additional benefits: health insurance (self & family), virtual wellness platform, car lease program and knowledge communities.
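Data profiling, one of the data-governance capabilities listed above, often starts with per-column null rates. A minimal PySpark sketch follows, assuming an active `spark` session and a hypothetical governed table:

```python
from pyspark.sql import functions as F

df = spark.table("warehouse.customers")   # hypothetical governed table

total = df.count() or 1                   # guard against an empty table
profile = df.select([
    (F.sum(F.col(c).isNull().cast("int")) / total).alias(c)
    for c in df.columns
])
# One row: null ratio per column - a basic input to data quality scoring.
profile.show(truncate=False)
```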

Posted 1 month ago

Apply

5.0 - 8.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Diverse Lynx is looking for a PySpark / Azure Databricks developer to join our dynamic team and embark on a rewarding career journey. Develop and maintain big data pipelines using PySpark. Integrate Azure Databricks for scalable data processing. Perform data transformation and optimization tasks. Collaborate with analysts and data scientists.

Posted 1 month ago

Apply

2.0 - 6.0 years

6 - 10 Lacs

Nagpur

Work from Office

Primine Software Private Limited is looking for a Big Data Engineer to join our dynamic team and embark on a rewarding career journey. Develop and maintain big data solutions. Collaborate with data teams and stakeholders. Conduct data analysis and processing. Ensure compliance with big data standards and best practices. Prepare and maintain big data documentation. Stay updated with big data trends and technologies.

Posted 1 month ago

Apply

3.0 - 8.0 years

5 - 10 Lacs

Noida, India

Work from Office

Responsibilities & Duties: Collaborate with the Sr. Data Governance Analyst to execute on data quality rules identified in the data catalog. Transform business data quality rules into SQL statements and integrate them into the data quality engine. Provide ongoing support and administration of a customized data quality engine. Partner with the Analytics team to implement a data quality scorecard and monitor progress of the company's data quality efforts and adherence to data quality rules, standards and policies. Collaborate with data stewards in researching data quality issues, identifying the root cause, understanding the business impact, and recommending corrective actions. Participate in data exercises including profiling, mapping, modeling, auditing, testing, etc. as necessary. Develop and implement data quality processes and standards and build cross-organizational awareness of best-practice data quality techniques. Facilitate change management, communication, training, and education activities as necessary. Strong, excellent communication skills. Qualification & Key Skills: 3+ years working in a data quality role (logistics industry preferred). Strong understanding of data quality best practices and proven experience increasing data quality. Technical skills required, including SQL. Intellectual curiosity and the ability to easily identify patterns and trends in data. Familiarity with data pipelines and data lakehouses (Databricks preferred). Experience designing and/or developing a data quality scorecard (Qlik and Sigma preferred). Knowledge of modern data quality solutions. Strong business and technical acumen with experience across data domains. Strong analytical skills, organizational skills, and attention to detail. Strong verbal and written communication skills. Self-motivated and comfortable with ambiguity. Proactively seeks opportunities to broaden and deepen knowledge base and proficiencies. Mandatory Competencies: Data Science - Data Analyst; Database - SQL; Data Science - Databricks; Data Analysis - Data Analysis; Beh - Communication and collaboration.
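To illustrate turning a business data quality rule into a SQL statement, here is a self-contained sketch using Python's built-in sqlite3; the table, rule and pass-rate calculation are hypothetical, and a production engine would run equivalent SQL on the lakehouse:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE shipments (id INTEGER, weight_kg REAL, dest_zip TEXT);
    INSERT INTO shipments VALUES
        (1, 12.5, '60601'), (2, -3.0, NULL), (3, 8.0, '94105');
""")

-- the rule below is expressed as plain SQL so the DQ engine can run it verbatim
failed = conn.execute("""
    SELECT COUNT(*) FROM shipments
    WHERE weight_kg <= 0 OR dest_zip IS NULL   -- business rule violations
""").fetchone()[0]
total = conn.execute("SELECT COUNT(*) FROM shipments").fetchone()[0]

print(f"rule pass rate: {(total - failed) / total:.0%}")   # feeds the scorecard
```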

Posted 1 month ago

Apply

6.0 - 9.0 years

5 - 9 Lacs

Hyderabad

Work from Office

We are looking for a highly skilled Data Engineer with 6 to 9 years of experience to join our team at BlackBaud, located in [location to be specified]. The ideal candidate will have a strong background in data engineering and excellent problem-solving skills. Roles and Responsibilities: Design, develop, and implement data pipelines and architectures to support business intelligence and analytics. Collaborate with cross-functional teams to identify and prioritize project requirements. Develop and maintain large-scale data systems, ensuring scalability, reliability, and performance. Troubleshoot and resolve complex technical issues related to data engineering projects. Participate in code reviews and contribute to the improvement of overall code quality. Stay up-to-date with industry trends and emerging technologies in data engineering. Job Requirements: Strong understanding of data modeling, database design, and data warehousing concepts. Experience with big data technologies such as Hadoop, Spark, and NoSQL databases. Excellent programming skills in languages like Java, Python, or Scala. Strong analytical and problem-solving skills, with attention to detail and the ability to work under pressure. Good communication and collaboration skills, with the ability to work effectively in a team environment. Ability to adapt to changing priorities and deadlines in a fast-paced IT Services & Consulting environment.

Posted 1 month ago

Apply

0 years

0 Lacs

India

On-site

Key Responsibilities: Lead and manage a backend/distributed systems team and third-party resources. Build and optimize Java, MapReduce, Hive, and Spark jobs. Work with a wide range of Hadoop tools: HDFS, Pig, Hive, HBase, Sqoop, Flume. Implement and manage real-time stream processing using Spark Streaming and Storm. Develop dimensional data models and perform advanced SQL tuning. Analyze source data integrity and ensure accurate data ingestion. Build dashboards and BI solutions using best practices. Collaborate with internal teams and vendors to prioritize and deliver data initiatives. Deploy, monitor, and audit big data models and workflows. Technical Skills: Strong hands-on experience with Hadoop (Cloudera), Hive, Pig, Impala, Spark. Programming in Java, Python, Scala, and scripting (Linux, Ruby, PHP). Experience with NoSQL and SQL databases: Cassandra, Postgres. Familiarity with cloud services (Azure preferred). Exposure to machine learning/data science tools is a plus.
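For the real-time stream processing responsibility above, here is a minimal Spark Structured Streaming sketch reading from Kafka; it assumes the spark-sql-kafka connector is on the classpath, and the broker and topic names are placeholders:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("stream-demo").getOrCreate()

# Continuous source: a Kafka topic of raw messages.
events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "clickstream")
          .load())

# Kafka values arrive as bytes; cast to string, then aggregate.
counts = (events.select(F.col("value").cast("string").alias("msg"))
                .groupBy("msg").count())

# Console sink for the sketch; a real job would write to a table or topic.
query = (counts.writeStream
         .outputMode("complete")
         .format("console")
         .start())
query.awaitTermination()
```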

Posted 1 month ago

Apply

3.0 - 7.0 years

5 - 9 Lacs

Pune, Bengaluru

Work from Office

Job Title: StreamSets ETL Developer, Associate. Location: Pune, India. Role Description: Currently DWS sources technology infrastructure, corporate functions systems [Finance, Risk, HR, Legal, Compliance, AFC, Audit, Corporate Services etc.] and other key services from DB. Project Proteus aims to strategically transform DWS into an Asset Management standalone operating platform; an ambitious and ground-breaking project that delivers separated DWS infrastructure and Corporate Functions in the cloud with essential new capabilities, further enhancing DWS's highly competitive and agile Asset Management capability. This role offers a unique opportunity to be part of a high-performing team implementing a strategic future-state technology landscape for all DWS Corporate Functions globally. We are seeking a highly skilled and motivated ETL developer (individual contributor) to join our integration team. The ETL developer will be responsible for developing, testing and maintaining robust and scalable ETL processes to support our data integration initiatives. This role requires a strong understanding of database, Unix and ETL concepts, excellent SQL skills and experience with ETL tools and databases. Your key responsibilities: This role will be primarily responsible for creating good-quality software using standard coding practices, with hands-on code development. Thoroughly test developed ETL solutions/pipelines. Review code of other team members. Take E2E accountability and ownership of work/projects and work with the right and robust engineering practices. Convert business requirements into technical design. Delivery, deployment, review, business interaction and maintaining environments. Additionally, the role will include other responsibilities, such as: collaborating across teams; sharing information and transferring knowledge and expertise to team members; working closely with stakeholders and other teams like Functional Analysis and Quality Assurance; working with BA and QA to troubleshoot and resolve reported bugs/issues on applications. Your skills and experience: Bachelor's degree from an accredited college or university with a concentration in Science or an IT-related discipline (or equivalent). Hands-on experience with StreamSets, SQL Server and Unix. Experience developing and optimizing ETL pipelines for data ingestion, manipulation and integration. Strong proficiency in SQL, including complex queries, stored procedures and functions. Solid understanding of relational database concepts. Familiarity with data modeling concepts (conceptual, logical, physical). Familiarity with HDFS, Kafka, Microservices, Splunk. Familiarity with cloud-based platforms (e.g. GCP, AWS). Experience with scripting languages (e.g. Bash, Groovy). Excellent knowledge of SQL. Experience delivering within an agile delivery framework. Experience with distributed version control tools (Git, GitHub, Bitbucket). Experience with Jenkins or pipeline-based modern CI/CD systems.

Posted 1 month ago

Apply