7.0 - 12.0 years
40 - 45 Lacs
Bengaluru
Hybrid
Role & responsibilities: Data Engineer with architect-level experience in ETL, AWS (Glue), PySpark, Python, etc.
Preferred candidate profile: Immediate joiners who can work on a contract basis.
If you are interested, please share your updated CV at pavan.teja@careernet.in
Posted 1 day ago
5.0 - 10.0 years
10 - 20 Lacs
Pune, Chennai, Bengaluru
Hybrid
Our client is a global IT services and consulting organization.
Exp: 5+ yrs
Skill: Apache Spark
Location: Bangalore, Hyderabad, Pune, Chennai, Coimbatore, Gr. Noida
- Excellent knowledge of Spark; the professional must have a thorough understanding of the Spark framework, performance tuning, etc.
- Excellent knowledge of, and at least 4+ years of hands-on experience in, Scala or PySpark
- Excellent knowledge of the Hadoop ecosystem; knowledge of Hive is mandatory
- Strong Unix and shell scripting skills
- Excellent interpersonal skills and, for experienced candidates, excellent leadership skills
- Good knowledge of any of the CSPs (Azure, AWS, or GCP) is mandatory; Azure certifications will be an additional plus
Posted 4 days ago
5.0 - 7.0 years
10 - 15 Lacs
Hyderabad
Work from Office
Role description: We're looking for a driven, organized team member to support the Digital & Analytics team on Talent transformation projects from both a systems and a process perspective. The role will primarily provide PMO support. The individual will need to demonstrate strong project management skills and be highly collaborative and detail oriented in order to coordinate meetings and track and update project plans, risks, and decision logs. This individual will also create project materials, support design sessions and user acceptance testing, and escalate project or system issues as needed.
Work you'll do: As the Talent PMO Support you will:
- Support the Digital & Analytics Manager on Talent transformation projects.
- Track and drive closure of action items and open decisions.
- Schedule follow-up calls, take notes, and distribute action items from discussions.
- Coordinate with Talent process owners and subject matter advisors to manage change requests, risks, actions, and decisions.
- Coordinate across Talent, Technology, and Consulting teams to track and escalate issues as appropriate.
- Update the Talent project plan items, the resource tracker, and the risks, actions, and decisions log as needed.
- Leverage the shared project team site and OneNote notebook to ensure structure and access to communications, materials, and documents for all project team members.
- Support testing, cut-over, training, and service rehearsal testing processes as needed.
- Collaborate with Consulting, Technology, and Talent team members to ensure project deliverables move forward.
Qualifications:
- Bachelor's degree and 5-7 years of relevant work experience
- Background and experience in project management support for implementing Talent processes, from ideation through deployment.
- Strong written/verbal executive communication and presentation skills; strong listening, facilitation, and influencing skills with audiences at all management and leadership levels.
- Works well in a dynamic, complex, client- and team-focused environment with minimal oversight and an agile mindset.
- Excited by the prospect of working in a developing, ambiguous, and challenging situation.
- Proficient Microsoft Office skills (e.g., PowerPoint, Excel, OneNote, Word, Teams)
Posted 5 days ago
6.0 - 11.0 years
8 - 18 Lacs
Hyderabad, Bengaluru, Mumbai (All Areas)
Work from Office
Role & responsibilities: Data Engineer with expertise in AWS, Databricks, and PySpark.
Posted 6 days ago
10.0 - 12.0 years
25 - 30 Lacs
Pune, Mumbai (All Areas)
Hybrid
Role & responsibilities:
- Design and implement state-of-the-art NLP models, including but not limited to text classification, semantic search, sentiment analysis, named entity recognition, and summary generation.
- Conduct data preprocessing and feature engineering to improve model accuracy and performance.
- Stay updated with the latest developments in NLP and ML, and integrate cutting-edge techniques into our solutions.
- Collaborate with cross-functional teams: work closely with data scientists, software engineers, and product managers to align NLP projects with business objectives.
- Deploy models into production environments and monitor their performance to ensure robustness and reliability.
- Maintain comprehensive documentation of processes, models, and experiments, and report findings to stakeholders.
- Implement and deliver high-quality software solutions and components for the Credit Risk monitoring platform.
- Leverage your expertise to mentor developers, review code, and ensure adherence to standards.
- Apply a broad range of software engineering practices, from analyzing user needs and developing new features to automated testing and deployment.
- Ensure the quality, security, reliability, and compliance of our solutions by applying our digital principles and implementing both functional and non-functional requirements.
- Build observability into our solutions, monitor production health, help resolve incidents, and remediate the root cause of risks and issues.
- Understand, represent, and advocate for client needs.
- Share knowledge and expertise with colleagues, help with hiring, and contribute regularly to our engineering culture and internal communities.
Expertise:
- Bachelor of Engineering or equivalent. Ideally 8-10 years of experience in NLP-based applications focused on the banking/finance sector; preference for experience in financial data extraction and classification.
- Interested in learning new technologies and practices, reusing strategic platforms and standards, evaluating options, and making decisions with long-term sustainability in mind.
- Proficiency in programming languages such as Python and Java; experience with frameworks like TensorFlow, PyTorch, or Keras.
- In-depth knowledge of NLP techniques and tools, including spaCy, NLTK, and Hugging Face.
- Experience with data handling and processing tools like Pandas, NumPy, and SQL.
- Prior experience in agentic AI, LLMs, prompt engineering, and generative AI is a plus.
- Backend development and microservices using Java Spring Boot, J2EE, and REST for projects with high SLAs for data availability and data quality.
- Experience building cloud-ready applications and migrating applications using Azure, with an understanding of Azure native cloud services, software design, and enterprise integration patterns.
- Knowledge of SQL and PL/SQL (Oracle) and UNIX: writing queries and packages, working with joins and partitions, reading execution plans, and tuning queries.
- A real passion for and experience of Agile working practices, with a strong desire to work with baked-in quality practices such as TDD, BDD, test automation, and DevOps principles.
- Experience in Azure development, including Databricks, Azure services, ADLS, etc.
- Experience using DevOps toolsets like GitLab and Jenkins.
Posted 6 days ago
7.0 - 12.0 years
22 - 30 Lacs
Pune, Bengaluru
Hybrid
Job Role & responsibilities:
- Understand operational needs by collaborating with specialized teams and supporting key business operations. This involves architecting, designing, building, and deploying data systems, pipelines, etc.
- Design and implement agile, scalable, and cost-efficient solutions on cloud data services.
- Lead a team of developers; run sprint planning and execution to ensure timely deliveries.
Technical skill, qualification & experience required:
- 7-10 years of experience in Azure cloud data engineering: Azure Databricks, Data Factory, PySpark, SQL, Python
- Hands-on data engineering experience with Azure Databricks, Data Factory, PySpark, and SQL
- Proficient in cloud services (Azure)
- Architect and implement ETL and data movement solutions; migrate data from traditional database systems to a cloud environment
- Strong hands-on experience working with streaming datasets
- Building complex notebooks in Databricks to implement business transformations
- Hands-on expertise in data refinement using PySpark and Spark SQL
- Familiarity with building datasets using Scala
- Familiarity with tools such as Jira and GitHub
- Experience leading agile scrum, sprint planning, and review sessions
- Good communication and interpersonal skills
- Comfortable working in a multidisciplinary team within a fast-paced environment
* Only immediate joiners will be preferred
Posted 1 week ago
5.0 - 10.0 years
30 - 45 Lacs
Hyderabad
Hybrid
Key Skills: Data Engineer, AI (Artificial Intelligence), SQL, Python, Java.
Roles and Responsibilities:
- Architect and implement modern, scalable data solutions on cloud platforms, specifically Google Cloud Platform (GCP).
- Collaborate with cross-functional teams to assess, redesign, and modernize legacy data systems.
- Design and develop efficient ETL pipelines for data extraction, transformation, and loading to support analytics and ML models.
- Ensure robust data governance by maintaining high standards of data security, integrity, and compliance with regulatory requirements.
- Monitor, troubleshoot, and optimize data workflows and pipelines for enhanced system performance and scalability.
- Provide hands-on technical expertise and guidance across data engineering projects, with a focus on cloud adoption and automation.
- Work in an agile environment and contribute to continuous delivery and improvement initiatives.
Experience Requirements:
- 5-10 years of experience designing and implementing data engineering solutions on GCP or other leading cloud platforms.
- Solid understanding of legacy data infrastructure, with demonstrated success in modernization and migration projects.
- Proficiency in programming languages such as Python and Java for building data solutions and automation scripts.
- Strong SQL skills, with experience working with both relational (SQL) and non-relational (NoSQL) databases.
- Familiarity with data warehousing concepts, tools, and practices.
- Hands-on experience with data integration tools and frameworks.
- Excellent analytical, problem-solving, and communication skills.
- Experience working in fast-paced agile environments and collaborating with multidisciplinary teams.
Education: B.Tech, M.Tech, B.Com, M.Com, MBA, or any PG.
Posted 1 week ago
5.0 - 10.0 years
16 - 25 Lacs
Pune, Bengaluru, Mumbai (All Areas)
Hybrid
Greetings from Accion Labs!
We are looking for a Sr. Data Engineer
Location: Bangalore, Mumbai, Pune, Hyderabad, Noida
Experience: 5+ years
Notice Period: Immediate joiners / 15 days
Any references would be appreciated!
Job description / skill set:
- Python/Spark/PySpark/Pandas
- SQL
- AWS EMR/Glue/S3/RDS/Redshift/Lambda/SQS/AWS Step Functions/EventBridge
Posted 1 week ago
5.0 - 10.0 years
16 - 31 Lacs
Pune, Bengaluru, Mumbai (All Areas)
Hybrid
Greetings from Accion Labs!
We are looking for a Sr. Data Engineer
Location: Bangalore, Mumbai, Pune, Hyderabad, Noida
Experience: 5+ years
Notice Period: Immediate joiners / 15 days
Any references would be appreciated!
Job description / skill set:
- Python/Spark/PySpark/Pandas
- SQL
- AWS EMR/Glue/S3/RDS/Redshift/Lambda/SQS/AWS Step Functions/EventBridge
- Real-time analytics
Posted 1 week ago
5.0 - 10.0 years
25 - 30 Lacs
Hyderabad, Pune
Work from Office
Python + PySpark (5 to 10 years); joining locations: Hyderabad and Pune.
Posted 1 week ago
1.0 - 6.0 years
2 - 7 Lacs
Mumbai, Navi Mumbai, Mumbai (All Areas)
Work from Office
Position Name: Data Engineer
Total Exp: 3-5 years
Notice Period: Immediate joiner
Work Location: Mumbai, Kandivali
Work Type: Work from Office
Job Description
Must have:
- Data Engineer with 3 to 5 years of experience
- Should be an individual contributor able to deliver the feature/story within the given time and to the expected quality
- Should be good with the Agile process
- Should be strong in programming and SQL queries
- Should be capable of learning new tools and technologies to scale in data engineering
- Should have good communication and client-interaction skills
Technical Skills (must have):
- Data engineering using Java/Python, Spark/PySpark, big data (Hadoop, Hive, Yarn, Oozie, etc.), cloud warehouse (Snowflake), cloud services (AWS EMR, S3, Lambda, RDS/Aurora)
- Unit testing frameworks: JUnit/Mockito/PowerMock
- Strong experience with SQL queries (MySQL/SQL Server/Oracle/Hadoop/Snowflake)
- Source control: GitHub
- Project management tool: VSTS
- Build management tool: Maven/Gradle
- CI/CD: Azure DevOps
Added advantage (good to have): shell scripting, Linux commands
Posted 1 week ago
8.0 - 12.0 years
15 - 27 Lacs
Mumbai, Pune, Bengaluru
Work from Office
Role & responsibilities:
Job Description: Primarily looking for a Data Engineer (AWS) with expertise in processing data pipelines using Databricks and PySpark SQL on cloud distributions like AWS. Must have: AWS, Databricks. Good to have: PySpark, Snowflake, Talend.
Requirements (candidates must have experience working on projects involving the following; other ideal qualifications are included):
• Primarily looking for a data engineer with expertise in processing data pipelines using Databricks Spark SQL on Hadoop distributions like AWS EMR, Databricks, Cloudera, etc.
• Should be very proficient in large-scale data operations using Databricks and overall very comfortable using Python
• Familiarity with AWS compute, storage, and IAM concepts
• Experience working with an S3 data lake as the storage tier
• Any ETL background (Talend, AWS Glue, etc.) is a plus but not required
• Cloud warehouse experience (Snowflake, etc.) is a huge plus
• Carefully evaluates alternative risks and solutions before taking action
• Optimizes the use of all available resources
• Develops solutions to meet business needs that reflect a clear understanding of the objectives, practices, and procedures of the corporation, department, and business unit
Skills:
• Hands-on experience with Databricks, Spark SQL, and the AWS cloud platform, especially S3, EMR, Databricks, Cloudera, etc.
• Experience with shell scripting
• Exceptionally strong analytical and problem-solving skills
• Relevant experience with ETL methods and with retrieving data from dimensional data models and data warehouses
• Strong experience with relational databases and data access methods, especially SQL
• Excellent collaboration and cross-functional leadership skills
• Excellent communication skills, both written and verbal
• Ability to manage multiple initiatives and priorities in a fast-paced, collaborative environment
• Ability to leverage data assets to respond to complex questions that require timely answers
• Working knowledge of migrating relational and dimensional databases to the AWS cloud platform
Mandatory skills: Apache Spark, Databricks, Java, Python, Scala, Spark SQL.
Note: Only immediate joiners or candidates serving their notice period. Interested candidates can apply.
Regards, HR Manager
Posted 1 week ago
5.0 - 10.0 years
0 Lacs
Hyderabad
Work from Office
Dear all, greetings of the day!
We have an opening with one of the top MNCs.
Experience: 5-15 years
Notice Period: Immediate to 45 days
Job Description:
- Design, build, and maintain data pipelines (ETL/ELT) using BigQuery, Python, and SQL
- Optimize data flow, automate processes, and scale infrastructure
- Develop and manage workflows in Airflow/Cloud Composer and Ascend (or similar ETL tools)
- Implement data quality checks and testing strategies
- Support CI/CD (DevSecOps) processes, conduct code reviews, and mentor junior engineers
- Collaborate with QA/business teams and troubleshoot issues across environments
- dbt for transformation
- Collibra for data quality
- Working with unstructured datasets
- Strong analytical and SQL expertise
Interested candidates can revert with an updated CV to sushma.b@technogenindia.com
Posted 1 week ago
7.0 - 12.0 years
25 - 40 Lacs
Pune
Work from Office
Role: Consultant/Sr. Consultant
Mandatory skills: GCP & Hadoop
Location: Pune
Budget: 7-10 years - up to 30 LPA; 10-12 years - up to 40 LPA (non-negotiable)
Interested candidates can share resumes at: "Kashif@d2nsolutions.com"
Posted 1 week ago
7.0 - 12.0 years
25 - 40 Lacs
Pune
Work from Office
Experience as a Data Analyst with GCP & Hadoop is mandatory.
Posted 1 week ago
5.0 - 10.0 years
20 - 35 Lacs
Chennai
Work from Office
- 5+ years of experience in ETL development with strong proficiency in Informatica BDM.
- Hands-on experience with big data platforms like Hadoop, Hive, HDFS, and Spark.
- Proficiency in SQL and working knowledge of Unix/Linux shell scripting.
- Experience in performance tuning of ETL jobs in a big data environment.
- Familiarity with data modeling concepts and working with large datasets.
- Strong problem-solving skills and attention to detail.
- Experience with job scheduling tools (e.g., Autosys, Control-M) is a plus.
Posted 1 week ago
4.0 - 7.0 years
6 - 9 Lacs
Pune
Work from Office
Perydot is looking for a Data Engineer to join our dynamic team and embark on a rewarding career journey.
- Liaising with coworkers and clients to elucidate the requirements for each task.
- Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed.
- Reformulating existing frameworks to optimize their functioning.
- Testing such structures to ensure that they are fit for use.
- Preparing raw data for manipulation by data scientists.
- Detecting and correcting errors in your work.
- Ensuring that your work remains backed up and readily accessible to relevant coworkers.
- Remaining up to date with industry standards and technological advancements that will improve the quality of your outputs.
Posted 1 week ago
6.0 - 10.0 years
20 - 30 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Role & responsibilities: As a Senior Data Engineer, you will work to solve organizational data management problems that enable a data-driven organization, seamlessly switching between the roles of individual contributor, team member, and data modeling lead as each project demands to define, design, and deliver actionable insights.
On a typical day, you might:
- Engage clients and understand the business requirements in order to translate them into data models.
- Create and maintain a Logical Data Model (LDM) and Physical Data Model (PDM) by applying best practices to provide business insights.
- Contribute to data modeling accelerators.
- Create and maintain the source-to-target data mapping document, including documentation of all entities, attributes, data relationships, primary and foreign key structures, allowed values, codes, business rules, glossary terms, etc.
- Gather and publish data dictionaries.
- Maintain data models, capture data models from existing databases, and record descriptive information.
- Use a data modeling tool to create appropriate data models.
- Contribute to building data warehouses and data marts (on cloud) while performing data profiling and quality analysis.
- Use version control to maintain versions of data models.
- Collaborate with data engineers to design and develop data extraction and integration code modules.
- Partner with data engineers to strategize ingestion logic and consumption patterns.
Preferred candidate profile:
- 6+ years of experience in the data space.
- Decent SQL skills; significant experience in one or more RDBMS (Oracle, DB2, and SQL Server).
- Real-time experience working with OLAP and OLTP database models (dimensional models).
- Good understanding of star schema, snowflake schema, and Data Vault modeling, as well as any ETL tool, data governance, and data quality.
- An eye for analyzing data and comfort following agile methodology.
- An understanding of any of the cloud services is preferred (Azure, AWS & GCP).
You are important to us, let's stay connected! Every individual comes with a different set of skills and qualities, so even if you don't tick all the boxes for the role today, we urge you to apply, as there might be a suitable/unique role for you tomorrow. We are an equal-opportunity employer. Our diverse and inclusive culture and values guide us to listen, trust, respect, and encourage people to grow the way they desire.
Note: The designation will be commensurate with expertise and experience. Compensation packages are among the best in the industry.
Posted 1 week ago
8.0 - 12.0 years
20 - 35 Lacs
Kolkata, Pune, Chennai
Work from Office
Senior Data Engineer
Job description:
- Demonstrate hands-on expertise in Ab Initio GDE, Metadata Hub, Co>Operating System, and Control Center.
- Must demonstrate high proficiency in SQL.
- Develop and implement solutions for metadata management and data quality assurance.
- Able to identify, analyze, and resolve technical issues related to the Ab Initio solution.
- Perform unit testing and ensure the quality of developed solutions.
- Provide Level 3 support and troubleshoot issues with Ab Initio applications deployed in production.
- Working knowledge of Azure Databricks and Python will be an advantage.
- Any past experience working on the SAP HANA data layer would be good to have.
Other traits:
- Proficient communication skills required, as she/he will be directly engaging with client teams.
- Technical leadership; open to learning and adopting a complex landscape of data technologies in a new environment.
Posted 1 week ago
5.0 - 7.0 years
15 - 25 Lacs
Pune, Bengaluru
Work from Office
Job Role & responsibilities:
- Responsible for architecting, designing, building, and deploying data systems, pipelines, etc.
- Responsible for designing and implementing agile, scalable, and cost-efficient solutions on cloud data services.
- Responsible for design, implementation, development, and migration: migrate data from traditional database systems to a cloud environment, and architect and implement ETL and data movement solutions.
Technical skill, qualification & experience required:
- 4.5-7 years of experience in data engineering: Azure cloud data engineering, Azure Databricks, Data Factory, PySpark, SQL, Python
- Hands-on experience in Azure Databricks, Data Factory, PySpark, and SQL
- Proficient in cloud services (Azure)
- Strong hands-on experience working with streaming datasets
- Hands-on expertise in data refinement using PySpark and Spark SQL
- Familiarity with building datasets using Scala
- Familiarity with tools such as Jira and GitHub
- Experience leading agile scrum, sprint planning, and review sessions
- Good communication and interpersonal skills
- Comfortable working in a multidisciplinary team within a fast-paced environment
* Only immediate joiners will be preferred
Posted 1 week ago
4.0 - 9.0 years
3 - 8 Lacs
Pune
Work from Office
Design, develop, and maintain ETL pipelines using Informatica PowerCenter or Talend to extract, transform, and load data into EDW systems and the data lake. Optimize and troubleshoot complex SQL queries and ETL jobs to ensure efficient data processing and high performance. Technologies: SQL, Informatica PowerCenter, Talend, Big Data, Hive
Posted 1 week ago
7.0 - 12.0 years
15 - 30 Lacs
Hyderabad
Remote
Lead Data Engineer with Health Care Domain
Role & responsibilities
Position: Lead Data Engineer
Experience: 7+ years
Location: Hyderabad | Chennai | Remote
SUMMARY: The Data Engineer will be responsible for ETL and documentation in building data warehouse and analytics capabilities. Additionally, they will maintain existing systems/processes and develop new features, along with reviewing, presenting, and implementing performance improvements.
Duties and Responsibilities:
- Build ETL (extract, transform, and load) jobs using Fivetran and dbt for our internal projects and for customers that use various platforms like Azure, Salesforce, and AWS technologies.
- Monitor active ETL jobs in production.
- Build out data lineage artifacts to ensure all current and future systems are properly documented.
- Assist with the build-out of design/mapping documentation to ensure development is clear and testable for QA and UAT purposes.
- Assess current and future data transformation needs to recommend, develop, and train on new data integration tool technologies.
- Discover efficiencies in shared data processes and batch schedules to help ensure no redundancy and smooth operations.
- Assist the Data Quality Analyst in implementing checks and balances across all jobs to ensure data quality throughout the entire environment for current and future batch jobs.
- Hands-on experience in developing and implementing large-scale data warehouses, business intelligence, and MDM solutions, including data lakes/data vaults.
Required Skills:
- This job has no supervisory responsibilities.
- Strong experience with Snowflake and Azure Data Factory (ADF) is required.
- Bachelor's degree in Computer Science, Math, Software Engineering, Computer Engineering, or a related field, and 6+ years of experience in business analytics, data science, software development, data modeling, or data engineering work.
- 5+ years of experience with strong SQL query/development skills.
- Develop ETL routines that manipulate and transfer large volumes of data and perform quality checks.
- Hands-on experience with ETL tools (e.g., Informatica, Talend, dbt, Azure Data Factory).
- Experience working in the healthcare industry with PHI/PII.
- Creative, lateral, and critical thinker; excellent communicator; well-developed interpersonal skills; good at prioritizing tasks and time management.
- Ability to describe, create, and implement new solutions.
- Experience with related or complementary open source software platforms and languages (e.g., Java, Linux, Apache, Perl/Python/PHP, Chef).
- Knowledge of / hands-on experience with BI tools and reporting software (e.g., Cognos, Power BI, Tableau).
- Big data stack (e.g., Snowflake (Snowpark), Spark, MapReduce, Hadoop, Sqoop, Pig, HBase, Hive, Flume).
Posted 1 week ago
6.0 - 9.0 years
15 - 20 Lacs
Chennai
Work from Office
Skills Required:
- Minimum 6+ years in data engineering / data analytics platforms.
- Strong hands-on design and engineering background in AWS, across a wide range of AWS services, with the ability to demonstrate work on large engagements.
- Involvement in requirements gathering and transforming requirements into functional and technical designs.
- Maintain and optimize the data infrastructure required for accurate extraction, transformation, and loading of data from a wide variety of data sources.
- Design, build, and maintain batch or real-time data pipelines in production.
- Develop ETL/ELT (extract, transform, load) data pipeline processes to help extract and manipulate data from multiple sources.
- Automate data workflows such as data ingestion, aggregation, and ETL processing; good experience with different types of data ingestion techniques: file-based, API-based, and streaming data sources (OLTP, OLAP, ODS, etc.), as well as heterogeneous databases.
- Prepare raw data in data warehouses into consumable datasets for both technical and non-technical stakeholders.
- Strong experience with, and implementation of, data lake, data warehousing, and data lakehouse architectures.
- Ensure data accuracy, integrity, privacy, security, and compliance through quality control procedures.
- Monitor data systems performance and implement optimization strategies.
- Leverage data controls to maintain data privacy, security, compliance, and quality for allocated areas of ownership.
- Experience with AWS tools (AWS S3, EC2, Athena, Redshift, Glue, EMR, Lambda, RDS, Kinesis, DynamoDB, QuickSight, etc.).
- Strong experience with Python, SQL, PySpark, Scala, shell scripting, etc.
- Strong experience with workflow management and orchestration tools (Airflow, etc.).
- Decent experience with, and understanding of, data manipulation/wrangling techniques.
- Demonstrable knowledge of data engineering best practices (coding practices for DS, unit testing, version control, code review).
- Big data ecosystems: Cloudera/Hortonworks, AWS EMR, etc.
- Snowflake data warehouse/platform.
- Streaming technologies and processing engines: Kinesis, Kafka, Pub/Sub, and Spark Streaming.
- Experience working with CI/CD technologies: Git, Jenkins, Spinnaker, Ansible, etc.
- Experience building and deploying solutions on AWS Cloud.
- Good experience with NoSQL databases like DynamoDB, Redis, Cassandra, MongoDB, or Neo4j.
- Experience working with large data sets and distributed computing (e.g., Hive/Hadoop/Spark/Presto/MapReduce).
- Good to have: working knowledge of data visualization tools like Tableau, Amazon QuickSight, Power BI, QlikView, etc.
- Experience in the insurance domain preferred.
Posted 2 weeks ago
3.0 - 5.0 years
4 - 9 Lacs
Chennai
Work from Office
Skills Required:
- Minimum 3+ years in data engineering / data analytics platforms.
- Strong hands-on design and engineering background in AWS, across a wide range of AWS services, with the ability to demonstrate work on large engagements.
- Involvement in requirements gathering and transforming requirements into functional and technical designs.
- Maintain and optimize the data infrastructure required for accurate extraction, transformation, and loading of data from a wide variety of data sources.
- Design, build, and maintain batch or real-time data pipelines in production.
- Develop ETL/ELT (extract, transform, load) data pipeline processes to help extract and manipulate data from multiple sources.
- Automate data workflows such as data ingestion, aggregation, and ETL processing; good experience with different types of data ingestion techniques: file-based, API-based, and streaming data sources (OLTP, OLAP, ODS, etc.), as well as heterogeneous databases.
- Prepare raw data in data warehouses into consumable datasets for both technical and non-technical stakeholders.
- Strong experience with, and implementation of, data lake, data warehousing, and data lakehouse architectures.
- Ensure data accuracy, integrity, privacy, security, and compliance through quality control procedures.
- Monitor data systems performance and implement optimization strategies.
- Leverage data controls to maintain data privacy, security, compliance, and quality for allocated areas of ownership.
- Experience with AWS tools (AWS S3, EC2, Athena, Redshift, Glue, EMR, Lambda, RDS, Kinesis, DynamoDB, QuickSight, etc.).
- Strong experience with Python, SQL, PySpark, Scala, shell scripting, etc.
- Strong experience with workflow management and orchestration tools (Airflow, etc.).
- Decent experience with, and understanding of, data manipulation/wrangling techniques.
- Demonstrable knowledge of data engineering best practices (coding practices for DS, unit testing, version control, code review).
- Big data ecosystems: Cloudera/Hortonworks, AWS EMR, etc.
- Snowflake data warehouse/platform.
- Streaming technologies and processing engines: Kinesis, Kafka, Pub/Sub, and Spark Streaming.
- Experience working with CI/CD technologies: Git, Jenkins, Spinnaker, Ansible, etc.
- Experience building and deploying solutions on AWS Cloud.
- Good experience with NoSQL databases like DynamoDB, Redis, Cassandra, MongoDB, or Neo4j.
- Experience working with large data sets and distributed computing (e.g., Hive/Hadoop/Spark/Presto/MapReduce).
- Good to have: working knowledge of data visualization tools like Tableau, Amazon QuickSight, Power BI, QlikView, etc.
- Experience in the insurance domain preferred.
Posted 2 weeks ago
8.0 - 13.0 years
15 - 25 Lacs
Hyderabad, Bengaluru
Hybrid
Looking for a Snowflake developer for a US client. The candidate should be strong with Snowflake and dbt, and should be able to do impact analysis on the current ETLs (Informatica/DataStage) and provide solutions based on the analysis. Exp: 7-12 yrs
Posted 2 weeks ago