
1647 ADF Jobs - Page 16

Set up a job alert
JobPe aggregates listings for easy access; you apply directly on the original job portal.

7.0 - 10.0 years

15 - 30 Lacs

Pune, Chennai

Work from Office

Exp: 7-10 Yrs | SSIS, ETL, SQL, Azure Synapse, ADF | Pune, Chennai

Posted 2 weeks ago

Apply

2.0 - 3.0 years

0 Lacs

Kochi, Kerala, India

On-site

Job Title: Data Engineer Sr. Analyst, ACS Song
Management Level: Level 10 - Senior Analyst
Location: Kochi, Coimbatore, Trivandrum
Must-have skills: Python/Scala, PySpark/PyTorch
Good-to-have skills: Redshift

Job Summary
You'll capture user requirements and translate them into business and digitally enabled solutions across a range of industries.

Roles and Responsibilities
Designing, developing, optimizing, and maintaining data pipelines that adhere to ETL principles and business goals
Solving complex data problems to deliver insights that help the business achieve its goals
Sourcing data, structured and unstructured, from various touchpoints, and formatting and organizing it into an analyzable format
Creating data products for analytics team members to improve productivity
Calling AI services such as vision and translation to generate outcomes that can be used further along the pipeline
Fostering a culture of sharing, re-use, design, and operational efficiency of data and analytical solutions
Preparing data to create a unified database and building tracking solutions that ensure data quality
Creating production-grade analytical assets deployed using the guiding principles of CI/CD

Professional and Technical Skills
Expert in at least two of Python, Scala, PySpark, PyTorch, JavaScript
Extensive experience in data analysis (big-data Apache Spark environments), data libraries (e.g., Pandas, SciPy, TensorFlow, Keras), and SQL, with 2-3 years of hands-on experience in these technologies
Experience in one of the many BI tools, such as Tableau, Power BI, or Looker
Good working knowledge of key concepts in data analytics, such as dimensional modeling, ETL, reporting/dashboarding, data governance, dealing with structured and unstructured data, and the corresponding infrastructure needs
Extensive work in Microsoft Azure (ADF, Function Apps, ADLS, Azure SQL), AWS (Lambda, Glue, S3), Databricks analytical platforms/tools, and the Snowflake cloud data warehouse

Additional Information
Experience working in cloud data warehouses such as Redshift or Synapse
Certification in one of the following or equivalent: AWS Certified Data Analytics - Specialty; Microsoft Certified: Azure Data Scientist Associate; SnowPro Core / Data Engineer; Databricks Data Engineering

About Our Company | Accenture

Experience: 3.5-5 years of experience is required
Educational Qualification: Graduation
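
For context, a minimal PySpark sketch of the kind of ETL pipeline this role describes; the paths, column names, and cleansing rules are hypothetical illustrations, not part of the posting:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw CSV files from a landing zone (path is a placeholder)
raw = spark.read.option("header", True).csv(
    "abfss://landing@account.dfs.core.windows.net/orders/"
)

# Transform: basic typing and cleansing; the rules here are illustrative only
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount") > 0)
)

# Load: write a partitioned, analyzable dataset for downstream analytics teams
clean.write.mode("overwrite").partitionBy("order_date").parquet(
    "abfss://curated@account.dfs.core.windows.net/orders/"
)
```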

Posted 2 weeks ago

Apply

16.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities
Design and develop applications and services running on Azure, with a strong emphasis on Azure Databricks, ensuring optimal performance, scalability, and security
Build and maintain data pipelines using Azure Databricks and other Azure data integration tools
Write, read, and debug Spark, Scala, and Python code to process and analyze large datasets
Write extensive queries in SQL and Snowflake
Implement security and access control measures, and regularly audit the Azure platform and infrastructure to ensure compliance
Create, understand, and validate the design and estimated effort for a given module/task, and be able to justify it
Possess solid troubleshooting skills and troubleshoot issues across different technologies and environments
Implement and adhere to best engineering practices such as design, unit testing, functional-testing automation, continuous integration, and delivery
Maintain code quality by writing clean, maintainable, and testable code
Monitor performance and optimize resources to ensure cost-effectiveness and high availability
Define and document best practices and strategies regarding application deployment and infrastructure maintenance
Provide technical support and consultation for infrastructure questions
Help develop, manage, and monitor continuous integration and delivery systems
Take accountability and ownership of features and teamwork
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications
B.Tech or MCA (16+ years of formal education)
Overall 7+ years of experience
5+ years of experience writing advanced-level SQL
3+ years of experience in Azure (ADF), Databricks, and DevOps
3+ years of experience architecting, designing, developing, and implementing cloud solutions on Azure
2+ years of experience writing, reading, and debugging Spark, Scala, and Python code
Proficiency in programming languages and scripting tools
Understanding of cloud data storage and database technologies such as SQL and NoSQL
Familiarity with DevOps practices and tools, such as continuous integration and continuous deployment (CI/CD) and Terraform
Proven ability to collaborate with multidisciplinary teams of business analysts, developers, data scientists, and subject-matter experts
Proven proactive approach to spotting problems, areas for improvement, and performance bottlenecks
Proven excellent communication, writing, and presentation skills
Experience interacting with international customers to gather requirements and convert them into solutions using relevant skills

Preferred Qualifications
Experience and skills with Snowflake
Knowledge of AI/ML or LLMs (GenAI)
Knowledge of the US healthcare domain and experience with healthcare data

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.
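
As one concrete slice of the "advanced SQL against Snowflake" requirement, here is a minimal sketch using the snowflake-connector-python package; the account, credentials, table, and query are placeholders:

```python
import snowflake.connector

# Connection values are placeholders; in practice they come from a vault/Key Vault
conn = snowflake.connector.connect(
    account="my_account",
    user="etl_user",
    password="***",
    warehouse="ANALYTICS_WH",
    database="CLAIMS",
    schema="PUBLIC",
)

try:
    cur = conn.cursor()
    # Parameter binding (%s) avoids SQL injection and keeps query plans reusable
    cur.execute(
        "SELECT member_id, SUM(paid_amount) AS total_paid "
        "FROM claims WHERE service_date >= %s GROUP BY member_id",
        ("2024-01-01",),
    )
    for member_id, total_paid in cur.fetchall():
        print(member_id, total_paid)
finally:
    conn.close()
```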

Posted 2 weeks ago

Apply

0 years

0 Lacs

Greater Kolkata Area

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Manager

Job Description & Summary
At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities:
· Requirement gathering and analysis
· Design of data architecture and data model to ingest data
· Experience with different databases such as Synapse, SQL DB, Snowflake, etc.
· Design and implement data pipelines using Azure Data Factory, Databricks, Synapse
· Create and manage Azure SQL Data Warehouses and Azure Cosmos DB databases
· Extract, transform, and load (ETL) data from various sources into Azure Data Lake Storage
· Implement data security and governance measures
· Monitor and optimize data pipelines for performance and efficiency
· Troubleshoot and resolve data engineering issues
· Hands-on experience with Azure Functions and other components such as real-time streaming
· Oversee Azure billing processes, conducting analyses to ensure cost-effectiveness and efficiency in data operations
· Provide optimized solutions for any problem related to data engineering
· Ability to work with a variety of sources: relational databases, APIs, file systems, real-time streams, CDC, etc.
· Strong knowledge of Databricks and Delta tables

Mandatory skill sets: SQL, ADF, ADLS, Synapse, PySpark, Databricks, data modelling
Preferred skill sets: PySpark, Databricks
Years of experience required: 7-10 yrs
Education qualification: B.Tech/MCA and MBA
Degrees/Field of Study required: Bachelor of Engineering, Master of Business Administration
Required Skills: Structured Query Language (SQL)
Optional Skills: Accepting Feedback, Active Listening, Analytical Thinking, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Coaching and Feedback, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Inclusion {+ 21 more}
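
A small PySpark sketch of the "Databricks, Delta tables" item above; the table, columns, and paths are invented for illustration:

```python
from pyspark.sql import SparkSession, functions as F

# On Databricks a SparkSession already exists as `spark`; builder shown for clarity
spark = SparkSession.builder.appName("delta-demo").getOrCreate()

# Ingest a day's worth of raw events (source path is a placeholder)
events = spark.read.json("abfss://raw@lake.dfs.core.windows.net/events/2024-06-01/")

# Write as a managed Delta table; Delta adds ACID transactions and time travel
(events
 .withColumn("ingest_ts", F.current_timestamp())
 .write.format("delta")
 .mode("append")
 .saveAsTable("bronze.events"))

# Delta tables can then be maintained in place, e.g. compacting small files
# (OPTIMIZE is available on Databricks runtimes)
spark.sql("OPTIMIZE bronze.events")
```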

Posted 2 weeks ago

Apply

0 years

0 Lacs

Greater Kolkata Area

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Manager

Job Description & Summary
At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities:
· Requirement gathering and analysis
· Design of data architecture and data model to ingest data
· Experience with different databases such as Synapse, SQL DB, Snowflake, etc.
· Design and implement data pipelines using Azure Data Factory, Databricks, Synapse
· Create and manage Azure SQL Data Warehouses and Azure Cosmos DB databases
· Extract, transform, and load (ETL) data from various sources into Azure Data Lake Storage
· Implement data security and governance measures
· Monitor and optimize data pipelines for performance and efficiency
· Troubleshoot and resolve data engineering issues
· Hands-on experience with Azure Functions and other components such as real-time streaming
· Oversee Azure billing processes, conducting analyses to ensure cost-effectiveness and efficiency in data operations
· Provide optimized solutions for any problem related to data engineering
· Ability to work with a variety of sources: relational databases, APIs, file systems, real-time streams, CDC, etc.
· Strong knowledge of Databricks and Delta tables

Mandatory skill sets: SQL, ADF, ADLS, Synapse, PySpark, Databricks, data modelling
Preferred skill sets: PySpark, Databricks
Years of experience required: 7-10 yrs
Education qualification: B.Tech/MCA and MBA
Degrees/Field of Study required: Master of Business Administration, Bachelor of Engineering
Required Skills: Structured Query Language (SQL)
Optional Skills: Accepting Feedback, Active Listening, Analytical Thinking, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Coaching and Feedback, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Inclusion {+ 21 more}

Posted 2 weeks ago

Apply

5.0 - 8.0 years

0 Lacs

India

Remote

Position: Oracle Cloud Technical
Location: Gurugram, Kolkata, Mumbai, Chennai, Hyderabad, Bangalore, Pune (Hybrid: 2 days on-site, 3 days remote)
Duration: Permanent

Role Description
Candidate should have at least 5-8 years of experience
Should have one or more project implementation experiences on Cloud ERP
Expertise in web services: developing and consuming REST- and SOAP-based services; consuming Oracle Cloud services including ERP Integration Service, External Report Service, HCM Data Loader, Generic SOAP Port, and other common Cloud services; web service security
Should have expertise in one of the following areas: customization and integrations using PaaS technologies such as JCS, JCS-SX, ADF, OAF, OAIC, OIC, ICS, etc.
Should have expertise in data conversions using FBDI Templates and Rapid Implementation Sheets, with the ability to reconcile the data and correct it using ADFdi
Should have expertise in developing BIP Reports, OTBI Reports, SmartView Reports and FRS Reports
Should have expertise in creating and scheduling Oracle Cloud ESS Jobs and BIP Jobs
Should have experience with integration patterns in the Cloud
Should have exposure to the onsite/offshore model in order to gather requirements, design the solution, and create technical specifications for individual requirements
Should know about cloud configuration and setup, and about Application Composer and Page Composer to build UI extensions
Should know about Oracle SaaS security, including privileges, roles and data access
Should know how to migrate SaaS configuration (DFFs, Lookups and other related functional setups)
Should know best practices for migrating PaaS solutions, including DBCS PL/SQL code, ICS integrations, SOA composite deployments and others
Knowledge of ADF, OAF, MAF and Oracle EBS on-premise will be an added advantage
Should possess excellent verbal and written communication skills
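
To make the "consuming REST services" bullet concrete, here is a heavily hedged Python sketch calling an Oracle Fusion REST resource with basic auth; the host and the resource path are assumptions for illustration, so consult the Oracle Cloud documentation for the exact endpoints your environment exposes:

```python
import requests

# Host and resource path are hypothetical; real Fusion REST paths vary by release
BASE = "https://myinstance.fa.ocs.oraclecloud.com"
RESOURCE = "/fscmRestApi/resources/latest/erpintegrations"  # assumed resource name

resp = requests.get(
    BASE + RESOURCE,
    auth=("integration.user", "***"),      # basic auth; OAuth is also common
    headers={"Accept": "application/json"},
    timeout=30,
)
resp.raise_for_status()

# Fusion REST collections typically return rows under an "items" key
for item in resp.json().get("items", []):
    print(item)
```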

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

India

On-site

We are seeking an experienced Data Migration Lead with a strong background in Microsoft Dynamics 365 Finance & Operations (D365 F&O). The ideal candidate should have 4-6 years of experience in data migration, ETL processes, and Microsoft Azure Data Factory (ADF). The role requires proficiency in X++ programming, working with data entities, and writing complex MS SQL queries. The candidate will be responsible for leading data migration efforts, ensuring data integrity, and managing the end-to-end data migration process for Dynamics 365 F&O implementations.

Requirements
Lead data migration projects for Dynamics 365 F&O implementations, including the design, development, and execution of data migration strategies
Collaborate with business/client and technical teams to understand data requirements and ensure accurate data mapping, transformation, and migration
Develop and implement data migration scripts and ETL processes using ADF and other relevant tools
Utilize X++ in relation to data entities and data management frameworks
Create and optimize MS SQL queries to extract, transform, and load data efficiently
Ensure data quality, integrity, and consistency throughout the migration process
Lead a team of data migration specialists, providing guidance, mentorship, and technical support
Collaborate with cross-functional teams, including functional consultants, developers, and business and external stakeholders, to ensure successful project delivery
Monitor and report on data migration progress, identifying and mitigating risks and issues as they arise

Benefits
Health insurance
Food coupons
Gym and telephone/internet bill reimbursement
Leave Travel Allowance (can be claimed only twice in a block of four calendar years)
PF
NPS (optional)
Great work culture
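
As an illustration of the data-integrity checks such a migration lead would run, here is a hedged sketch comparing row counts between a legacy source and a D365 staging table over ODBC; the DSNs, entity names, and table mapping are invented:

```python
import pyodbc

# Connection strings are placeholders for the legacy and target SQL Server instances
src = pyodbc.connect("DSN=LegacyERP;UID=reader;PWD=***")
tgt = pyodbc.connect("DSN=D365Staging;UID=reader;PWD=***")

def row_count(conn, table):
    # COUNT(*) per table; for very large tables a checksum per key range scales better
    cur = conn.cursor()
    cur.execute(f"SELECT COUNT(*) FROM {table}")
    return cur.fetchone()[0]

# Entity-to-staging-table mapping is hypothetical
checks = {"CustomersV3": ("dbo.Customers", "dbo.CustCustomerV3Staging")}

for entity, (src_table, tgt_table) in checks.items():
    s, t = row_count(src, src_table), row_count(tgt, tgt_table)
    status = "OK" if s == t else f"MISMATCH ({s} vs {t})"
    print(f"{entity}: {status}")
```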

Posted 2 weeks ago

Apply

3.0 years

5 - 40 Lacs

Gurugram, Haryana, India

On-site

Location: Bangalore, Pune, Chennai, Kolkata, Gurugram
Experience: 5-15 Years
Work Mode: Hybrid
Mandatory Skills: Snowflake, Azure Data Factory, PySpark, Databricks, Snowpipe
Good to Have: SnowPro Certification

Primary Roles and Responsibilities
Developing Modern Data Warehouse solutions using Snowflake, Databricks and ADF
Providing forward-thinking solutions in the data engineering and analytics space
Collaborating with DW/BI leads to understand new ETL pipeline development requirements
Triaging issues to find gaps in existing pipelines and fixing them
Working with the business to understand reporting-layer needs and developing a data model to fulfil them
Helping junior team members resolve issues and technical challenges
Driving technical discussions with client architects and team members
Orchestrating data pipelines in the scheduler via Airflow

Skills and Qualifications
Bachelor's and/or master's degree in computer science or equivalent experience
Must have 6+ years of total IT experience and 3+ years of experience in data warehouse/ETL projects
Expertise in Snowflake security, Snowflake SQL, and designing/implementing other Snowflake objects
Hands-on experience with Snowflake utilities: SnowSQL, Snowpipe, Snowsight and Snowflake connectors
Deep understanding of star and snowflake dimensional modeling
Strong knowledge of data management principles
Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture
Hands-on experience in SQL and Spark (PySpark)
Experience building ETL/data warehouse transformation processes
Experience with open-source non-relational/NoSQL data repositories (e.g., MongoDB, Cassandra, Neo4j)
Experience working with structured and unstructured data, including imaging and geospatial data
Experience working in a DevOps environment with tools such as Terraform, CircleCI, Git
Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, troubleshooting and query optimization
Databricks Certified Data Engineer Associate/Professional certification (desirable)
Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects
Experience working in Agile methodology
Strong verbal and written communication skills
Strong analytical and problem-solving skills with high attention to detail
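
For a flavour of the Snowflake-plus-Databricks stack named here, a hedged sketch writing a Spark DataFrame to Snowflake via the Spark-Snowflake connector; all connection values are placeholders, and the connector library is assumed to be attached to the cluster:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("spark-to-snowflake").getOrCreate()

# Connection values are placeholders; in practice use secret scopes, not literals
sf_options = {
    "sfUrl": "myaccount.snowflakecomputing.com",
    "sfUser": "etl_user",
    "sfPassword": "***",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "PUBLIC",
    "sfWarehouse": "TRANSFORM_WH",
}

df = spark.read.parquet("abfss://curated@lake.dfs.core.windows.net/sales/")

# "snowflake" is the connector's short-form source name on Databricks runtimes
(df.write.format("snowflake")
   .options(**sf_options)
   .option("dbtable", "SALES_FACT")
   .mode("overwrite")
   .save())
```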

Posted 2 weeks ago

Apply

3.0 years

5 - 40 Lacs

Chennai, Tamil Nadu, India

On-site

Location: Bangalore, Pune, Chennai, Kolkata, Gurugram
Experience: 5-15 Years
Work Mode: Hybrid
Mandatory Skills: Snowflake, Azure Data Factory, PySpark, Databricks, Snowpipe
Good to Have: SnowPro Certification

Primary Roles and Responsibilities
Developing Modern Data Warehouse solutions using Snowflake, Databricks and ADF
Providing forward-thinking solutions in the data engineering and analytics space
Collaborating with DW/BI leads to understand new ETL pipeline development requirements
Triaging issues to find gaps in existing pipelines and fixing them
Working with the business to understand reporting-layer needs and developing a data model to fulfil them
Helping junior team members resolve issues and technical challenges
Driving technical discussions with client architects and team members
Orchestrating data pipelines in the scheduler via Airflow

Skills and Qualifications
Bachelor's and/or master's degree in computer science or equivalent experience
Must have 6+ years of total IT experience and 3+ years of experience in data warehouse/ETL projects
Expertise in Snowflake security, Snowflake SQL, and designing/implementing other Snowflake objects
Hands-on experience with Snowflake utilities: SnowSQL, Snowpipe, Snowsight and Snowflake connectors
Deep understanding of star and snowflake dimensional modeling
Strong knowledge of data management principles
Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture
Hands-on experience in SQL and Spark (PySpark)
Experience building ETL/data warehouse transformation processes
Experience with open-source non-relational/NoSQL data repositories (e.g., MongoDB, Cassandra, Neo4j)
Experience working with structured and unstructured data, including imaging and geospatial data
Experience working in a DevOps environment with tools such as Terraform, CircleCI, Git
Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, troubleshooting and query optimization
Databricks Certified Data Engineer Associate/Professional certification (desirable)
Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects
Experience working in Agile methodology
Strong verbal and written communication skills
Strong analytical and problem-solving skills with high attention to detail

Posted 2 weeks ago

Apply

3.0 years

5 - 40 Lacs

Greater Kolkata Area

On-site

Location: Bangalore, Pune, Chennai, Kolkata, Gurugram
Experience: 5-15 Years
Work Mode: Hybrid
Mandatory Skills: Snowflake, Azure Data Factory, PySpark, Databricks, Snowpipe
Good to Have: SnowPro Certification

Primary Roles and Responsibilities
Developing Modern Data Warehouse solutions using Snowflake, Databricks and ADF
Providing forward-thinking solutions in the data engineering and analytics space
Collaborating with DW/BI leads to understand new ETL pipeline development requirements
Triaging issues to find gaps in existing pipelines and fixing them
Working with the business to understand reporting-layer needs and developing a data model to fulfil them
Helping junior team members resolve issues and technical challenges
Driving technical discussions with client architects and team members
Orchestrating data pipelines in the scheduler via Airflow

Skills and Qualifications
Bachelor's and/or master's degree in computer science or equivalent experience
Must have 6+ years of total IT experience and 3+ years of experience in data warehouse/ETL projects
Expertise in Snowflake security, Snowflake SQL, and designing/implementing other Snowflake objects
Hands-on experience with Snowflake utilities: SnowSQL, Snowpipe, Snowsight and Snowflake connectors
Deep understanding of star and snowflake dimensional modeling
Strong knowledge of data management principles
Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture
Hands-on experience in SQL and Spark (PySpark)
Experience building ETL/data warehouse transformation processes
Experience with open-source non-relational/NoSQL data repositories (e.g., MongoDB, Cassandra, Neo4j)
Experience working with structured and unstructured data, including imaging and geospatial data
Experience working in a DevOps environment with tools such as Terraform, CircleCI, Git
Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, troubleshooting and query optimization
Databricks Certified Data Engineer Associate/Professional certification (desirable)
Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects
Experience working in Agile methodology
Strong verbal and written communication skills
Strong analytical and problem-solving skills with high attention to detail

Posted 2 weeks ago

Apply

0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

We are looking for an experienced Power BI Full Stack Developer with strong expertise across the Microsoft BI stack. The candidate will be responsible for delivering end-to-end data visualization solutions, leveraging tools like Power BI, SQL Server, SSIS, and Azure services within a structured SDLC.

Key Responsibilities:
Work with stakeholders to gather business reporting and dashboarding requirements
Design and develop end-to-end Power BI solutions, including datasets, dataflows, and reports
Build optimized data models (star/snowflake) and implement advanced DAX measures
Prepare and transform data using Power Query (M language)
Write and maintain complex SQL queries, stored procedures, and views in SQL Server
Integrate data from Microsoft stack components: SQL Server, Excel, SSAS, SSIS, ADF, Azure SQL
Deploy and schedule reports in Power BI Service, and manage workspace access and RLS
Collaborate with backend, ETL, and DevOps teams for seamless data integration and deployment
Apply performance tuning and implement best practices for Power BI governance
Prepare technical documentation and support testing, deployment, and production maintenance

Required Skills:
Hands-on experience with Power BI Desktop, Power BI Service, and DAX
Strong SQL Server experience (T-SQL, views, stored procedures, indexing)
Proficiency in Power Query, data modeling, and data transformation
Experience with the Microsoft BI stack: SSIS (ETL workflows), SSAS (Tabular Model preferred), Azure SQL Database, Azure Data Factory (ADF)
Understanding of CI/CD for BI artifacts and use of DevOps tools (e.g., Azure DevOps, Git)

Preferred Qualifications:
Experience in the banking or financial services domain
Familiarity with enterprise data warehouse concepts and reporting frameworks
Strong problem-solving skills and ability to present insights to business users
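
As a concrete slice of the "deploy and schedule reports in Power BI Service" duty, a hedged sketch triggering a dataset refresh through the Power BI REST API; the workspace and dataset IDs are placeholders, and a real job would acquire the bearer token from Azure AD (e.g. via MSAL) rather than hard-coding it:

```python
import requests

# IDs and token are placeholders; obtain the token via Azure AD in practice
GROUP_ID = "00000000-0000-0000-0000-000000000000"
DATASET_ID = "11111111-1111-1111-1111-111111111111"
TOKEN = "<bearer token from Azure AD>"

url = (
    "https://api.powerbi.com/v1.0/myorg/"
    f"groups/{GROUP_ID}/datasets/{DATASET_ID}/refreshes"
)

# POSTing to the refreshes endpoint queues a refresh; 202 means it was enqueued
resp = requests.post(url, headers={"Authorization": f"Bearer {TOKEN}"}, timeout=30)
print(resp.status_code)  # expect 202 Accepted on success
```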

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

India

Remote

Greetings!!!

Role: Senior Data Engineer with Databricks
Experience: 4+ Years
Location: Remote
Duration: 3 Months Contract

Required Skills & Experience:
3+ years of hands-on experience in data structures, distributed systems, Spark, SQL, PySpark, and NoSQL databases
Strong software development skills in at least one of Python, PySpark, or Scala
Develop and maintain scalable and modular data pipelines using Databricks and Apache Spark
Experience working with any of the cloud storage services: AWS S3, Azure Data Lake, GCS buckets
Exposure to the ELT tools offered by the cloud platforms, such as ADF, AWS Glue, and Google Dataflow
Integrate Databricks with other cloud services such as AWS, Azure, or Google Cloud

If you are interested, please share your resume at prachi@iitjobs.com

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Role Description
We are seeking a highly experienced and motivated Azure Cloud Administrator with a strong background in Windows Server infrastructure, Azure IaaS/PaaS services, and cloud networking. The ideal candidate will have over 10 years of relevant experience and will be responsible for managing and optimizing our Azure environment while ensuring high availability, scalability, and security of our infrastructure.

Key Responsibilities
Administer and manage Azure Cloud infrastructure, including both IaaS and PaaS services
Deploy, configure, and maintain Windows Servers (2016/2019/2022)
Manage Azure resources such as Virtual Machines, Storage Accounts, SQL Managed Instances, Azure Functions, Logic Apps, App Services, Azure Monitor, Azure Key Vault, Azure Recovery Services, Databricks, ADF, Synapse, and more
Ensure security and network compliance through effective use of Azure networking features, including NSGs, Load Balancers, and VPN gateways
Monitor and troubleshoot infrastructure issues using tools such as Log Analytics, Application Insights, and Azure Metrics
Perform server health checks, patch management, upgrades, backup/restoration, and DR testing
Implement and maintain Group Policies, DNS, IIS, Active Directory, and Entra ID (formerly Azure AD)
Collaborate with DevOps teams to support infrastructure automation using Terraform and Azure DevOps
Support ITIL-based processes, including incident, change, and problem management
Deliver Root Cause Analysis (RCA) and post-incident reviews for high-severity issues
Provide after-hours support as required during outages or maintenance windows

Required Technical Skills
Windows Server administration: deep expertise in Windows Server 2016/2019/2022
Azure administration: strong hands-on experience with Azure IaaS/PaaS services
Azure networking: solid understanding of cloud networking principles and security best practices
Azure monitoring: familiarity with Azure Monitor, Log Analytics, Application Insights
Infrastructure tools: experience with Microsoft IIS, DNS, AD, Group Policy, and Entra ID Connect
Cloud automation: working knowledge of Terraform and Azure DevOps pipelines is good to have
Troubleshooting & RCA: proven ability to analyze, resolve, and document complex technical issues

Skills: Azure, Windows, Monitoring
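
As a small illustration of day-to-day automation in such a role, a hedged sketch listing VMs and their power states with the Azure SDK for Python; the subscription ID is a placeholder, and the azure-identity and azure-mgmt-compute packages are assumed to be installed:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

# Subscription ID is a placeholder; DefaultAzureCredential picks up env/CLI/MSI auth
SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"
client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Walk every VM in the subscription and report its current power state
for vm in client.virtual_machines.list_all():
    rg = vm.id.split("/")[4]  # resource group sits at a fixed position in the ID
    view = client.virtual_machines.instance_view(rg, vm.name)
    power = next(
        (s.display_status for s in view.statuses if s.code.startswith("PowerState/")),
        "unknown",
    )
    print(f"{rg}/{vm.name}: {power}")
```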

Posted 2 weeks ago

Apply

3.0 years

5 - 40 Lacs

Pune, Maharashtra, India

On-site

Location: Bangalore, Pune, Chennai, Kolkata, Gurugram
Experience: 5-15 Years
Work Mode: Hybrid
Mandatory Skills: Snowflake, Azure Data Factory, PySpark, Databricks, Snowpipe
Good to Have: SnowPro Certification

Primary Roles and Responsibilities
Developing Modern Data Warehouse solutions using Snowflake, Databricks and ADF
Providing forward-thinking solutions in the data engineering and analytics space
Collaborating with DW/BI leads to understand new ETL pipeline development requirements
Triaging issues to find gaps in existing pipelines and fixing them
Working with the business to understand reporting-layer needs and developing a data model to fulfil them
Helping junior team members resolve issues and technical challenges
Driving technical discussions with client architects and team members
Orchestrating data pipelines in the scheduler via Airflow

Skills and Qualifications
Bachelor's and/or master's degree in computer science or equivalent experience
Must have 6+ years of total IT experience and 3+ years of experience in data warehouse/ETL projects
Expertise in Snowflake security, Snowflake SQL, and designing/implementing other Snowflake objects
Hands-on experience with Snowflake utilities: SnowSQL, Snowpipe, Snowsight and Snowflake connectors
Deep understanding of star and snowflake dimensional modeling
Strong knowledge of data management principles
Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture
Hands-on experience in SQL and Spark (PySpark)
Experience building ETL/data warehouse transformation processes
Experience with open-source non-relational/NoSQL data repositories (e.g., MongoDB, Cassandra, Neo4j)
Experience working with structured and unstructured data, including imaging and geospatial data
Experience working in a DevOps environment with tools such as Terraform, CircleCI, Git
Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, troubleshooting and query optimization
Databricks Certified Data Engineer Associate/Professional certification (desirable)
Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects
Experience working in Agile methodology
Strong verbal and written communication skills
Strong analytical and problem-solving skills with high attention to detail

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Only Immediate Joiners (Within 12-15 Days)
4+ Years | Bangalore, Hyderabad, Pune, Coimbatore (Hybrid, Onsite)
Face-to-face interview

About the Role
We're looking for a Cloud Migration Consultant with hands-on experience assessing and migrating complex applications to Azure. You'll work closely with Microsoft business units, participating in Intake & Assessment and Planning & Design phases, creating migration artifacts, and leading client interactions. You'll also support application modernization efforts in Azure, with exposure to AWS as needed.

Key Responsibilities
Assess application readiness and document architecture, dependencies, and migration strategy
Conduct interviews with stakeholders and generate discovery insights using tools like Azure Migrate, Cloudockit, and PowerShell
Create architecture diagrams and migration playbooks, and maintain Azure DevOps boards
Set up applications both on-premises and in cloud environments (primarily Azure)
Support proof-of-concepts (PoCs) and advise on migration options
Collaborate with application, database, and infrastructure teams to enable a smooth transition to migration factory teams
Track progress, blockers, and risks, reporting timely status to project leadership

Required Skills
4+ years of experience in cloud migration and assessment
Strong expertise in Azure IaaS/PaaS (VMs, App Services, ADF, etc.)
Familiarity with AWS IaaS/PaaS (EC2, RDS, Glue, S3)
Experience with Java (Spring Boot)/C#, .NET/Python, Angular/React.js, REST APIs
Working knowledge of Kafka, Docker/Kubernetes, Azure DevOps
Network infrastructure understanding (VNets, NSGs, Firewalls, WAFs)
IAM knowledge: OAuth, SAML, Okta/SiteMinder
Experience with big data tools like Databricks, Hadoop, Oracle, DocumentDB

Preferred Qualifications
Azure or AWS certifications

Posted 2 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Skill: Azure Data Engineer
Primary: ADB, ADF, PySpark
Secondary: SQL
Experience: 5-8 Yrs
Notice period: Immediate to 30 days

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Overview:
We are looking for a skilled and motivated Data Engineer with strong experience in Python programming and Google Cloud Platform (GCP) to join our data engineering team. The ideal candidate will be responsible for designing, developing, and maintaining robust and scalable ETL (Extract, Transform, Load) data pipelines. The role involves working with various GCP services, implementing data ingestion and transformation logic, and ensuring data quality and consistency across systems.

Key Responsibilities:
Design, develop, test, and maintain scalable ETL data pipelines using Python.
Work extensively with Google Cloud Platform (GCP) services such as: Dataflow for real-time and batch data processing; Cloud Functions for lightweight serverless compute; BigQuery for data warehousing and analytics; Cloud Composer for orchestration of data workflows (based on Apache Airflow); Google Cloud Storage (GCS) for managing data at scale; IAM for access control and security; and Cloud Run for containerized applications.
Perform data ingestion from various sources and apply transformation and cleansing logic to ensure high-quality data delivery.
Implement and enforce data quality checks, validation rules, and monitoring.
Collaborate with data scientists, analysts, and other engineering teams to understand data needs and deliver efficient data solutions.
Manage version control using GitHub and participate in CI/CD pipeline deployments for data projects.
Write complex SQL queries for data extraction and validation from relational databases such as SQL Server, Oracle, or PostgreSQL.
Document pipeline designs, data flow diagrams, and operational support procedures.

Required Skills:
4-6 years of hands-on experience in Python for backend or data engineering projects.
Strong understanding of and working experience with GCP cloud services (especially Dataflow, BigQuery, Cloud Functions, and Cloud Composer).
Solid understanding of data pipeline architecture, data integration, and transformation techniques.
Experience with version control systems like GitHub and knowledge of CI/CD practices.
Strong experience in SQL with at least one enterprise database (SQL Server, Oracle, PostgreSQL, etc.).

Good to Have (Optional Skills):
Experience working with the Snowflake cloud data platform.
Hands-on knowledge of Databricks for big data processing and analytics.
Familiarity with Azure Data Factory (ADF) and other Azure data engineering tools.
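
As an example of the BigQuery piece of this stack, a minimal sketch using the google-cloud-bigquery client; the project, dataset, and table names are invented for illustration:

```python
from google.cloud import bigquery

# Credentials come from Application Default Credentials; project is a placeholder
client = bigquery.Client(project="my-analytics-project")

# Parameterized query: @min_date is bound below rather than string-formatted in
sql = """
    SELECT user_id, COUNT(*) AS events
    FROM `my-analytics-project.app_data.events`
    WHERE event_date >= @min_date
    GROUP BY user_id
"""
job = client.query(
    sql,
    job_config=bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ScalarQueryParameter("min_date", "DATE", "2024-01-01")
        ]
    ),
)

for row in job.result():  # result() blocks until the query finishes
    print(row.user_id, row.events)
```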

Posted 2 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Name: Senior Data Engineer - Azure
Years of Experience: 5

Job Description:
We are looking for a skilled and experienced Senior Azure Developer to join our team! As part of the team, you will be involved in the implementation of ongoing and new initiatives for our company. If you love learning, thinking strategically, innovating, and helping others, this job is for you!

Primary Skills: ADF, Databricks
Secondary Skills: DBT, Python, Airflow, Fivetran, Glue, Snowflake

Role Description: This data engineering role involves creating and managing the technological infrastructure of a data platform: architecting, building, and managing data flows/pipelines; constructing data stores (NoSQL, SQL); working with big-data tools (Hadoop, Kafka); and using integration tools to connect sources and other databases.

Role Responsibility:
Translate functional specifications and change requests into technical specifications
Translate business requirement documents, functional specifications, and technical specifications into related coding
Develop efficient code with unit testing and code documentation
Ensure accuracy and integrity of data and applications through analysis, coding, documenting, testing, and problem solving
Set up the development environment and configure the development tools
Communicate with all the project stakeholders on the project status
Manage, monitor, and ensure the security and privacy of data to satisfy business needs
Contribute to the automation of modules wherever required
Be proficient in written, verbal and presentation communication (English)
Coordinate with the UAT team

Role Requirement:
Proficient in basic and advanced SQL programming concepts (procedures, analytical functions, etc.)
Good knowledge and understanding of data warehouse concepts (dimensional modeling, change data capture, slowly changing dimensions, etc.)
Knowledgeable in shell/PowerShell scripting
Knowledgeable in relational databases, non-relational databases, data streams, and file stores
Knowledgeable in performance tuning and optimization
Experience in data profiling and data validation
Experience in requirements gathering, documentation processes, and performing unit testing
Understanding and implementing QA and various testing processes in the project
Knowledge of any BI tools will be an added advantage
Sound aptitude, outstanding logical reasoning, and analytical skills
Willingness to learn and take initiative
Ability to adapt to a fast-paced Agile environment

Additional Requirement:
Demonstrated expertise as a Data Engineer, specializing in Azure cloud services
Highly skilled in Azure Data Factory, Azure Data Lake, Azure Databricks, and Azure Synapse Analytics
Create and execute efficient, scalable, and dependable data pipelines utilizing Azure Data Factory
Utilize Azure Databricks for data transformation and processing
Effectively oversee and enhance data storage solutions, emphasizing Azure Data Lake and other Azure storage services
Construct and uphold workflows for data orchestration and scheduling using Azure Data Factory or equivalent tools
Proficient in programming languages like Python and SQL, and conversant with pertinent scripting languages
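
Since the role calls out change data capture and slowly changing dimensions, here is a hedged sketch of a simplified SCD Type 2 update with Delta Lake MERGE; the dim_customer and stg_customer tables and their columns are invented and assumed to already exist:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("scd2-merge").getOrCreate()

# Step 1: close out current rows whose tracked attribute changed
spark.sql("""
    MERGE INTO dim_customer AS d
    USING stg_customer AS s
      ON d.customer_id = s.customer_id AND d.is_current = true
    WHEN MATCHED AND d.address <> s.address THEN
      UPDATE SET d.is_current = false, d.end_date = current_date()
""")

# Step 2: insert new versions for changed rows plus brand-new customers
# (both now lack a current row, so the anti-join picks them up)
spark.sql("""
    INSERT INTO dim_customer
    SELECT s.customer_id, s.address, current_date() AS start_date,
           NULL AS end_date, true AS is_current
    FROM stg_customer s
    LEFT JOIN dim_customer d
      ON d.customer_id = s.customer_id AND d.is_current = true
    WHERE d.customer_id IS NULL
""")
```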

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Greater Kolkata Area

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities:
· Analyse current business practices, processes, and procedures, and identify future business opportunities for leveraging Microsoft Azure Data & Analytics Services
· Provide technical leadership and thought leadership as a senior member of the Analytics Practice in areas such as data access and ingestion, data processing, data integration, data modeling, database design and implementation, data visualization, and advanced analytics
· Engage and collaborate with customers to understand business requirements/use cases and translate them into detailed technical specifications
· Develop best practices including reusable code, libraries, patterns, and consumable frameworks for cloud-based data warehousing and ETL
· Maintain best-practice standards for the development of cloud-based data warehouse solutions, including naming standards
· Design and implement highly performant data pipelines from multiple sources using Apache Spark and/or Azure Databricks
· Integrate the end-to-end data pipeline to take data from source systems to target data repositories, ensuring the quality and consistency of data is always maintained
· Work with other members of the project team to support delivery of additional project components (API interfaces)
· Evaluate the performance and applicability of multiple tools against customer requirements
· Work within an Agile delivery/DevOps methodology to deliver proof of concept and production implementation in iterative sprints
· Integrate Databricks with other technologies (ingestion tools, visualization tools)

Requirements:
· Proven experience working as a data engineer
· Highly proficient in using the Spark framework (Python and/or Scala)
· Extensive knowledge of data warehousing concepts, strategies, and methodologies
· Direct experience building data pipelines using Azure Data Factory and Apache Spark (preferably in Databricks)
· Hands-on experience designing and delivering solutions using Azure, including Azure Storage, Azure SQL Data Warehouse, Azure Data Lake, Azure Cosmos DB, and Azure Stream Analytics
· Experience in designing and hands-on development of cloud-based analytics solutions
· Expert-level understanding of Azure Data Factory, Azure Synapse, Azure SQL, Azure Data Lake, and Azure App Service is required
· Design and building of data pipelines using API ingestion and streaming ingestion methods (see the sketch after this listing)
· Knowledge of DevOps processes (including CI/CD) and Infrastructure as Code is essential
· Thorough understanding of Azure Cloud Infrastructure offerings
· Strong experience in common data warehouse modeling principles, including Kimball
· Working knowledge of Python is desirable
· Experience developing security models
· Databricks & Azure Big Data Architecture certification would be a plus

Mandatory skill sets: ADE, ADB, ADF
Preferred skill sets: ADE, ADB, ADF
Years of experience required: 4-8 Years
Education qualification: BE, B.Tech, MCA, M.Tech
Degrees/Field of Study required: Master of Engineering, Bachelor of Engineering
Required Skills: Microsoft Azure
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline {+ 27 more}
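
To ground the API-ingestion bullet, a hedged sketch pulling a paged REST API into a Spark DataFrame before landing it in the lake; the endpoint, paging scheme, and landing path are invented:

```python
import requests
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("api-ingestion").getOrCreate()

# Endpoint and paging scheme are hypothetical; real APIs vary (cursor, offset, etc.)
BASE = "https://api.example.com/v1/orders"
rows, page = [], 1
while True:
    resp = requests.get(BASE, params={"page": page}, timeout=30)
    resp.raise_for_status()
    batch = resp.json().get("results", [])
    if not batch:
        break
    rows.extend(batch)
    page += 1

# createDataFrame infers a schema from the dicts; production code would pin one
df = spark.createDataFrame(rows)
df.write.mode("append").parquet("abfss://raw@lake.dfs.core.windows.net/orders/")
```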

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Greater Kolkata Area

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities:
· Analyse current business practices, processes, and procedures, and identify future business opportunities for leveraging Microsoft Azure Data & Analytics Services
· Provide technical leadership and thought leadership as a senior member of the Analytics Practice in areas such as data access and ingestion, data processing, data integration, data modeling, database design and implementation, data visualization, and advanced analytics
· Engage and collaborate with customers to understand business requirements/use cases and translate them into detailed technical specifications
· Develop best practices including reusable code, libraries, patterns, and consumable frameworks for cloud-based data warehousing and ETL
· Maintain best-practice standards for the development of cloud-based data warehouse solutions, including naming standards
· Design and implement highly performant data pipelines from multiple sources using Apache Spark and/or Azure Databricks
· Integrate the end-to-end data pipeline to take data from source systems to target data repositories, ensuring the quality and consistency of data is always maintained
· Work with other members of the project team to support delivery of additional project components (API interfaces)
· Evaluate the performance and applicability of multiple tools against customer requirements
· Work within an Agile delivery/DevOps methodology to deliver proof of concept and production implementation in iterative sprints
· Integrate Databricks with other technologies (ingestion tools, visualization tools)

Requirements:
· Proven experience working as a data engineer
· Highly proficient in using the Spark framework (Python and/or Scala)
· Extensive knowledge of data warehousing concepts, strategies, and methodologies
· Direct experience building data pipelines using Azure Data Factory and Apache Spark (preferably in Databricks)
· Hands-on experience designing and delivering solutions using Azure, including Azure Storage, Azure SQL Data Warehouse, Azure Data Lake, Azure Cosmos DB, and Azure Stream Analytics
· Experience in designing and hands-on development of cloud-based analytics solutions
· Expert-level understanding of Azure Data Factory, Azure Synapse, Azure SQL, Azure Data Lake, and Azure App Service is required
· Design and building of data pipelines using API ingestion and streaming ingestion methods
· Knowledge of DevOps processes (including CI/CD) and Infrastructure as Code is essential
· Thorough understanding of Azure Cloud Infrastructure offerings
· Strong experience in common data warehouse modeling principles, including Kimball
· Working knowledge of Python is desirable
· Experience developing security models
· Databricks & Azure Big Data Architecture certification would be a plus

Mandatory skill sets: ADE, ADB, ADF
Preferred skill sets: ADE, ADB, ADF
Years of experience required: 4-8 Years
Education qualification: BE, B.Tech, MCA, M.Tech
Degrees/Field of Study required: Master of Engineering, Bachelor of Engineering
Required Skills: Microsoft Azure
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline {+ 27 more}

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Greater Kolkata Area

On-site

Line of Service Advisory Industry/Sector Not Applicable Specialism Data, Analytics & AI Management Level Senior Associate Job Description & Summary At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions. *Why PWC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us . At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. " Responsibilities: Job Description: · Analyses current business practices, processes, and procedures as well as identifying future business opportunities for leveraging Microsoft Azure Data & Analytics Services. · Provide technical leadership and thought leadership as a senior member of the Analytics Practice in areas such as data access & ingestion, data processing, data integration, data modeling, database design & implementation, data visualization, and advanced analytics. · Engage and collaborate with customers to understand business requirements/use cases and translate them into detailed technical specifications. · Develop best practices including reusable code, libraries, patterns, and consumable frameworks for cloud-based data warehousing and ETL. · Maintain best practice standards for the development or cloud-based data warehouse solutioning including naming standards. · Designing and implementing highly performant data pipelines from multiple sources using Apache Spark and/or Azure Databricks · Integrating the end-to-end data pipeline to take data from source systems to target data repositories ensuring the quality and consistency of data is always maintained · Working with other members of the project team to support delivery of additional project components (API interfaces) · Evaluating the performance and applicability of multiple tools against customer requirements · Working within an Agile delivery / DevOps methodology to deliver proof of concept and production implementation in iterative sprints. · Integrate Databricks with other technologies (Ingestion tools, Visualization tools). 
Requirements:
· Proven experience working as a data engineer.
· Highly proficient in the Spark framework (Python and/or Scala).
· Extensive knowledge of data warehousing concepts, strategies, and methodologies.
· Direct experience building data pipelines using Azure Data Factory and Apache Spark (preferably in Databricks).
· Hands-on experience designing and delivering solutions using Azure, including Azure Storage, Azure SQL Data Warehouse, Azure Data Lake, Azure Cosmos DB, and Azure Stream Analytics.
· Experience in designing and hands-on development of cloud-based analytics solutions.
· Expert-level understanding of Azure Data Factory, Azure Synapse, Azure SQL, Azure Data Lake, and Azure App Service is required.
· Design and build of data pipelines using API ingestion and streaming ingestion methods (see the streaming sketch below).
· Knowledge of DevOps processes (including CI/CD) and Infrastructure as Code is essential.
· Thorough understanding of Azure cloud infrastructure offerings.
· Strong experience in common data warehouse modeling principles, including Kimball.
· Working knowledge of Python is desirable.
· Experience developing security models.
· Databricks & Azure Big Data Architecture certification would be a plus.

Mandatory skill sets: ADE, ADB, ADF
Preferred skill sets: ADE, ADB, ADF
Years of experience required: 4-8 years
Education qualification: BE, B.Tech, MCA, M.Tech
Degrees/Field of Study required: Bachelor of Engineering, Master of Engineering
Required Skills: Microsoft Azure
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline {+ 27 more}
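As a hedged illustration of the streaming-ingestion requirement above, the sketch below uses Databricks Auto Loader (the "cloudFiles" source) to incrementally ingest JSON files into a bronze table. It assumes a Databricks notebook where spark is predefined; all paths and the target table name are hypothetical.

    # Hedged streaming-ingestion sketch using Databricks Auto Loader.
    # Assumes a Databricks notebook (where `spark` exists); container names,
    # schema location, and target table are placeholders.
    stream = (spark.readStream
              .format("cloudFiles")
              .option("cloudFiles.format", "json")
              .option("cloudFiles.schemaLocation",
                      "abfss://meta@examplestore.dfs.core.windows.net/schemas/events/")
              .load("abfss://landing@examplestore.dfs.core.windows.net/events/"))

    (stream.writeStream
     .option("checkpointLocation",
             "abfss://meta@examplestore.dfs.core.windows.net/checkpoints/events/")
     .trigger(availableNow=True)        # process the backlog, then stop
     .toTable("bronze.events"))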

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

India

Remote

Job Title: Lead Data Engineer
Experience: 8–10 Years
Location: Remote
Job Type: Full-Time
Mandatory: Prior hands-on experience with Fivetran integrations

About the Role:
We are seeking a highly skilled Lead Data Engineer with 8–10 years of deep expertise in cloud-native data platforms, including Snowflake, Azure, DBT, and Fivetran. This role will drive the design, development, and optimization of scalable data pipelines, leading a cross-functional team and ensuring data engineering best practices are implemented and maintained.

Key Responsibilities:
· Lead the design and development of data pipelines (batch and real-time) using Azure, Snowflake, DBT, Python, and Fivetran.
· Translate complex business and data requirements into scalable, efficient data engineering solutions.
· Architect multi-cluster Snowflake setups with an eye on performance and cost.
· Design and implement robust CI/CD pipelines for data workflows (Git-based).
· Collaborate closely with analysts, architects, and business teams to ensure data architecture aligns with organizational goals.
· Mentor and review the work of onshore/offshore data engineers.
· Define and enforce coding standards, testing frameworks, monitoring strategies, and data quality best practices.
· Handle real-time data processing scenarios where applicable.
· Own end-to-end delivery and documentation for data engineering projects.

Must-Have Skills:
· Fivetran: Proven experience integrating and managing Fivetran connectors and sync strategies.
· Snowflake expertise:
  - Warehouse management, cost optimization, query tuning
  - Internal vs. external stages, loading/unloading strategies (see the sketch below)
  - Schema design, security model, and user access
· Python (advanced): Modular, production-ready code for ETL/ELT, APIs, and orchestration.
· DBT: Strong command of DBT for transformation workflows and modular pipelines.
· Azure: Azure Data Factory (ADF), Databricks; integration with Snowflake and other services.
· SQL: Expert-level SQL for transformations, validations, and optimizations.
· Version Control: Git, branching, pull requests, and peer code reviews.
· CI/CD: DevOps/DataOps workflows for data pipelines.
· Data Modeling: Star schema, Data Vault, normalization/denormalization techniques.
· Strong documentation using Confluence, Word, Excel, etc.
· Excellent communication skills – verbal and written.

Good to Have:
· Experience with real-time data streaming tools (Event Hub, Kafka)
· Exposure to monitoring/data observability tools
· Experience with cost management strategies for cloud data platforms
· Exposure to Agile/Scrum-based environments
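As a hedged, illustrative sketch of the Snowflake stage-loading skill referenced above (not part of the posting itself), the snippet below uses the snowflake-connector-python package to run COPY INTO from an external stage. The account, credentials, stage, and table names are placeholders.

    # Hedged sketch: load staged files into a Snowflake table with COPY INTO,
    # via snowflake-connector-python. Account, credentials, stage, and table
    # names are placeholders.
    import snowflake.connector

    conn = snowflake.connector.connect(
        account="example_account",
        user="example_user",
        password="***",          # placeholder; use a secrets manager in practice
        warehouse="LOAD_WH",
        database="ANALYTICS",
        schema="RAW",
    )
    try:
        cur = conn.cursor()
        # COPY INTO pulls staged files into the target table; ON_ERROR controls
        # how bad records are handled.
        cur.execute("""
            COPY INTO RAW.ORDERS
            FROM @EXT_STAGE/orders/
            FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
            ON_ERROR = 'ABORT_STATEMENT'
        """)
        print(cur.fetchall())    # per-file load results
    finally:
        conn.close()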

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Role : Azure Data Engineer (ADF, ADB) with PySpark and PL/SQL
JOB LOCATION : Kolkata
EXPERIENCE REQUIREMENT : 5+ years
Technical Skill Set : ADF, Azure Databricks, Spark (PySpark or Scala), Python, PL/SQL

Must have:
· Strong experience in Azure Data Factory (ADF), Azure Databricks (ADB), Synapse, and PySpark; establishing cloud connectivity between different systems such as ADLS, ADF, Synapse, and Databricks (see the connectivity sketch below).
· A minimum of 5 years' experience with large SQL data marts and expert relational database experience; the candidate should demonstrate the ability to navigate massive volumes of data to deliver effective and efficient data extraction, design, load, and reporting solutions to business partners.
· A minimum of 5 years of troubleshooting and supporting large databases and testing activities; identifying, reporting, and managing database security issues and user access/management; designing database backup, archiving, and storage; performance tuning; ETL imports of large volumes of data extracted from multiple systems; capacity planning.
· Experience in T-SQL programming along with the Azure Data Factory framework and Python scripting.
· Works well independently as well as within a team.
· Proactive and organized, with excellent analytical and problem-solving skills.
· Flexible and willing to learn; a can-do attitude is key.
· Strong verbal and written communication skills.

Good to Have:
· Financial institution data mart experience is an asset.
· Experience in .NET applications is an asset.
· Experience and expertise in Tableau-driven dashboard design is an asset.

Responsibilities / Expectations from the Role:
1. Azure data engineering (ADF, ADB).
2. ETL processes using frameworks such as Azure Data Factory, Synapse, or Databricks.
3. Establishing cloud connectivity between different systems such as ADLS, ADF, Synapse, and Databricks.
4. T-SQL programming along with the Azure Data Factory framework and Python scripting.
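For illustration, a minimal sketch of session-scoped ADLS Gen2 connectivity from Azure Databricks using a service principal follows. The storage-account name, tenant ID, application ID, and secret-scope names are placeholders, and it assumes a Databricks notebook where spark and dbutils are predefined.

    # Hedged sketch: session-scoped ADLS Gen2 (abfss) access from Databricks
    # via a service principal. All identifiers below are placeholders; the
    # client secret is read from a (hypothetical) Databricks secret scope.
    storage = "examplestore"

    spark.conf.set(f"fs.azure.account.auth.type.{storage}.dfs.core.windows.net",
                   "OAuth")
    spark.conf.set(f"fs.azure.account.oauth.provider.type.{storage}.dfs.core.windows.net",
                   "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
    spark.conf.set(f"fs.azure.account.oauth2.client.id.{storage}.dfs.core.windows.net",
                   "<application-id>")
    spark.conf.set(f"fs.azure.account.oauth2.client.secret.{storage}.dfs.core.windows.net",
                   dbutils.secrets.get(scope="adls-scope", key="sp-secret"))
    spark.conf.set(f"fs.azure.account.oauth2.client.endpoint.{storage}.dfs.core.windows.net",
                   "https://login.microsoftonline.com/<tenant-id>/oauth2/token")

    # Sanity check: list the root of a container once the configs are in place.
    display(dbutils.fs.ls(f"abfss://landing@{storage}.dfs.core.windows.net/"))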

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Bhubaneswar, Odisha, India

On-site

Role : Azure Data Engineer (ADF, ADB) with PySpark and PL/SQL
JOB LOCATION : Bhubaneswar
EXPERIENCE REQUIREMENT : 5+ years
Technical Skill Set : ADF, Azure Databricks, Spark (PySpark or Scala), Python, PL/SQL

Must have:
· Strong experience in Azure Data Factory (ADF), Azure Databricks (ADB), Synapse, and PySpark; establishing cloud connectivity between different systems such as ADLS, ADF, Synapse, and Databricks.
· A minimum of 5 years' experience with large SQL data marts and expert relational database experience; the candidate should demonstrate the ability to navigate massive volumes of data to deliver effective and efficient data extraction, design, load, and reporting solutions to business partners.
· A minimum of 5 years of troubleshooting and supporting large databases and testing activities; identifying, reporting, and managing database security issues and user access/management; designing database backup, archiving, and storage; performance tuning; ETL imports of large volumes of data extracted from multiple systems; capacity planning.
· Experience in T-SQL programming along with the Azure Data Factory framework and Python scripting (see the sketch below).
· Works well independently as well as within a team.
· Proactive and organized, with excellent analytical and problem-solving skills.
· Flexible and willing to learn; a can-do attitude is key.
· Strong verbal and written communication skills.

Good to Have:
· Financial institution data mart experience is an asset.
· Experience in .NET applications is an asset.
· Experience and expertise in Tableau-driven dashboard design is an asset.

Responsibilities / Expectations from the Role:
1. Azure data engineering (ADF, ADB).
2. ETL processes using frameworks such as Azure Data Factory, Synapse, or Databricks.
3. Establishing cloud connectivity between different systems such as ADLS, ADF, Synapse, and Databricks.
4. T-SQL programming along with the Azure Data Factory framework and Python scripting.
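As a hedged illustration of T-SQL programming driven from Python, the sketch below runs a MERGE upsert against Azure SQL via pyodbc. The server, database, table, and credential values are placeholders.

    # Hedged sketch: a T-SQL MERGE upsert against Azure SQL from Python via
    # pyodbc. Server, database, tables, and credentials are placeholders;
    # production code would use Key Vault or managed identity instead.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 18 for SQL Server};"
        "SERVER=example-server.database.windows.net;"
        "DATABASE=datamart;UID=etl_user;PWD=***;Encrypt=yes;"
    )
    merge_sql = """
    MERGE dbo.DimCustomer AS tgt
    USING staging.Customer AS src
        ON tgt.CustomerID = src.CustomerID
    WHEN MATCHED THEN
        UPDATE SET tgt.Name = src.Name, tgt.City = src.City
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (CustomerID, Name, City)
        VALUES (src.CustomerID, src.Name, src.City);
    """
    with conn:
        conn.execute(merge_sql)   # pyodbc commits on clean exit from the context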

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Role : Azure Data Engineer (ADF, ADB) with PySpark and PL/SQL
JOB LOCATION : Pune
EXPERIENCE REQUIREMENT : 8+ years
Technical Skill Set : ADF, Azure Databricks, Spark (PySpark or Scala), Python, PL/SQL

Must have:
· Strong experience in Azure Data Factory (ADF), Azure Databricks (ADB), Synapse, and PySpark; establishing cloud connectivity between different systems such as ADLS, ADF, Synapse, and Databricks.
· A minimum of 5 years' experience with large SQL data marts and expert relational database experience; the candidate should demonstrate the ability to navigate massive volumes of data to deliver effective and efficient data extraction, design, load, and reporting solutions to business partners.
· A minimum of 5 years of troubleshooting and supporting large databases and testing activities; identifying, reporting, and managing database security issues and user access/management; designing database backup, archiving, and storage; performance tuning; ETL imports of large volumes of data extracted from multiple systems; capacity planning.
· Experience in T-SQL programming along with the Azure Data Factory framework and Python scripting.
· Works well independently as well as within a team.
· Proactive and organized, with excellent analytical and problem-solving skills.
· Flexible and willing to learn; a can-do attitude is key.
· Strong verbal and written communication skills.

Good to Have:
· Financial institution data mart experience is an asset.
· Experience in .NET applications is an asset.
· Experience and expertise in Tableau-driven dashboard design is an asset.

Responsibilities / Expectations from the Role:
1. Azure data engineering (ADF, ADB).
2. ETL processes using frameworks such as Azure Data Factory, Synapse, or Databricks (see the orchestration sketch below).
3. Establishing cloud connectivity between different systems such as ADLS, ADF, Synapse, and Databricks.
4. T-SQL programming along with the Azure Data Factory framework and Python scripting.
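For illustration only, the sketch below triggers an Azure Data Factory pipeline run and polls it to completion using the azure-mgmt-datafactory SDK. The subscription, resource group, factory, pipeline, and parameter names are hypothetical.

    # Hedged sketch: trigger and poll an ADF pipeline run with the
    # azure-mgmt-datafactory SDK. All resource names are placeholders.
    import time
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.datafactory import DataFactoryManagementClient

    credential = DefaultAzureCredential()
    client = DataFactoryManagementClient(credential, "<subscription-id>")

    run = client.pipelines.create_run(
        resource_group_name="rg-data",
        factory_name="adf-prod",
        pipeline_name="pl_load_datamart",
        parameters={"load_date": "2024-01-01"},
    )

    # Poll until the run reaches a terminal state.
    while True:
        status = client.pipeline_runs.get("rg-data", "adf-prod", run.run_id).status
        if status not in ("InProgress", "Queued"):
            break
        time.sleep(30)
    print(f"Pipeline finished with status: {status}")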

Posted 3 weeks ago

Apply