2.0 - 6.0 years
0 Lacs
pune, maharashtra
On-site
Adient is a leading global automotive seating supplier, supporting all major automakers in the differentiation of their vehicles through superior quality, technology, and performance. We are seeking a Sr. Data Analytics Lead to help build Adient's data and analytics foundation, directly benefiting our internal business units and our consumers. You are self-motivated and data-curious, especially about how data can be used to optimize business opportunities. In this role you will own projects end-to-end, from conception to operationalization, demonstrating your comprehensive understanding of the full data product development lifecycle. You will employ various analytical techniques to solve complex problems, drive scalable cloud data architectures, and deliver data products that enhance decision making across the organization.

In this role you will also own technical support for released applications used by internal Adient teams, including the daily triage of problem tickets and change requests. You will have 2-3 developer direct reports to accommodate this support as well as new development. The successful candidate can lead medium- to large-scale analytics projects requiring minimal direction, is highly proficient in SQL and cloud-based technologies, has good communication skills, takes the initiative to explore and tackle problems, and is an effective people leader.

The ideal candidate will work within Adient's Advanced Analytics team. You will be part of an empowered, highly capable team collaborating with Business Relationship Managers, Product Owners, Data Engineers, Production Support, and Visualization Developers within multiple business units to understand data analytics needs and translate those requirements into world-class solution architectures. You will lead and mentor a team of solution architects to research, analyze, implement, and support scalable data product solutions that power Adient's analytics across the enterprise and deliver on business priorities.

Own technical support for released internal analytics applications, including the daily triage of problem tickets and change requests. Lead development and execution of reporting and analytics products that enable data-driven business decisions, drive performance, and lead to the accomplishment of annual goals. Lead, hire, develop, and evolve the Analytics team, providing technical direction with the support of other leads and architects. Understand the road ahead and ensure the team has the skills and tools necessary to succeed. Drive the team to develop operationally efficient analytic solutions. Manage resources and budget, and partner with functional and business teams. Advocate sound software development practices and help develop and evangelize great engineering and organizational practices. Lead the team that designs and builds highly scalable data pipelines using new-generation tools and technologies like Azure, Snowflake, Spark, Databricks, SQL, and Python to ingest data from various systems. Work with product owners to ensure priorities are understood and direct the team to support the vision of the larger Analytics organization. Translate complex business problem statements into analysis requirements and work with internal customers to define data product details based on expressed partner needs.
Work closely with business and technical teams to deliver enterprise-grade datasets that are reliable, flexible, scalable, and provide low cost of ownership. Develop SQL queries and data visualizations to fulfill internal customer application reporting requirements, as well as ad-hoc analysis requests, using tools such as Power BI. Thoroughly document business requirements, data architecture solutions, and processes for business and technical audiences. Serve as a domain specialist on data and business processes within your area of focus and find solutions to operational or data issues in the data pipelines. Grow the technical ability of the team.

QUALIFICATIONS
- Bachelor's degree or equivalent with 8+ years of experience in a data engineering, computer science, or statistics field, with at least 2+ years of experience in leadership/management.
- Experience developing Big Data cloud-based applications using SQL, Azure, Snowflake, and Power BI.
- Experience building complex ADF data pipelines and Data Flows to ingest data from on-prem sources, transform it, and sink it into Snowflake. Good understanding of ADF pipeline Activities.
- Familiarity with various Azure connectors to establish on-prem data-source connectivity, as well as Snowflake data-warehouse connectivity over a private network.
- Ability to lead and work with hybrid teams and communicate effectively, both written and verbal, with technical and non-technical multi-functional teams.
- Ability to translate complex business requirements into scalable technical solutions meeting data warehousing design standards. Solid understanding of analytics needs and the proactiveness to build generic solutions that improve efficiency.
- Experience with data visualization and dashboarding techniques to make complex data more accessible, understandable, and usable to drive business decisions and outcomes. Proficient in Power BI.
- Extensive experience in data architecture, defining and maintaining data assets, and developing data architecture strategies to support reporting and data visualization tools.
- Understands common analytical data models such as Kimball; ensures physical data models align with best practice and requirements.
- Thrives in a dynamic environment, keeping composure and a positive attitude.
- Experience in distribution or manufacturing organizations is a plus.

PREFERRED
- Experience with the Snowflake cloud data warehouse.
- Experience with Azure PaaS services.
- Experience with T-SQL, SQL Server, Azure SQL, Snowflake SQL, Oracle SQL.
- Experience with Azure Storage account connectivity.
- Experience developing visualizations with Power BI and BusinessObjects.
- Experience with Databricks.
- Experience with ADLS Gen2.
- Experience with Azure VNet private endpoints on a private network.
- Proficient with Spark and Python.
- Advanced proficiency in SQL: joining multiple data sets across different data grains, query optimization, pivoting data.
- MS Azure certifications.
- Snowflake certifications.
- Experience with other leading commercial cloud platforms such as AWS.
- Experience installing and configuring ODBC and JDBC drivers on Windows.
- Candidate resides in the Plymouth, MI area.

PRIMARY LOCATION
Pune Tech Center
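For a sense of the hands-on side of this role, a minimal sketch of loading a curated Spark DataFrame into Snowflake via the Spark-Snowflake connector; the storage paths, connection options, and table name are illustrative assumptions, not details from the posting.

```python
# Minimal sketch, assuming the Spark-Snowflake connector is on the cluster.
# All connection values and paths below are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("snowflake-sink").getOrCreate()

curated = spark.read.parquet("abfss://curated@<storage>.dfs.core.windows.net/sales/")

sf_options = {
    "sfURL": "<account>.snowflakecomputing.com",
    "sfUser": "<user>",
    "sfPassword": "<password>",  # prefer Key Vault or a secrets manager in practice
    "sfDatabase": "ANALYTICS",
    "sfSchema": "PUBLIC",
    "sfWarehouse": "COMPUTE_WH",
}

(curated.write
    .format("snowflake")        # connector short name on Databricks
    .options(**sf_options)
    .option("dbtable", "SALES_CURATED")
    .mode("overwrite")
    .save())
```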
Posted 3 days ago
5.0 - 12.0 years
0 Lacs
noida, uttar pradesh
On-site
You are a seasoned Delivery Lead specializing in Azure Integration Services with over 12 years of experience. Your role involves managing and delivering enterprise-grade Azure projects, including implementation, migration, and upgrades. As a strategic leader, you should have in-depth expertise in Azure services and a proven track record of successfully managing enterprise customers and driving project success across Azure Integration and Data platforms. Your key responsibilities include leading end-to-end delivery of Azure integration, data, and analytics projects, ensuring scope, timeline, and budget adherence. You will plan and manage execution roadmaps, define milestones, handle dependencies, and oversee enterprise-level implementations, migrations, and upgrades using Azure services while ensuring compliance with best practices in security, performance, and governance. In terms of customer and stakeholder engagement, you will collaborate with enterprise customers to understand their business needs and translate them into technical solutions. Additionally, you will serve as a trusted advisor to clients, aligning technology with business objectives, and engage and manage stakeholders, including business users, architects, and engineering teams. Your technical leadership responsibilities include defining and guiding architecture, design patterns, and best practices for Azure Integration Services. You will deliver integration solutions using various Azure services such as Logic Apps, APIM, Azure Functions, Event Grid, and Service Bus. Leveraging ADF, Azure Databricks, and Synapse Analytics for data processing and analytics will be crucial, along with promoting automation and DevOps culture within the team. As the Delivery Lead, you will lead a cross-functional team of Azure developers, engineers, and architects, provide technical mentorship, and drive team performance. You will also coordinate with Microsoft and third-party vendors to ensure seamless delivery and support pre-sales activities by contributing to solution architecture, proposals, and effort estimation. To excel in this role, you must possess deep expertise in Azure Integration Services, hands-on experience with Azure App Services, Microservices architecture, and serverless solutions, and proficiency in data platforms such as Azure Data Factory, Azure Databricks, Synapse Analytics, and ADLS Gen2. A solid understanding of Azure security and governance tools is essential, along with experience in DevOps tools like Azure DevOps, CI/CD, Terraform, and ARM templates. In terms of professional experience, you should have at least 10 years in IT with a minimum of 5 years in Azure integration and data platforms. A proven track record in leading enterprise migration and implementation projects, sound knowledge of hybrid, on-prem, and cloud-native integration architectures, and experience in delivering projects using Agile, Scrum, and DevOps frameworks are required. Your soft skills should include strong leadership and stakeholder engagement abilities, effective problem-solving skills, and excellent verbal and written communication, presentation, and documentation skills. Preferred qualifications for this role include Microsoft certifications in Azure Solutions Architecture, Integration Services, or Data Engineering, experience in integration with SAP, Salesforce, or other enterprise applications, and awareness of AI/ML use cases within Azure's data ecosystem. 
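As a rough illustration of the integration services named above, a minimal Python sketch that publishes a message to an Azure Service Bus queue using the azure-servicebus SDK; the connection string and queue name are assumed placeholders.

```python
# Minimal sketch: sending an event to a Service Bus queue (azure-servicebus SDK).
# The connection string and queue name are placeholders, not values from the posting.
import json
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN_STR = "<service-bus-connection-string>"  # store in Key Vault in practice
QUEUE = "orders"

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    with client.get_queue_sender(QUEUE) as sender:
        payload = {"orderId": 123, "status": "created"}
        sender.send_messages(ServiceBusMessage(json.dumps(payload)))
```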
This role is primarily based in Noida with a hybrid work model, and you should be willing to travel for client meetings as required.,
Posted 3 days ago
6.0 - 10.0 years
0 Lacs
karnataka
On-site
As a Lead Azure Data Engineer at CGI, you will have the opportunity to be part of a dynamic team of builders who are dedicated to helping clients succeed. With our global resources, expertise, and stability, we aim to achieve results for our clients and members. If you are looking for a challenging role that offers professional growth and development, this is the perfect opportunity for you. In this role, you will be responsible for supporting the development and maintenance of our trading and risk data platform. Your main focus will be on designing and building data foundations and end-to-end solutions to maximize the value from data. You will collaborate with other data professionals to integrate and enrich trade data from various ETRM systems and create scalable solutions to enhance the usage of TRM data across different platforms and teams.

Key Responsibilities:
- Implement and manage a Lakehouse using Databricks and the Azure tech stack (ADLS Gen2, ADF, Azure SQL).
- Utilize SQL, Python, Apache Spark, and Delta Lake for data engineering tasks.
- Implement data integration techniques, ETL processes, and data pipeline architectures.
- Develop CI/CD pipelines for code management using Git.
- Create and maintain technical documentation for the platform.
- Ensure the platform is developed with software engineering, data analytics, and data security best practices.
- Optimize data processing and storage systems for high performance, reliability, and security.
- Work in Agile methodology and utilize ADO Boards for sprint deliveries.
- Demonstrate excellent communication skills to convey technical and business concepts effectively.
- Collaborate with team members at all levels to share ideas and knowledge effectively.

Required Qualifications:
- Bachelor's degree in computer science or a related field.
- 6 to 10 years of experience in software development/engineering.
- Proficiency in Azure technologies including Databricks, ADLS Gen2, ADF, and Azure SQL.
- Strong hands-on experience with SQL, Python, Apache Spark, and Delta Lake.
- Knowledge of data integration techniques, ETL processes, and data pipeline architectures.
- Experience in building CI/CD pipelines and using Git for code management.
- Familiarity with Agile methodology and ADO Boards for sprint deliveries.

At CGI, we believe in ownership, teamwork, respect, and belonging. As a CGI Partner, you will have the opportunity to turn meaningful insights into action, develop innovative solutions, and collaborate with a diverse team to shape your career and contribute to our collective success. Join us on this exciting journey of growth and innovation at one of the largest IT and business consulting services firms in the world.
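A small, hedged sketch of the Delta Lake work described above: an idempotent upsert of incoming trade records into a Delta table with PySpark. The table paths and join key are illustrative assumptions.

```python
# Minimal sketch: upserting trade records into a Delta table (Databricks / Delta Lake).
# Paths and the join key are assumptions for illustration.
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.appName("trade-upsert").getOrCreate()

incoming = spark.read.json("abfss://raw@<storage>.dfs.core.windows.net/etrm/trades/")

target = DeltaTable.forPath(
    spark, "abfss://curated@<storage>.dfs.core.windows.net/trades/")

# MERGE keeps the load idempotent: reruns update rather than duplicate rows.
(target.alias("t")
    .merge(incoming.alias("s"), "t.trade_id = s.trade_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())
```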
Posted 4 days ago
6.0 - 10.0 years
0 Lacs
thiruvananthapuram, kerala
On-site
You are an experienced Data Engineer with at least 6 years of relevant experience. In this role, you will be working as part of a team to develop Data and Analytics solutions. Your responsibilities will include participating in the development of cloud data warehouses, data as a service, and business intelligence solutions. You should be able to provide forward-thinking solutions in data integration and ensure the delivery of a quality product. Experience in developing Modern Data Warehouse solutions using Azure or AWS Stack is required. To be successful in this role, you should have a Bachelor's degree in computer science & engineering or equivalent demonstrable experience. It is desirable to have Cloud Certifications in Data, Analytics, or Ops/Architect space. Your primary skills should include: - 6+ years of experience as a Data Engineer, with a key/lead role in implementing large data solutions - Programming experience in Scala or Python, SQL - Minimum of 1 year of experience in MDM/PIM Solution Implementation with tools like Ataccama, Syndigo, Informatica - Minimum of 2 years of experience in Data Engineering Pipelines, Solutions implementation in Snowflake - Minimum of 2 years of experience in Data Engineering Pipelines, Solutions implementation in Databricks - Working knowledge of some AWS and Azure Services like S3, ADLS Gen2, AWS Redshift, AWS Glue, Azure Data Factory, Azure Synapse - Demonstrated analytical and problem-solving skills - Excellent written and verbal communication skills in English Your secondary skills should include familiarity with Agile Practices, Version control platforms like GIT, CodeCommit, problem-solving skills, ownership mentality, and a proactive approach rather than reactive. This is a permanent position based in Trivandrum/Bangalore. If you meet the requirements and are looking for a challenging opportunity in the field of Data Engineering, we encourage you to apply before the close date on 11-10-2024.,
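By way of illustration for the Snowflake pipeline work mentioned above, a minimal sketch using the snowflake-connector-python package; the account, credentials, and object names are placeholders assumed for the example.

```python
# Minimal sketch: running a merge statement in Snowflake from Python.
# Account, credentials, and object names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account>",
    user="<user>",
    password="<password>",   # prefer a secrets manager in practice
    warehouse="COMPUTE_WH",
    database="ANALYTICS",
    schema="STAGING",
)
try:
    cur = conn.cursor()
    # Merge newly staged rows into the curated table (illustrative SQL).
    cur.execute("""
        MERGE INTO curated.products t
        USING staging.products s ON t.product_id = s.product_id
        WHEN MATCHED THEN UPDATE SET t.name = s.name
        WHEN NOT MATCHED THEN INSERT (product_id, name) VALUES (s.product_id, s.name)
    """)
finally:
    conn.close()
```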
Posted 5 days ago
3.0 - 7.0 years
0 Lacs
chennai, tamil nadu
On-site
As an Azure Data Engineer within our team, you will play a crucial role in enhancing and supporting existing Data & Analytics solutions by utilizing Azure Data Engineering technologies. Your primary focus will be on developing, maintaining, and deploying IT products and solutions that cater to various business users, with a strong emphasis on performance, scalability, and reliability. Your responsibilities will include incident classification and prioritization, log analysis, coordination with SMEs, escalation of complex issues, root cause analysis, stakeholder communication, code reviews, bug fixing, enhancements, and performance tuning. You will design, develop, and support data pipelines using Azure services, implement ETL techniques, cleanse and transform datasets, orchestrate workflows, and collaborate with both business and technical teams. To excel in this role, you should possess 3 to 6 years of experience in IT and Azure data engineering technologies, with a strong command of Azure Databricks, Azure Synapse, ADLS Gen2, Python, PySpark, SQL, JSON, Parquet, Teradata, Snowflake, Azure DevOps, and CI/CD pipeline deployments. Knowledge of Data Warehousing concepts, data modeling best practices, and familiarity with SNOW (ServiceNow) will be advantageous. In addition to technical skills, you should demonstrate the ability to work independently and in virtual teams, strong analytical and problem-solving abilities, experience in Agile practices, effective task and time management, and clear communication and documentation skills. Experience with Business Intelligence tools, particularly Power BI, and the DP-203 certification (Azure Data Engineer Associate) will be considered a plus. Join us in Chennai, Tamil Nadu, India, and be part of our dynamic team working in the FMCG/Foods/Beverage domain.
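A minimal sketch of the cleanse-and-transform step this posting describes, reading raw JSON and landing Parquet with PySpark; the paths and column names are assumptions for illustration.

```python
# Minimal sketch: cleansing raw JSON and writing Parquet (names are placeholders).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("cleanse-orders").getOrCreate()

raw = spark.read.json("abfss://raw@<storage>.dfs.core.windows.net/orders/")

clean = (raw
    .dropDuplicates(["order_id"])                              # remove replays
    .filter(F.col("order_id").isNotNull())                     # drop bad keys
    .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
    .withColumn("amount", F.col("amount").cast("decimal(18,2)")))

clean.write.mode("overwrite").parquet(
    "abfss://curated@<storage>.dfs.core.windows.net/orders/")
```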
Posted 1 week ago
3.0 - 5.0 years
15 - 25 Lacs
Noida
Work from Office
We are looking for an experienced Data Engineer with strong expertise in Databricks and Azure Data Factory (ADF) to design, build, and manage scalable data pipelines and integration solutions. The ideal candidate will have a solid background in big data technologies, cloud platforms, and data processing frameworks to support enterprise-level data transformation and analytics initiatives.

Roles and Responsibilities
- Design, develop, and maintain robust data pipelines using Azure Data Factory and Databricks.
- Build and optimize data flows and transformations for structured and unstructured data.
- Develop scalable ETL/ELT processes to extract data from various sources including SQL, APIs, and flat files.
- Implement data quality checks, error handling, and performance tuning of data pipelines.
- Collaborate with data scientists, analysts, and business stakeholders to understand data requirements.
- Work with Azure services such as Azure Data Lake Storage (ADLS), Azure Synapse Analytics, and Azure SQL.
- Participate in code reviews, version control, and CI/CD processes.
- Ensure data security, privacy, and compliance with governance standards.

Required Skills
- Strong hands-on experience with Azure Data Factory and Azure Databricks (Spark-based development).
- Proficiency in Python, SQL, and PySpark for data manipulation.
- Experience with Delta Lake, data versioning, and streaming/batch data processing.
- Working knowledge of Azure services such as ADLS, Azure Blob Storage, and Azure Key Vault.
- Familiarity with DevOps, Git, and CI/CD pipelines in data engineering workflows.
- Strong understanding of data modeling, data warehousing, and performance tuning.
- Excellent analytical, communication, and problem-solving skills.
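The data-quality responsibility above might look something like this in PySpark: a simple null/duplicate gate before loading. The thresholds, paths, and column names are assumptions.

```python
# Minimal sketch: a data-quality gate before loading (names are assumptions).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

df = spark.read.parquet("abfss://staging@<storage>.dfs.core.windows.net/customers/")

total = df.count()
null_keys = df.filter(F.col("customer_id").isNull()).count()
dup_keys = total - df.dropDuplicates(["customer_id"]).count()

# Fail fast so the pipeline surfaces the problem instead of loading bad data.
if null_keys > 0 or dup_keys > 0:
    raise ValueError(
        f"Data-quality gate failed: {null_keys} null keys, {dup_keys} duplicate keys")

df.write.mode("append").format("delta").save(
    "abfss://curated@<storage>.dfs.core.windows.net/customers/")
```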
Posted 1 week ago
9.0 - 12.0 years
15 - 20 Lacs
Chennai
Work from Office
Job Title: Data Engineer Lead / Architect (ADF)
Experience: 9-12 Years
Location: Remote / Hybrid

Role and Responsibilities:
Talk to client stakeholders and understand the requirements for building their data warehouse / data lake / data lakehouse. Design, develop, and maintain data pipelines in Azure Data Factory (ADF) for ETL from on-premise and cloud-based sources. Design, develop, and maintain data warehouses and data lakes in Azure. Run large data platform and other related programs to provide business intelligence support. Design and develop data models to support business intelligence solutions. Implement best practices in data modelling and data warehousing. Troubleshoot and resolve issues related to ETL and data connections.

Skills Required:
- Excellent written and verbal communication skills
- Excellent knowledge and experience in ADF
- Well versed with ADLS Gen2
- Knowledge of SQL for data extraction and transformation
- Ability to work with various data sources (Excel, SQL databases, APIs, etc.)
- Knowledge of SAS would be an added advantage
- Knowledge of Power BI would be an added advantage
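As a rough sketch of the extraction side, here is one way to read an on-premise SQL table into Spark over JDBC before landing it in the lake; the URL, driver, and table are placeholder assumptions (in ADF itself this step would typically be a Copy activity instead).

```python
# Minimal sketch: pulling a SQL Server table into Spark over JDBC.
# Host, credentials, and object names are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("jdbc-extract").getOrCreate()

orders = (spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://<host>:1433;databaseName=Sales")
    .option("dbtable", "dbo.Orders")
    .option("user", "<user>")
    .option("password", "<password>")   # prefer Key Vault in practice
    .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
    .load())

orders.write.mode("overwrite").parquet(
    "abfss://raw@<storage>.dfs.core.windows.net/orders/")
```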
Posted 2 weeks ago
4.0 - 8.0 years
0 Lacs
kerala
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. As part of our GDS Consulting team, you will be part of the NCLC team delivering specifically to the Microsoft account. You will be working on the latest Microsoft BI technologies and will collaborate with other teams within Consulting services.

The opportunity
We're looking for resources with expertise in Microsoft BI, Power BI, Azure Data Factory, and Databricks to join our Data Insights team. This is a fantastic opportunity to be part of a leading firm whilst being instrumental in the growth of our service offering.

Your Key Responsibilities
Responsible for managing multiple client engagements. Understand and analyse business requirements by working with various stakeholders and create the appropriate information architecture, taxonomy, and solution approach. Work independently to gather requirements, and handle cleansing, extraction, and loading of data. Translate business and analyst requirements into technical code. Create interactive and insightful dashboards and reports using Power BI, connecting to various data sources and implementing DAX calculations. Design and build complete ETL/Azure Data Factory processes moving and transforming data for ODS, Staging, and Data Warehousing. Design and develop solutions in Databricks, Scala, Spark, and SQL to process and analyze large datasets, perform data transformations, and build data models. Design SQL schemas, database schemas, stored procedures, functions, and T-SQL queries.

Skills And Attributes For Success
Collaborating with other members of the engagement team to plan the engagement and develop work program timelines, risk assessments and other documents/templates. Able to manage senior stakeholders. Experience in leading teams to execute high quality deliverables within stipulated timelines. Skills in Power BI, Azure Data Factory, Databricks, Azure Synapse, data modelling, DAX, Power Query, Microsoft Fabric. Strong proficiency in Power BI, including data modelling, DAX, and creating interactive visualizations. Solid experience with Azure Databricks, including working with Spark, PySpark (or Scala), and optimizing big data processing. Good understanding of various Azure services relevant to data engineering, such as Azure Blob Storage, ADLS Gen2, Azure SQL Database/Synapse Analytics. Strong SQL skills and experience with one of the following: Oracle, SQL, Azure SQL. Good to have experience in SSAS or Azure SSAS and Agile project management. Basic knowledge of Azure Machine Learning services. Excellent written and communication skills and the ability to deliver technical demonstrations. Quick learner with a can-do attitude. Demonstrating and applying strong project management skills, inspiring teamwork and responsibility with engagement team members.

To qualify for the role, you must have a bachelor's or master's degree and a minimum of 4-7 years of experience, preferably with a background in a professional services firm, plus excellent communication skills; consulting experience preferred. Ideally, you'll also have the analytical ability to manage multiple projects and prioritize tasks into manageable work products, and can operate independently or with minimum supervision.
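A small sketch of the Databricks-side work named above, using PySpark and Spark SQL to shape a dataset that a Power BI model could sit on; the table and column names are assumptions.

```python
# Minimal sketch: shaping a reporting dataset in Databricks for downstream BI.
# Table and column names are illustrative assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("bi-shaping").getOrCreate()

spark.read.parquet(
    "abfss://curated@<storage>.dfs.core.windows.net/sales/"
).createOrReplaceTempView("sales")

monthly = spark.sql("""
    SELECT region,
           date_trunc('month', order_date) AS month,
           SUM(amount)                     AS revenue
    FROM sales
    GROUP BY region, date_trunc('month', order_date)
""")

# Land as a table a Power BI dataset can point at (schema assumed to exist).
monthly.write.mode("overwrite").format("delta").saveAsTable("reporting.monthly_sales")
```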
What Working At EY Offers At EY, we're dedicated to helping our clients, from startups to Fortune 500 companies and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer: Support, coaching and feedback from some of the most engaging colleagues around. Opportunities to develop new skills and progress your career. The freedom and flexibility to handle your role in a way that's right for you. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.,
Posted 2 weeks ago
10.0 - 16.0 years
15 - 25 Lacs
Pune, Chennai, Bengaluru
Work from Office
Roles and Responsibilities
- Design, develop, and maintain large-scale data pipelines using Azure Data Factory (ADF) to extract, transform, and load data from various sources into Azure Data Lake Storage (ADLS).
- Develop complex SQL queries to optimize database performance and troubleshoot issues in Azure SQL databases.
- Collaborate with cross-functional teams to gather requirements for data processing needs and design solutions that meet business needs.
- Implement data quality checks using PySpark on big data datasets stored in Azure Blobs or ADLS.
- Troubleshoot technical issues related to ADF workflows, SQL queries, and Python scripts.

Desired Candidate Profile
- 8+ years of experience as an Azure Data Engineer with expertise in ADF, ADLS Gen2, Azure Data Lake, Databricks, PySpark, SQL, Python.
- Bachelor's degree in any specialization (BCA/B.Tech/B.E.).
- Strong understanding of cloud computing concepts and experience working with the Microsoft Azure platform.

Location: Chennai, Coimbatore, Hyderabad, Bangalore, Pune & Gurgaon.
Posted 1 month ago
5.0 - 10.0 years
15 - 25 Lacs
Pune, Chennai, Bengaluru
Work from Office
Roles and Responsibilities
- Design, develop, and maintain large-scale data pipelines using Azure Data Factory (ADF) to extract, transform, and load data from various sources into Azure Data Lake Storage (ADLS).
- Develop complex SQL queries to optimize database performance and troubleshoot issues in Azure SQL databases.
- Collaborate with cross-functional teams to gather requirements for data processing needs and design solutions that meet business needs.
- Implement data quality checks using PySpark on big data datasets stored in Azure Blobs or ADLS.
- Troubleshoot technical issues related to ADF workflows, SQL queries, and Python scripts.

Desired Candidate Profile
- 5-10 years of experience as an Azure Data Engineer with expertise in ADF, ADLS Gen2, Azure Data Lake, Databricks, PySpark, SQL, Python.
- Bachelor's degree in any specialization (BCA/B.Tech/B.E.).
- Strong understanding of cloud computing concepts and experience working with the Microsoft Azure platform.

Location: Chennai, Coimbatore, Hyderabad, Bangalore, Pune & Gurgaon.
Posted 1 month ago
3.0 - 8.0 years
3 - 6 Lacs
Bengaluru
Work from Office
We are looking for a skilled SQL PySpark professional with 3 to 8 years of experience to join our team. The ideal candidate will have expertise in developing data pipelines and transforming data using Databricks, Synapse notebooks, and Azure Data Factory.

Roles and Responsibility
- Collaborate with technical architects and cloud solutions teams to design data pipelines, marts, and reporting solutions.
- Code, test, and optimize Databricks jobs for efficient data processing and report generation.
- Set up scalable data pipelines integrating with various data sources and cloud platforms using Databricks.
- Ensure best practices are followed in terms of code quality, data security, and scalability.
- Participate in code and design reviews to maintain high development standards.
- Optimize data querying layers to enhance performance and support analytical requirements.
- Collaborate with data scientists and analysts to support machine learning workflows and analytic needs.
- Stay updated with the latest developments in Databricks and associated technologies to drive innovation.

Job Requirements
- Proficiency in PySpark or Scala and SQL for data processing tasks.
- Hands-on experience with Azure Databricks, Delta Lake, Delta Live Tables, Auto Loader, and Databricks SQL.
- Expertise with Azure Data Lake Storage (ADLS) Gen2 for optimized data storage and retrieval.
- Strong knowledge of data modeling, ETL processes, and data warehousing concepts.
- Experience with Power BI for dashboarding and reporting is a plus.
- Familiarity with Azure Synapse for analytics and integration tasks is desirable.
- Knowledge of Spark Streaming for real-time data stream processing is an advantage.
- MLOps knowledge for integrating machine learning into production workflows is beneficial.
- Familiarity with Azure Resource Manager (ARM) templates for infrastructure as code (IaC) practices is preferred.
- Demonstrated expertise of 4-5 years in developing data ingestion and transformation pipelines using Databricks, Synapse notebooks, and Azure Data Factory.
- Solid understanding and hands-on experience with Delta tables, Delta Lake, and Azure Data Lake Storage Gen2.
- Experience in efficiently using Auto Loader and Delta Live Tables for seamless data ingestion and transformation.
- Proficiency in building and optimizing query layers using Databricks SQL.
- Demonstrated experience integrating Databricks with Azure Synapse, ADLS Gen2, and Power BI for end-to-end analytics solutions.
- Prior experience in developing, optimizing, and deploying Power BI reports.
- Familiarity with modern CI/CD practices, especially in the context of Databricks and cloud-native solutions.
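A hedged sketch of the Auto Loader pattern this posting calls out, streaming new files from ADLS Gen2 into a bronze Delta table on Databricks; the paths are placeholder assumptions, and `spark` is the session Databricks provides in a notebook.

```python
# Minimal sketch: Databricks Auto Loader (cloudFiles) ingesting JSON into Delta.
# Paths are placeholders; assumes a Databricks notebook where `spark` exists.
stream = (spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation",
            "abfss://meta@<storage>.dfs.core.windows.net/schemas/events/")
    .load("abfss://raw@<storage>.dfs.core.windows.net/events/"))

(stream.writeStream
    .format("delta")
    .option("checkpointLocation",
            "abfss://meta@<storage>.dfs.core.windows.net/checkpoints/events/")
    .trigger(availableNow=True)   # process what is new, then stop
    .start("abfss://bronze@<storage>.dfs.core.windows.net/events/"))
```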
Posted 1 month ago
8.0 - 12.0 years
20 - 32 Lacs
Hyderabad, Ahmedabad
Hybrid
We're Hiring: Senior Data Engineer – Azure & Snowflake Expert
Location: Hyderabad / Ahmedabad
Experience: 8-12 Years
Immediate Joiners Preferred

Are you passionate about designing scalable data pipelines and building high-performing data platforms in the cloud? We are looking for a Senior Data Engineer with strong hands-on expertise in Snowflake and Azure Data Factory to join our growing team.

Key Responsibilities:
- Design and optimize scalable data pipelines for large datasets.
- Develop and orchestrate ETL/ELT workflows using Azure Data Factory (ADF).
- Manage data storage with Azure Blob Storage and ADLS Gen2.
- Implement event-driven automations using Azure Logic Apps.
- Write robust SQL queries and stored procedures, and build data models.
- Ensure data quality, security, and governance practices are enforced.
- Troubleshoot and optimize existing pipelines and infrastructure.

Must-Have Skills:
- Expert-level Snowflake knowledge: design, development, and optimization.
- Proficiency in the Azure data ecosystem: ADF, Blob Storage, ADLS Gen2, Logic Apps.
- Strong SQL expertise for complex data manipulation.
- Familiarity with Git and version control.
- Excellent problem-solving and communication skills.

Nice to Have:
- Experience with dbt (data build tool).
- Knowledge of Python and DevOps/CI-CD practices for data engineering.
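For the storage management duties above, a minimal azure-storage-blob sketch that lands an extract in Blob Storage / ADLS Gen2; the connection string, container, and blob path are assumed placeholders.

```python
# Minimal sketch: uploading an extract to Blob Storage / ADLS Gen2
# (azure-storage-blob SDK). All names below are placeholders.
from azure.storage.blob import BlobServiceClient

CONN_STR = "<storage-connection-string>"  # prefer Key Vault / managed identity

service = BlobServiceClient.from_connection_string(CONN_STR)
blob = service.get_blob_client(container="raw", blob="sales/2024/01/extract.csv")

with open("extract.csv", "rb") as fh:
    blob.upload_blob(fh, overwrite=True)  # idempotent re-upload
```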
Posted 1 month ago
6.0 - 11.0 years
5 - 9 Lacs
Hyderabad
Work from Office
6+ years of experience in data engineering projects using Cosmos DB and Azure Databricks (minimum 3-5 projects). Strong expertise in building data engineering solutions using Azure Databricks and Cosmos DB. Strong T-SQL programming skills, or skills in any other flavor of SQL. Experience working with high-volume data, large objects, and complex data transformations. Experience working in DevOps environments integrated with Git for version control and CI/CD pipelines. Good understanding of data modelling for data warehouses and data marts. Strong verbal and written communication skills. Ability to learn, contribute, and grow in a fast-paced environment. Nice to have: Expertise in Microsoft Azure is mandatory, including components like Azure Data Factory, ADLS Gen2, and Azure Event Hubs. Experience using Jira and ServiceNow in project environments. Experience in implementing data warehouse and ETL solutions.
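A minimal sketch of the Cosmos DB side using the azure-cosmos SDK; the endpoint, key, database, container, and partition key are all placeholders assumed for illustration.

```python
# Minimal sketch: upserting a document into Cosmos DB (azure-cosmos SDK).
# Endpoint, key, and names are placeholders.
from azure.cosmos import CosmosClient

client = CosmosClient(
    "https://<account>.documents.azure.com:443/", credential="<key>")
container = client.get_database_client("analytics").get_container_client("trades")

container.upsert_item({
    "id": "trade-123",        # Cosmos requires an 'id' field
    "book": "commodities",    # assume 'book' is the container's partition key
    "notional": 1_000_000,
})
```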
Posted 1 month ago
10.0 - 12.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Req ID: 329815 NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a .NET Full Stack + Azure Developer to join our team in Bangalore, Karnataka (IN-KA), India (IN). Experience: 10 to 12 years Responsibilities: Design and architect web applications using ASP.NET Core 6 and C# for backend development. Develop single-page applications (SPAs) using React and Redux for frontend development. Implement unit testing frameworks such as xUnit for backend code and Jest for frontend code to ensure high-quality software. Collaborate with cross-functional teams to define, design, and ship new features. Ensure the performance, quality, and responsiveness of applications. Identify and correct bottlenecks and fix bugs. Maintain code quality, organization, and automation. Provide technical leadership and mentoring to junior & senior developers. Stay updated with the latest industry trends and technologies. Required Skills: Proficiency in ASP.NET Core 6, C#, React, Redux, and Python. Strong experience with RESTful API design and implementation. Expertise in SQL Server and database management. Hands-on experience with Azure services including Azure Cognitive Search, Azure Web App, Azure App Service, Azure Function App, Azure Application Insights, Azure Logic App, Azure Data Factory, Azure Search Service, Azure SQL, ADLS Gen2, Azure Storage Account, Azure Key Vault, API Connection, Alert Rules, and Azure AI. Excellent problem-solving skills and attention to detail. Strong communication and collaboration skills. About NTT DATA NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. NTT DATA endeavors to make its website accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status.
Posted 1 month ago
6.0 - 8.0 years
8 - 10 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
Senior Data Engineer (Remote, Contract, 6 Months): Databricks, ADF, and PySpark. We are hiring a Senior Data Engineer for a 6-month remote contract position. The ideal candidate is highly skilled in building scalable data pipelines and working within the Azure cloud ecosystem, especially Databricks, ADF, and PySpark. You'll work closely with cross-functional teams to deliver enterprise-level data engineering solutions.

Key Responsibilities
- Build scalable ETL pipelines and implement robust data solutions in Azure.
- Manage and orchestrate workflows using ADF, Databricks, ADLS Gen2, and Key Vaults.
- Design and maintain secure and efficient data lake architecture.
- Work with stakeholders to gather data requirements and translate them into technical specs.
- Implement CI/CD pipelines for seamless data deployment using Azure DevOps.
- Monitor data quality, performance bottlenecks, and scalability issues.
- Write clean, organized, reusable PySpark code in an Agile environment.
- Document pipelines, architectures, and best practices for reuse.

Must-Have Skills
- Experience: 6+ years in Data Engineering
- Tech Stack: SQL, Python, PySpark, Spark, Azure Databricks, ADF, ADLS Gen2, Azure DevOps, Key Vaults
- Core Expertise: Data Warehousing, ETL, Data Pipelines, Data Modelling, Data Governance
- Agile, SDLC, Containerization (Docker), Clean coding practices

Good-to-Have Skills
- Event Hubs, Logic Apps
- Power BI
- Strong logic building and competitive programming background

Contract Details
Role: Senior Data Engineer
Mode: Remote
Duration: 6 Months
Locations: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune
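One way the Key Vault piece above is commonly wired up from Python: a hedged sketch using azure-identity and azure-keyvault-secrets. The vault URL and secret name are assumptions.

```python
# Minimal sketch: fetching a pipeline credential from Azure Key Vault.
# Vault URL and secret name are placeholders.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

client = SecretClient(
    vault_url="https://<vault-name>.vault.azure.net/",
    credential=DefaultAzureCredential(),  # picks up managed identity on Azure
)

sql_password = client.get_secret("sql-password").value
```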
Posted 1 month ago
6.0 - 11.0 years
6 - 11 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Key Responsibilities: Build scalable ETL pipelines and implement robust data solutions using Azure technologies. Manage and orchestrate workflows with Azure Data Factory (ADF), Databricks, ADLS Gen2, and Azure Key Vault. Design, maintain, and optimize secure and efficient data lake architectures. Collaborate with stakeholders to gather requirements and translate them into detailed technical specifications. Implement CI/CD pipelines to enable automated, seamless data deployment leveraging Azure DevOps. Monitor and troubleshoot data quality, performance bottlenecks, and scalability issues in production pipelines. Write clean, modular, and reusable PySpark code adhering to Agile development methodologies. Maintain thorough documentation of data pipelines, architecture designs, and best practices for team reuse. Must-Have Skills: 6+ years of experience in Data Engineering roles. Strong expertise with SQL, Python, PySpark, Apache Spark. Hands-on experience with Azure Databricks, Azure Data Factory (ADF), ADLS Gen2, Azure DevOps, and Azure Key Vault. Deep knowledge of data warehousing concepts, ETL development, data modeling, and governance. Familiarity with Agile software development lifecycle (SDLC) and containerization tools like Docker. Commitment to clean coding practices and maintaining high-quality codebases. Good to Have Skills: Experience with Azure Event Hubs and Logic Apps. Exposure to Power BI for data visualization. Strong problem-solving skills with a background in logic building and competitive programming.
Posted 1 month ago
8.0 - 12.0 years
12 - 16 Lacs
Pune
Work from Office
Roles & Responsibilities: Design and develop end-to-end data solutions using PySpark, Python, SQL, and Kafka, leveraging Microsoft Fabric's capabilities.

Requirements:
- Hands-on experience with Microsoft Fabric, including Lakehouse, Data Factory, and Synapse.
- Strong expertise in PySpark and Python for large-scale data processing and transformation.
- Deep knowledge of Azure data services (ADLS Gen2, Azure Databricks, Synapse, ADF, Azure SQL, etc.).
- Experience in designing, implementing, and optimizing end-to-end data pipelines on Azure.
- Understanding of Azure infrastructure setup (networking, security, and access management) is good to have.
- Healthcare domain knowledge is a plus but not mandatory.
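A hedged sketch of the Kafka piece named above, reading a topic with Spark Structured Streaming and landing it in a Lakehouse table; the broker, topic, and paths are placeholder assumptions, and the Kafka connector package must be available on the cluster.

```python
# Minimal sketch: Kafka -> Delta with Spark Structured Streaming.
# Broker, topic, and paths are placeholders; requires the spark-sql-kafka package.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

events = (spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "<broker>:9092")
    .option("subscribe", "device-events")
    .load()
    .select(F.col("key").cast("string"), F.col("value").cast("string")))

(events.writeStream
    .format("delta")
    .option("checkpointLocation", "Files/checkpoints/device-events")  # Fabric-style path, an assumption
    .start("Tables/device_events"))
```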
Posted 1 month ago
5.0 - 10.0 years
18 - 30 Lacs
Noida
Remote
Role Title: Sr. Azure Data Platform Engineer
Location: India (1 remote role and 5 WFO at the Noida location; candidates need to have L2 and L3 support experience with the services below)

We are seeking an Azure Data Platform Engineer with a strong focus on administration and hands-on experience in Azure platform engineering services. Ideal candidates should have expertise in administering services such as:
- Azure Key Vault
- Function App & Logic App
- Event Hub
- App Services
- Azure Data Factory (administration)
- Azure Monitor & Log Analytics
- Azure Databricks (administration)
- ETL processes
- Cosmos DB (administration)
- Azure DevOps & CI/CD pipelines
- Azure Synapse Analytics (administration)
- Python / shell scripting
- Azure Data Lake Storage (ADLS)
- Azure Kubernetes Service (AKS)

Additional knowledge of Tableau and Power BI would be a plus. Candidates should have hands-on experience managing and ensuring the stability, security, and performance of these platforms, with a focus on automation, monitoring, and incident management.

- Proficient in distributed system architectures and Azure data engineering services like Event Hub, Data Factory, ADLS Gen2, Cosmos DB, Synapse, Databricks, APIM, Function App, Logic App, and App Services.
- Implement and manage infrastructure using IaC tools such as Azure Resource Manager (ARM) templates and Terraform.
- Manage containerized applications using Docker and orchestrate them with Azure Kubernetes Service (AKS).
- Set up and manage monitoring, logging, and alerting systems using Azure Monitor, Log Analytics, and Application Insights.
- Implement disaster recovery (DR) strategies, backups, and failover mechanisms for critical workloads.
- Automate infrastructure provisioning, scaling, and management for high availability and efficiency.
- Experienced in managing and maintaining clusters across Development, Test, Preproduction, and Production environments on Azure.
- Skilled in defining, scheduling, and monitoring job flows, with proactive alert setup.
- Adept at troubleshooting failed jobs in Azure tools like Databricks and Data Factory, performing root cause analysis, and applying corrective measures.
- Hands-on experience with distributed streaming tools like Event Hub.
- Expertise in designing and managing backup and disaster recovery solutions using Infrastructure as Code (IaC) with Terraform.
- Strong experience in automating processes using Python and shell scripting, and working with Jenkins and Azure DevOps.
- Proficient in designing and maintaining Azure CI/CD pipelines for seamless code integration, testing, and deployment.
- Experienced in monitoring and troubleshooting VM resources such as memory, CPU, OS, storage, and network.
- Skilled at monitoring applications and advising developers on improving job and workflow performance.
- Capable of reviewing and resolving log file issues for system and application components.
- Adaptable to evolving technologies, with a strong sense of responsibility and accomplishment.
- Knowledgeable in agile methodologies for software delivery.
- 5-15 years of experience with Azure and cloud platforms, leveraging cloud-native tools to build, manage, and optimize secure, scalable solutions.
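To illustrate the monitoring-automation angle, a minimal azure-monitor-query sketch that pulls recent ADF pipeline failures from Log Analytics; the workspace ID is a placeholder and the KQL assumes ADF diagnostic logs are routed to the workspace.

```python
# Minimal sketch: querying Log Analytics for failed ADF activity runs.
# Workspace ID and the KQL table are assumptions.
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

response = client.query_workspace(
    workspace_id="<workspace-id>",
    query="""
        ADFActivityRun
        | where Status == 'Failed'
        | summarize failures = count() by PipelineName
    """,
    timespan=timedelta(days=1),
)

for table in response.tables:
    for row in table.rows:
        print(row)   # feed into alerting / ticketing in practice
```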
Posted 1 month ago
3.0 - 5.0 years
7 - 10 Lacs
Pune
Work from Office
Job Title: Data Engineer
Location: Pune, India (On-site)
Experience: 3-5 years
Employment Type: Full-time

Job Summary
We are looking for a hands-on Data Engineer who can design and build modern Lakehouse solutions on Microsoft Azure. You will own data ingestion from source-system APIs through Azure Data Factory into OneLake, curate bronze/silver/gold layers on Delta Lake, and deliver dimensional models that power analytics at scale.

Key Responsibilities
- Build secure, scalable Azure Data Factory pipelines that ingest data from APIs, files, and databases into OneLake.
- Curate raw data into Delta Lake tables on ADLS Gen2 using the Medallion (bronze/silver/gold) architecture, ensuring ACID compliance and optimal performance.
- Develop and optimize SQL/Spark SQL transformations in Azure Fabric Warehouse / Lakehouse environments.
- Apply dimensional-modelling best practices (star/snowflake, surrogate keys, SCDs) to create analytics-ready datasets.
- Implement monitoring, alerting, lineage, and CI/CD (Git/Azure DevOps) for all pipelines and artifacts.
- Document data flows, data dictionaries, and operational runbooks.

Must-Have Technical Skills
- Azure Fabric & Lakehouse experience
- Azure Fabric Warehouse / Azure Synapse experience
- Data Factory: building, parameterizing, and orchestrating API-driven ingestion pipelines
- ADLS Gen2 + Delta Lake
- Strong SQL: advanced querying, tuning, and procedural extensions (T-SQL / Spark SQL)
- Data-warehousing & dimensional-modelling concepts

Good-to-Have Skills
- Python (PySpark, automation, data-quality checks)
- Unix/Linux shell scripting
- DevOps (Git, Azure DevOps)

Education & Certifications
- BE / B.Tech in Computer Science, Information Systems, or a related field
- Preferred: Microsoft DP-203 Azure Data Engineer Associate

Soft Skills
- Analytical, detail-oriented, and proactive problem solver
- Clear written and verbal communication; ability to simplify complex topics
- Collaborative and adaptable within agile, cross-functional teams
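A minimal sketch of the Medallion promotion step this posting describes, moving a bronze Delta table to silver with PySpark; the lake paths and columns are illustrative assumptions.

```python
# Minimal sketch: promoting a bronze Delta table to silver in a Medallion layout.
# Lakehouse paths and column names are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("bronze-to-silver").getOrCreate()

bronze = spark.read.format("delta").load(
    "abfss://lake@<storage>.dfs.core.windows.net/bronze/orders/")

silver = (bronze
    .dropDuplicates(["order_id"])                 # de-duplicate replays
    .filter(F.col("order_id").isNotNull())        # enforce the key
    .withColumn("ingested_at", F.current_timestamp()))

(silver.write.format("delta").mode("overwrite")
    .save("abfss://lake@<storage>.dfs.core.windows.net/silver/orders/"))
```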
Posted 1 month ago
6.0 - 11.0 years
8 - 12 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
Senior Data Engineer (Remote, Contract, 6 Months): Databricks, ADF, and PySpark. We are hiring a Senior Data Engineer for a 6-month remote contract position. The ideal candidate is highly skilled in building scalable data pipelines and working within the Azure cloud ecosystem, especially Databricks, ADF, and PySpark. You'll work closely with cross-functional teams to deliver enterprise-level data engineering solutions.

Key Responsibilities
- Build scalable ETL pipelines and implement robust data solutions in Azure.
- Manage and orchestrate workflows using ADF, Databricks, ADLS Gen2, and Key Vaults.
- Design and maintain secure and efficient data lake architecture.
- Work with stakeholders to gather data requirements and translate them into technical specs.
- Implement CI/CD pipelines for seamless data deployment using Azure DevOps.
- Monitor data quality, performance bottlenecks, and scalability issues.
- Write clean, organized, reusable PySpark code in an Agile environment.
- Document pipelines, architectures, and best practices for reuse.

Must-Have Skills
- Experience: 6+ years in Data Engineering
- Tech Stack: SQL, Python, PySpark, Spark, Azure Databricks, ADF, ADLS Gen2, Azure DevOps, Key Vaults
- Core Expertise: Data Warehousing, ETL, Data Pipelines, Data Modelling, Data Governance
- Agile, SDLC, Containerization (Docker), Clean coding practices

Good-to-Have Skills
- Event Hubs, Logic Apps
- Power BI
- Strong logic building and competitive programming background

Location: Remote, Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune
Posted 1 month ago
6.0 - 8.0 years
8 - 10 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
We are hiring a Senior Data Engineer for a 6-month remote contract position. The ideal candidate is highly skilled in building scalable data pipelines and working within the Azure cloud ecosystem, especially Databricks, ADF, and PySpark. You'll work closely with cross-functional teams to deliver enterprise-level data engineering solutions.

Key Responsibilities
- Build scalable ETL pipelines and implement robust data solutions in Azure.
- Manage and orchestrate workflows using ADF, Databricks, ADLS Gen2, and Key Vaults.
- Design and maintain secure and efficient data lake architecture.
- Work with stakeholders to gather data requirements and translate them into technical specs.
- Implement CI/CD pipelines for seamless data deployment using Azure DevOps.
- Monitor data quality, performance bottlenecks, and scalability issues.
- Write clean, organized, reusable PySpark code in an Agile environment.
- Document pipelines, architectures, and best practices for reuse.

Must-Have Skills
- Experience: 6+ years in Data Engineering
- Tech Stack: SQL, Python, PySpark, Spark, Azure Databricks, ADF, ADLS Gen2, Azure DevOps, Key Vaults
- Core Expertise: Data Warehousing, ETL, Data Pipelines, Data Modelling, Data Governance
- Agile, SDLC, Containerization (Docker), Clean coding practices

Good-to-Have Skills
- Event Hubs, Logic Apps
- Power BI
- Strong logic building and competitive programming background

Contract Details
Role: Senior Data Engineer
Locations: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, India
Duration: 6 Months
Email to Apply: navaneeta@suzva.com
Contact: 9032956160
Posted 1 month ago
6.0 - 11.0 years
8 - 12 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
Job Opening: Senior Data Engineer (Remote, Contract, 6 Months)
Remote | Contract Duration: 6 Months | Experience: 6-8 Years

We are hiring a Senior Data Engineer for a 6-month remote contract position. The ideal candidate is highly skilled in building scalable data pipelines and working within the Azure cloud ecosystem, especially Databricks, ADF, and PySpark. You'll work closely with cross-functional teams to deliver enterprise-level data engineering solutions.

Key Responsibilities
- Build scalable ETL pipelines and implement robust data solutions in Azure.
- Manage and orchestrate workflows using ADF, Databricks, ADLS Gen2, and Key Vaults.
- Design and maintain secure and efficient data lake architecture.
- Work with stakeholders to gather data requirements and translate them into technical specs.
- Implement CI/CD pipelines for seamless data deployment using Azure DevOps.
- Monitor data quality, performance bottlenecks, and scalability issues.
- Write clean, organized, reusable PySpark code in an Agile environment.
- Document pipelines, architectures, and best practices for reuse.

Must-Have Skills
- Experience: 6+ years in Data Engineering
- Tech Stack: SQL, Python, PySpark, Spark, Azure Databricks, ADF, ADLS Gen2, Azure DevOps, Key Vaults
- Core Expertise: Data Warehousing, ETL, Data Pipelines, Data Modelling, Data Governance
- Agile, SDLC, Containerization (Docker), Clean coding practices

Good-to-Have Skills
- Event Hubs, Logic Apps
- Power BI
- Strong logic building and competitive programming background

Location: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
Posted 1 month ago
5.0 - 10.0 years
15 - 30 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
EPAM has a presence across 40+ countries globally, with 55,000+ professionals and numerous delivery centers. Key locations are North America, Eastern Europe, Central Europe, Western Europe, APAC, the Middle East, and development centers in India (Hyderabad, Pune & Bangalore).

Location: Gurgaon/Pune/Hyderabad/Bengaluru/Chennai
Work Mode: Hybrid (2-3 days office in a week)

Job Description:
- 5-14 years of experience in Big Data & data-related technologies
- Expert-level understanding of distributed computing principles
- Expert-level knowledge and experience in Apache Spark
- Hands-on programming with Python
- Proficiency with Hadoop v2, MapReduce, HDFS, Sqoop
- Experience building stream-processing systems using technologies such as Apache Storm or Spark Streaming
- Good understanding of Big Data querying tools such as Hive and Impala
- Experience with integration of data from multiple data sources such as RDBMS (SQL Server, Oracle), ERP, and files
- Good understanding of SQL queries, joins, stored procedures, relational schemas
- Experience with NoSQL databases such as HBase, Cassandra, MongoDB
- Knowledge of ETL techniques and frameworks
- Performance tuning of Spark jobs
- Experience with native cloud data services (AWS/Azure/GCP)
- Ability to lead a team efficiently
- Experience with designing and implementing Big Data solutions
- Practitioner of Agile methodology

WE OFFER
- Opportunity to work on technical challenges that may impact across geographies
- Vast opportunities for self-development: online university, knowledge sharing opportunities globally, learning opportunities through external certifications
- Opportunity to share your ideas on international platforms
- Sponsored Tech Talks & Hackathons
- Possibility to relocate to any EPAM office for short and long-term projects
- Focused individual development
- Benefit package: health benefits, medical benefits, retirement benefits, paid time off, flexible benefits
- Forums to explore beyond-work passions (CSR, photography, painting, sports, etc.)
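To illustrate the Spark performance-tuning skill called out above, a small hedged sketch using an explicit broadcast hint for a small dimension join; the dataset names and paths are assumptions.

```python
# Minimal sketch: avoiding a shuffle by broadcasting a small dimension table.
# Dataset names and paths are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("tuning-demo").getOrCreate()

facts = spark.read.parquet("hdfs:///data/transactions/")   # large fact table
dims = spark.read.parquet("hdfs:///data/merchant_dim/")    # small dimension

# Broadcasting the dimension avoids shuffling the large side of the join.
joined = facts.join(broadcast(dims), on="merchant_id", how="left")

joined.groupBy("merchant_name").count().show()
```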
Posted 2 months ago
5.0 - 10.0 years
10 - 20 Lacs
Chennai
Hybrid
The Operations Engineer will work in collaboration with and under the direction of the Manager of Data Engineering, Advanced Analytics to provide operational services, governance, and incident management solutions for the Analytics team. This includes modifying existing data ingestion workflows, releases to QA and Prod, working closely with cross functional teams and providing production support for daily issues. Essential Job Functions: * Takes ownership of customer issues reported and see problems through to resolution * Researches, diagnoses, troubleshoots and identifies solutions to resolve customer issues * Follows standard procedures for proper escalation of unresolved issues to the appropriate internal teams * Provides prompt and accurate feedback to customers * Ensures proper recording and closure of all issues * Prepares accurate and timely reports * Documents knowledge in the form of knowledge base tech notes and articles Other Responsibilities: * Be part of on-call rotation * Support QA and production releases, off-hours if needed * Work with developers to troubleshoot issues * Attend daily standups * Create and maintain support documentation (Jira/Confluence) Minimum Qualifications and Job Requirements: * Proven working experience in enterprise technical support * Basic knowledge of systems, utilities, and scripting * Strong problem-solving skills * Excellent client-facing skills * Excellent written and verbal communication skills * Experience with Microsoft Azure including Azure Data Factory (ADF), Databricks, ADLS (Gen2) * Experience with system administration and SFTP * Experience leveraging analytics team tools such as Alteryx or other ETL tools * Experience with data visualization software (e.g. Domo, Datorama) * Experience with SQL programming * Experience automating routine data tasks using various software tools (e.g., Jenkins, Nexus, SonarQube, Rundeck, Task Scheduler)
Posted 2 months ago
6.0 - 11.0 years
8 - 12 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
Job Opening: Senior Data Engineer (Remote, Contract, 6 Months)
Remote | Contract Duration: 6 Months | Experience: 6-8 Years

We are hiring a Senior Data Engineer for a 6-month remote contract position. The ideal candidate is highly skilled in building scalable data pipelines and working within the Azure cloud ecosystem, especially Databricks, ADF, and PySpark. You'll work closely with cross-functional teams to deliver enterprise-level data engineering solutions.

Key Responsibilities
- Build scalable ETL pipelines and implement robust data solutions in Azure.
- Manage and orchestrate workflows using ADF, Databricks, ADLS Gen2, and Key Vaults.
- Design and maintain secure and efficient data lake architecture.
- Work with stakeholders to gather data requirements and translate them into technical specs.
- Implement CI/CD pipelines for seamless data deployment using Azure DevOps.
- Monitor data quality, performance bottlenecks, and scalability issues.
- Write clean, organized, reusable PySpark code in an Agile environment.
- Document pipelines, architectures, and best practices for reuse.

Must-Have Skills
- Experience: 6+ years in Data Engineering
- Tech Stack: SQL, Python, PySpark, Spark, Azure Databricks, ADF, ADLS Gen2, Azure DevOps, Key Vaults
- Core Expertise: Data Warehousing, ETL, Data Pipelines, Data Modelling, Data Governance
- Agile, SDLC, Containerization (Docker), Clean coding practices

Good-to-Have Skills
- Event Hubs, Logic Apps
- Power BI
- Strong logic building and competitive programming background

Location: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
Posted 2 months ago