8.0 - 12.0 years
0 Lacs
maharashtra
On-site
The Business Analyst position at Piramal Critical Care (PCC) within the IT department in Kurla, Mumbai involves acting as a liaison between PCC system users, software support vendors, and internal IT support teams. The ideal candidate is expected to be a technical contributor and advisor to PCC business users, assisting in defining strategic application development and integration to support business processes effectively. Key stakeholders for this role include internal teams such as Supply Chain, Finance, Infrastructure, PPL Corporate, and Quality, as well as external stakeholders such as the MS Support team, 3PLs, and consultants. The Business Analyst will report to the Chief Manager - IT Business Partner.

The ideal candidate should hold a B.S. in Information Technology, Computer Science, or equivalent, with 8-10 years of experience in data warehousing, BI, analytics, and ETL tools. Experience in the pharmaceutical or medical device industry is required, along with familiarity with large global reporting tools such as Qlik/Power BI, SQL, Microsoft Power Platform, and related platforms. Knowledge of the computer system validation lifecycle, project management tools, and office tools is also essential.

Key responsibilities include defining user and technical requirements; leading implementation of data warehousing, analytics, and ETL systems; managing vendor project teams; maintaining partnerships with business teams; and proposing IT budgets. The candidate will collaborate with IT and business teams, manage ongoing business applications, ensure system security, and present project updates to the IT Steering Committee.

The successful candidate must possess excellent interpersonal and communication skills, self-motivation, a proactive customer-service attitude, leadership ability, and a strong service focus. They should be able to communicate business needs effectively to technology teams, manage stakeholder expectations, and work collaboratively to achieve results.

Piramal Critical Care (PCC) is a subsidiary of Piramal Pharma Limited (PPL) and a global player in hospital generics, particularly inhaled anaesthetics. PCC is committed to delivering critical care solutions globally and maintaining sustainable growth for stakeholders. With a presence across the USA, Europe, and over 100 countries, PCC's product portfolio includes inhalation anaesthetics and intrathecal baclofen therapy. PCC's workforce comprises over 400 employees across 16 countries, and the company is dedicated to expanding its global footprint through new product additions in critical care. Committed to corporate social responsibility, PCC collaborates with partner organizations to provide hope and resources to those in need while caring for the environment.
Posted 4 days ago
10.0 - 14.0 years
0 Lacs
karnataka
On-site
Thoucentric, the consulting arm of Xoriant, a leading digital engineering services company with 5000 employees, is currently seeking a highly experienced Infrastructure Architect with deep DevOps expertise to lead the cloud infrastructure planning and deployment for a global o9 supply chain platform rollout.

As the Infrastructure Architect, you will play a crucial role in designing and implementing cloud-native architecture for the o9 platform rollout, ensuring robust, scalable, and future-proof infrastructure across Azure and/or AWS environments. Your responsibilities will include collaborating closely with o9's DevOps team to deploy and manage Kubernetes clusters, overseeing Git-based CI/CD workflows, implementing monitoring and alerting frameworks, and acting as a strategic liaison between the client's IT organization and o9's DevOps team to align platform requirements and deployment timelines. Additionally, you will be responsible for ensuring high availability, low latency, and high throughput to support supply chain operations, while anticipating future growth and scalability requirements.

To be successful in this role, you should have at least 10 years of experience in Infrastructure Architecture/DevOps, ideally within CPG or large enterprise SaaS/supply chain platforms. You must have proven expertise in deploying and scaling platforms on Azure and/or AWS, along with hands-on experience with Kubernetes, Karpenter, Git-based CI/CD, DataLake/DeltaLake architectures, enterprise security, identity and access management, and networking in cloud environments. Experience with infrastructure-as-code tools like Terraform, Helm, or similar is also required, along with excellent stakeholder management and collaboration skills.

Joining Thoucentric's consulting team will offer you the opportunity to define your career path independently, work with Fortune 500 companies and startups, and be part of a dynamic yet supportive working environment that encourages personal development. Additionally, you will have the chance to bond beyond work with your colleagues through sports, get-togethers, and other common interests, contributing to a very enriching environment with an open culture, flat organization, and an excellent peer group.

If you are passionate about infrastructure architecture, DevOps, and cloud technologies, and are looking for a challenging leadership role in a global consulting environment, we encourage you to apply for this position based in Bangalore, India. Don't miss the opportunity to be part of the exciting growth story of Thoucentric!
Posted 4 days ago
5.0 - 9.0 years
0 Lacs
ahmedabad, gujarat
On-site
You will be joining Ziance Technologies as an experienced Data Engineer (Gen AI), where your primary role will involve leveraging your expertise in Python and the Azure tech stack. Your responsibilities will include designing and implementing advanced data solutions, with a special focus on Generative AI concepts.

With 5-8 years of experience, you must possess strong proficiency in the Python programming language. You should also have hands-on experience with REST APIs, FastAPI, Graph APIs, and SQLAlchemy. Your expertise in Azure services such as Data Lake, Azure SQL, Function Apps, and Azure Cognitive Search will be crucial for this role. A good understanding of concepts like chunking, embeddings, vectorization, indexing, prompting, hallucinations, and RAG will be beneficial (a minimal chunking sketch follows below). Experience with DevOps practices, including creating pull requests and maintaining code repositories, is a must-have. Strong communication skills and the ability to collaborate effectively with team members are essential for success in this position.

If you are looking to work in a dynamic environment where you can apply your skills in Azure, Python, and the data stack, this role at Ziance Technologies could be the perfect fit for you.
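For illustration only, a minimal sketch of the chunking step that typically precedes embedding and indexing in a RAG pipeline; the chunk size, overlap, and any downstream embedding or indexing calls are assumptions, not part of this posting:

```python
# Hedged sketch: fixed-size chunking with overlap, the kind of
# pre-processing used before embedding documents for RAG. Sizes are
# illustrative; the embedding/indexing step (e.g., an Azure OpenAI
# deployment feeding Azure Cognitive Search) is assumed, not shown.

def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping chunks so context survives boundaries."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

sample = "Generative AI pipelines split long documents into chunks. " * 40
print(len(chunk_text(sample)))  # each chunk would then be embedded and indexed
```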
Posted 4 days ago
9.0 - 12.0 years
14 - 24 Lacs
Gurugram
Remote
We are looking for an experienced Senior Data Engineer to lead the development of scalable AWS-native data lake pipelines with a strong focus on time series forecasting and upsert-ready architectures. This role requires end-to-end ownership of the data lifecycle, from ingestion to partitioning, versioning, and BI delivery. The ideal candidate must be highly proficient in AWS data services, PySpark, and versioned storage formats like Apache Hudi/Iceberg, and must understand the nuances of data quality and observability in large-scale analytics systems.

Role & responsibilities
- Design and implement data lake zoning (Raw → Clean → Modeled) using Amazon S3, AWS Glue, and Athena.
- Ingest structured and unstructured datasets including POS, USDA, Circana, and internal sales data.
- Build versioned and upsert-friendly ETL pipelines using Apache Hudi or Iceberg.
- Create forecast-ready datasets with lagged, rolling, and trend features for revenue and occupancy modelling (see the sketch below).
- Optimize Athena datasets with partitioning, CTAS queries, and metadata tagging.
- Implement S3 lifecycle policies, intelligent file partitioning, and audit logging.
- Build reusable transformation logic using dbt-core or PySpark to support KPIs and time series outputs.
- Integrate robust data quality checks using custom logs, AWS CloudWatch, or other DQ tooling.
- Design and manage a forecast feature registry with metrics versioning and traceability.
- Collaborate with BI and business teams to finalize schema design and deliverables for dashboard consumption.

Preferred candidate profile
- 9-12 years of experience in data engineering.
- Deep hands-on experience with AWS Glue, Athena, S3, Step Functions, and Glue Data Catalog.
- Strong command of PySpark, dbt-core, CTAS query optimization, and partition strategies.
- Working knowledge of Apache Hudi, Iceberg, or Delta Lake for versioned ingestion.
- Experience in S3 metadata tagging and scalable data lake design patterns.
- Expertise in feature engineering and forecasting dataset preparation (lags, trends, windows).
- Proficiency in Git-based workflows (Bitbucket), CI/CD, and deployment automation.
- Strong understanding of time series KPIs such as revenue forecasts, occupancy trends, and demand volatility.
- Data observability best practices, including field-level logging, anomaly alerts, and classification tagging.
- Experience with statistical forecasting frameworks such as Prophet, GluonTS, or related libraries.
- Familiarity with Superset or Streamlit for QA visualization and UAT reporting.
- Understanding of macroeconomic datasets (USDA, Circana) and third-party data ingestion.
- Independent, critical thinker with the ability to design for scale and evolving business logic.
- Strong communication and collaboration with BI, QA, and business stakeholders.
- High attention to detail in ensuring data accuracy, quality, and documentation.
- Comfortable interpreting business-level KPIs and transforming them into technical pipelines.
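For illustration, a minimal PySpark sketch of the lagged, rolling, and trend features this role describes; the dataset, S3 paths, and column names (store_id, week, revenue) are hypothetical assumptions, not project specifics:

```python
# Hedged sketch of forecast-feature preparation with window functions.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("forecast-features").getOrCreate()
df = spark.read.parquet("s3://bucket/clean/sales/")  # placeholder path

w = Window.partitionBy("store_id").orderBy("week")
trailing3 = w.rowsBetween(-3, -1)  # trailing three-week window

features = (df
    .withColumn("revenue_lag_1", F.lag("revenue", 1).over(w))
    .withColumn("revenue_lag_4", F.lag("revenue", 4).over(w))
    .withColumn("revenue_roll_mean_3", F.avg("revenue").over(trailing3))
    .withColumn("revenue_trend", F.col("revenue") - F.lag("revenue", 1).over(w)))

# Partitioned write so Athena can prune by week.
features.write.mode("overwrite").partitionBy("week").parquet(
    "s3://bucket/modeled/forecast_features/")
```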
Posted 1 week ago
2.0 - 5.0 years
8 - 15 Lacs
Gurugram
Remote
Job Description: We are looking for a talented and driven MS Fabric Developer / Data Analytics Engineer with expertise in the Microsoft Fabric ecosystem, data transformation, and analytics. The ideal candidate will be responsible for designing, developing, and optimizing data pipelines, working with real-time analytics, and implementing best practices in data modeling and reporting.

Key Responsibilities:
- Work with MS Fabric components, including: Data Lake, OneLake, Lakehouse, Warehouse, Real-Time Analytics
- Develop and maintain data transformation scripts using Power Query, T-SQL, and Python (a minimal sketch follows below)
- Build scalable and efficient data models and pipelines for analytics and reporting
- Collaborate with BI teams and business stakeholders to deliver data-driven insights
- Implement best practices for data governance, performance tuning, and storage optimization
- Support real-time and near real-time data streaming and transformation tasks

Required Skills:
- Hands-on experience with MS Fabric and associated data services
- Strong command of Power Query, T-SQL, and Python for data transformations
- Experience working in modern data lakehouse and real-time analytics environments

Good to Have:
- DevOps knowledge for automating deployments and managing environments
- Familiarity with Azure services and cloud data architecture
- Understanding of CI/CD pipelines for data projects
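As one illustration of the Python option among the scripting choices above, a hedged PySpark sketch for a Fabric Lakehouse notebook; the file path, columns, and table name are assumptions:

```python
# Sketch only: clean a raw file and save it as a Delta table in a Fabric
# Lakehouse. Assumes a Fabric notebook, where `spark` is predefined.
from pyspark.sql import functions as F

raw = spark.read.option("header", True).csv("Files/raw/orders.csv")

clean = (raw.dropDuplicates(["order_id"])
            .withColumn("order_date", F.to_date("order_date"))
            .withColumn("amount", F.col("amount").cast("double"))
            .filter(F.col("amount") > 0))

# Delta tables in the Lakehouse are queryable from the Warehouse and Power BI.
clean.write.format("delta").mode("overwrite").saveAsTable("orders_clean")
```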
Posted 1 week ago
5.0 - 10.0 years
6 - 10 Lacs
Mumbai
Remote
Travel Requirement: Willingness to travel to the UK as needed will be a plus.

Job Description: We are seeking a highly experienced Senior Data Engineer with a strong background in Microsoft Fabric and hands-on project experience with it. This is a remote position based in India, ideal for professionals who are open to occasional travel to the UK; a valid passport is required.

Key Responsibilities:
- Design and implement scalable data solutions using Microsoft Fabric
- Lead complex data integration, transformation, and migration projects
- Collaborate with global teams to deliver end-to-end data pipelines and architecture
- Optimize performance of data systems and troubleshoot issues proactively
- Ensure data governance, security, and compliance with industry best practices

Required Skills and Experience:
- 5+ years of experience in data engineering, including architecture and development
- Expertise in Microsoft Fabric, Data Lake, Azure Data Services, and related technologies
- Experience in SQL, data modeling, and data pipeline development
- Knowledge of modern data platforms and big data technologies
- Excellent communication and leadership skills

Preferred Qualifications:
- Good communication skills
- Understanding of data governance and security best practices

Perks & Benefits:
- Work-from-home flexibility
- Competitive salary and perks
- Opportunities for international exposure
- Collaborative and inclusive work culture
Posted 1 week ago
5.0 - 9.0 years
5 - 9 Lacs
Hyderabad, Telangana, India
On-site
- 5+ years of experience in Azure Data Factory, Snowflake, and Databricks
- Understand business requirements and actively provide inputs from a data perspective
- Understand the underlying data and data flow
- Build simple to complex pipelines and dataflows
- Work with other Azure stack modules like Azure Data Lake, Azure SQL, and Databricks
- Expert-level knowledge of Azure Data Factory
- Expert-level knowledge of Snowflake and data warehousing concepts
- Strong SQL knowledge
- Ability to analyze and understand complex data
Posted 1 week ago
8.0 - 13.0 years
25 - 40 Lacs
Hyderabad
Work from Office
Key Responsibilities
- Design conformed star & snowflake schemas; implement SCD2 dimensions and fact tables (see the SCD2 sketch below).
- Lead Spark (PySpark/Scala) or AWS Glue ELT pipelines from RDS Zero-ETL/S3 into Redshift.
- Tune RA3 clusters (sort/dist keys, WLM queues, Spectrum partitions) for sub-second BI queries.
- Establish data-quality, lineage, and cost-governance dashboards using CloudWatch & Terraform/CDK.
- Collaborate with Product & Analytics to translate HR KPIs into self-service data marts.
- Mentor junior engineers; drive documentation and coding standards.

Must-Have Skills
- Amazon Redshift (sort & dist keys, RA3, Spectrum)
- Spark on EMR/Glue (PySpark or Scala)
- Dimensional modelling (Kimball), star schema, SCD2
- Advanced SQL + Python/Scala scripting
- AWS IAM, KMS, CloudWatch, Terraform/CDK, CI/CD (GitHub Actions or CodePipeline)

Nice-to-Have
- dbt, Airflow, Kinesis/Kafka, Lake Formation row-level ACLs
- GDPR / SOC 2 compliance exposure
- AWS Data Analytics or Solutions Architect certification

Education
B.E./B.Tech in Computer Science, IT, or related field (Master's preferred but not mandatory).

Compensation & Benefits
- Competitive CTC: 25-40 LPA
- Health insurance for self & dependents

Why Join Us?
- Own a greenfield HR analytics platform with executive sponsorship.
- Modern AWS stack (Redshift RA3, Lake Formation, EMR on EKS).
- Culture of autonomy, fast decision-making, and continuous learning.

Application Process
1. 30-minute technical screen
2. 4-hour take-home Spark/SQL challenge
3. 90-minute architecture deep dive
4. Panel interview (leadership & stakeholder communication)
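For illustration, a hedged PySpark sketch of the SCD2 pattern named above: expire changed dimension rows and open new current versions. The table and column names (dim_employee, emp_id, department) are hypothetical, not this employer's schema:

```python
# SCD2 sketch, assuming an active Spark session on EMR/Glue.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("scd2-sketch").getOrCreate()

dim = spark.table("dim_employee")        # existing SCD2 dimension
incoming = spark.table("stg_employee")   # latest source snapshot

current = dim.filter("is_current = true").alias("d")
changed = (current
           .join(incoming.alias("s"), F.col("d.emp_id") == F.col("s.emp_id"))
           .filter(F.col("d.department") != F.col("s.department"))
           .select("d.*"))

# Close out the old versions of changed rows.
expired = (changed
           .withColumn("end_date", F.current_date())
           .withColumn("is_current", F.lit(False)))

# Open new current versions from the incoming snapshot.
new_rows = (incoming
            .join(changed.select("emp_id"), "emp_id")
            .withColumn("start_date", F.current_date())
            .withColumn("end_date", F.lit(None).cast("date"))
            .withColumn("is_current", F.lit(True)))

# In Redshift this pair is typically applied as an UPDATE + INSERT in one
# transaction; on a lake table it maps to a Hudi/Iceberg/Delta MERGE.
```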
Posted 1 week ago
4.0 - 9.0 years
6 - 10 Lacs
Vadodara
Work from Office
- Building and operationalizing large-scale enterprise data solutions and applications using one or more AZURE data and analytics services in combination with custom solutions: Azure Synapse/Azure SQL DWH, Azure Data Lake, Azure Blob Storage, Spark, HDInsight, Databricks, CosmosDB, EventHub/IoTHub
- Experience in migrating on-premises data warehouses to data platforms on the AZURE cloud (see the ingestion sketch below)
- Designing and implementing data engineering, ingestion, and transformation functions
- Azure Synapse or Azure SQL Data Warehouse
- Spark on Azure, available via HDInsight and Databricks
- Good customer communication skills
- Good analytical skills
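For illustration, a hedged sketch of one ingestion step in such a migration: pulling a table from an on-premises SQL Server over JDBC with Spark and landing it in Azure Data Lake as Parquet. The host, table, credentials, and storage paths are placeholders:

```python
# Migration-ingestion sketch; requires the SQL Server JDBC driver on the
# Spark classpath. All connection details are assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("onprem-dwh-migration").getOrCreate()

orders = (spark.read.format("jdbc")
          .option("url", "jdbc:sqlserver://onprem-host:1433;databaseName=dwh")
          .option("dbtable", "dbo.FactOrders")
          .option("user", "<user>")
          .option("password", "<secret>")
          .load())

# Land the extract in the raw zone of the data lake.
orders.write.mode("overwrite").parquet(
    "abfss://raw@mydatalake.dfs.core.windows.net/dwh/FactOrders/")
```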
Posted 2 weeks ago
4.0 - 9.0 years
6 - 12 Lacs
Kanpur
Work from Office
- Building and operationalizing large-scale enterprise data solutions and applications using one or more AZURE data and analytics services in combination with custom solutions: Azure Synapse/Azure SQL DWH, Azure Data Lake, Azure Blob Storage, Spark, HDInsight, Databricks, CosmosDB, EventHub/IoTHub
- Experience in migrating on-premises data warehouses to data platforms on the AZURE cloud
- Designing and implementing data engineering, ingestion, and transformation functions
- Azure Synapse or Azure SQL Data Warehouse
- Spark on Azure, available via HDInsight and Databricks
- Good customer communication skills
- Good analytical skills
Posted 2 weeks ago
4.0 - 9.0 years
6 - 12 Lacs
Ludhiana
Work from Office
- Building and operationalizing large-scale enterprise data solutions and applications using one or more AZURE data and analytics services in combination with custom solutions: Azure Synapse/Azure SQL DWH, Azure Data Lake, Azure Blob Storage, Spark, HDInsight, Databricks, CosmosDB, EventHub/IoTHub
- Experience in migrating on-premises data warehouses to data platforms on the AZURE cloud
- Designing and implementing data engineering, ingestion, and transformation functions
- Azure Synapse or Azure SQL Data Warehouse
- Spark on Azure, available via HDInsight and Databricks
- Good customer communication skills
- Good analytical skills
Posted 2 weeks ago
4.0 - 9.0 years
6 - 10 Lacs
Patna
Work from Office
- Building and operationalizing large-scale enterprise data solutions and applications using one or more AZURE data and analytics services in combination with custom solutions: Azure Synapse/Azure SQL DWH, Azure Data Lake, Azure Blob Storage, Spark, HDInsight, Databricks, CosmosDB, EventHub/IoTHub
- Experience in migrating on-premises data warehouses to data platforms on the AZURE cloud
- Designing and implementing data engineering, ingestion, and transformation functions
- Azure Synapse or Azure SQL Data Warehouse
- Spark on Azure, available via HDInsight and Databricks
- Good customer communication skills
- Good analytical skills
Posted 2 weeks ago
4.0 - 9.0 years
6 - 10 Lacs
Surat
Work from Office
- Building and operationalizing large-scale enterprise data solutions and applications using one or more AZURE data and analytics services in combination with custom solutions: Azure Synapse/Azure SQL DWH, Azure Data Lake, Azure Blob Storage, Spark, HDInsight, Databricks, CosmosDB, EventHub/IoTHub
- Experience in migrating on-premises data warehouses to data platforms on the AZURE cloud
- Designing and implementing data engineering, ingestion, and transformation functions
- Azure Synapse or Azure SQL Data Warehouse
- Spark on Azure, available via HDInsight and Databricks
- Good customer communication skills
- Good analytical skills
Posted 2 weeks ago
6.0 - 11.0 years
1 - 6 Lacs
Bengaluru
Work from Office
Location: Bengaluru (Hybrid)

Key Responsibilities:
- Design, develop, and maintain data pipelines using Azure Data Factory, Databricks (PySpark), and Synapse Analytics.
- Implement and manage real-time data streaming solutions using Kafka (see the streaming sketch below).
- Build and optimize data lake architectures and ensure best practices for scalability and security.
- Develop efficient SQL and Python scripts for data transformation and cleansing.
- Collaborate with data scientists, analysts, and other stakeholders to ensure data is readily available and reliable.
- Utilize Azure DevOps and Git workflows for CI/CD and version control of data pipelines and scripts.
- Monitor and troubleshoot data ingestion, transformation, and loading issues.

Must-Have Skills:
- Azure Data Factory (ADF)
- Azure Databricks (with PySpark)
- Azure Synapse Analytics
- Apache Kafka
- Strong proficiency in SQL, Python, and Spark
- Experience with Azure DevOps and Git workflows
- Strong understanding of data lake architectures and cloud data engineering best practices

Good to Have:
- Experience with data governance tools or frameworks
- Exposure to Delta Lake, Parquet, or other data formats
- Knowledge of performance tuning in distributed data environments
- Familiarity with infrastructure-as-code (e.g., Terraform or ARM templates)
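For illustration, a minimal Structured Streaming sketch of the Kafka requirement above: read a topic, parse JSON, and append to a Delta path. The broker, topic, schema, and paths are placeholder assumptions:

```python
# Streaming sketch; assumes a Databricks cluster where `spark` and the
# Kafka connector are available out of the box.
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StringType, DoubleType

schema = (StructType()
          .add("event_id", StringType())
          .add("amount", DoubleType()))

events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "orders")
          .load()
          .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# Checkpointing gives exactly-once delivery into the bronze layer.
(events.writeStream
       .format("delta")
       .option("checkpointLocation", "/mnt/lake/_checkpoints/orders")
       .start("/mnt/lake/bronze/orders"))
```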
Posted 2 weeks ago
8.0 - 10.0 years
15 - 30 Lacs
Hyderabad
Hybrid
Job Title: IT - Lead Engineer/Architect, Azure Lake
Years of Experience: 8-10 Years
Mandatory Skills: Azure, Data Lake, Databricks, SAP BW

Key Responsibilities:
- Lead the development and maintenance of the data architecture strategy, including design and architecture validation reviews with all stakeholders.
- Architect scalable data flows, storage, and analytics platforms in cloud/hybrid environments, ensuring secure, high-performing, and cost-effective solutions.
- Establish comprehensive data governance frameworks and promote best practices for data quality and enterprise compliance.
- Act as a technical leader on complex data projects and drive the adoption of new technologies, including AI/ML.
- Collaborate extensively with business stakeholders to translate needs into architectural solutions and define project scope.
- Support a wide range of data lake and lakehouse technologies (SQL, Synapse, Databricks, Power BI, Fabric).

Required Qualifications & Experience:
- Bachelor's or Master's degree in Computer Science or a related field.
- At least 3 years in a leadership role in data architecture.
- Proven ability to lead Architecture/AI/ML projects from conception to deployment.
- Deep knowledge of cloud data platforms (Microsoft Azure, Fabric, Databricks), data modeling, ETL/ELT, big data, relational/NoSQL databases, and data security.
- Experience in designing and implementing AI solutions within cloud architecture.
- 3 years as a project lead on large-scale projects.
- 5 years of development experience with Azure, Synapse, and Databricks.
- Excellent communication and stakeholder management skills.
Posted 2 weeks ago
4.0 - 8.0 years
7 - 17 Lacs
Hyderabad
Hybrid
Job Title: IT - Senior Engineer, Azure Lake
Years of Experience: 4-6 Years
Mandatory Skills: Azure, Data Lake, SAP BW, Power BI, Tableau

Key Responsibilities:
- Develop and maintain the data architecture strategy, including design and architecture validation reviews.
- Architect scalable data flows, storage, and analytics platforms in cloud/hybrid environments, ensuring secure and cost-effective solutions.
- Establish and enforce data governance frameworks, promoting data quality and compliance.
- Act as a technical advisor on complex data projects and collaborate with stakeholders on project scope and planning.
- Drive adoption of new technologies, conduct technology watch, and define standards for data management.
- Develop using SQL, Synapse, Databricks, Power BI, and Fabric.

Required Qualifications & Experience:
- Bachelor's or Master's degree in Computer Science or a related field.
- Experience in data architecture, with at least 3 years in a leadership role.
- Deep knowledge of Azure/AWS, Databricks, Synapse, and other cloud data platforms.
- Understanding of SAP technologies (SAP BW, SAP DataSphere, HANA, S/4, ECC) and visualization tools (Power BI, Tableau).
- Understanding of data modeling, ETL/ELT, big data, relational/NoSQL databases, and data security.
- Experience with AI/ML and familiarity with data mesh/fabric.
- 5 years in back-end/full-stack development on large-scale projects with Azure Synapse / Databricks.
Posted 2 weeks ago
4.0 - 9.0 years
6 - 10 Lacs
Kolkata
Work from Office
- Building and operationalizing large-scale enterprise data solutions and applications using one or more AZURE data and analytics services in combination with custom solutions: Azure Synapse/Azure SQL DWH, Azure Data Lake, Azure Blob Storage, Spark, HDInsight, Databricks, CosmosDB, EventHub/IoTHub
- Experience in migrating on-premises data warehouses to data platforms on the AZURE cloud
- Designing and implementing data engineering, ingestion, and transformation functions
- Azure Synapse or Azure SQL Data Warehouse
- Spark on Azure, available via HDInsight and Databricks
- Good customer communication skills
- Good analytical skills
Posted 2 weeks ago
6.0 - 10.0 years
8 - 12 Lacs
Gurugram
Work from Office
The Azure Data Bricks Engineer plays a critical role in establishing and maintaining an efficient data ecosystem within an organization. This position is integral to the development of data solutions leveraging the capabilities of Microsoft Azure Data Bricks. The engineer will work closely with data scientists and analytics teams to facilitate the transformation of raw data into actionable insights. With increasing reliance on big data technologies and cloud-based solutions, having an expert on board is vital for driving data-driven decision-making processes. The Azure Data Bricks Engineer will also be responsible for optimizing data workflows, ensuring data quality, and deploying scalable data solutions that align with organizational goals. This role requires not only technical expertise in handling large volumes of data but also the ability to collaborate across various functional teams to enhance operational efficiency.

Responsibilities:
- Design and implement scalable data pipelines using Azure Data Bricks.
- Develop ETL processes to efficiently extract, transform, and load data.
- Collaborate with data scientists and analysts to define and refine data requirements.
- Optimize Spark jobs for performance and efficiency.
- Monitor and troubleshoot production workflows and jobs.
- Implement data quality checks and validation processes (see the sketch below).
- Create and maintain technical documentation related to data architecture.
- Conduct code reviews to ensure best practices are followed.
- Work on integrating data from various sources including databases, APIs, and third-party services.
- Utilize SQL and Python for data manipulation and analysis.
- Collaborate with DevOps teams to deploy and maintain data solutions.
- Stay updated with the latest trends and updates in Azure Data Bricks and related technologies.
- Facilitate data visualization initiatives for better data-driven insights.
- Provide training and support to team members on data tools and practices.
- Participate in cross-functional projects to enhance data sharing and access.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Minimum of 6 years of experience in data engineering or a related domain.
- Strong expertise in Azure Data Bricks and data lake concepts.
- Proficiency with SQL, Python, and Spark.
- Solid understanding of data warehousing concepts.
- Experience with ETL tools and frameworks.
- Familiarity with cloud platforms such as Azure, AWS, or Google Cloud.
- Excellent problem-solving and analytical skills.
- Ability to work collaboratively in a diverse team environment.
- Experience with data visualization tools such as Power BI or Tableau.
- Strong communication skills with the ability to convey technical concepts to non-technical stakeholders.
- Knowledge of data governance and data quality best practices.
- Hands-on experience with big data technologies and frameworks.
- A relevant certification in Azure is a plus.
- Ability to adapt to changing technologies and evolving business requirements.
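For illustration, a hedged sketch of the kind of lightweight data-quality gate this posting mentions: simple PySpark checks that must pass before a load proceeds. Table and column names are hypothetical, not this employer's framework:

```python
# Data-quality gate sketch; assumes a Databricks notebook where `spark`
# is predefined.
from pyspark.sql import functions as F

df = spark.table("bronze.transactions")  # hypothetical source table

null_ids = df.filter(F.col("transaction_id").isNull()).count()
dupes = df.count() - df.dropDuplicates(["transaction_id"]).count()

# Fail fast so bad batches never reach the curated layer.
if null_ids > 0 or dupes > 0:
    raise ValueError(f"DQ check failed: {null_ids} null ids, {dupes} duplicate ids")

df.write.mode("append").saveAsTable("silver.transactions")
```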
Posted 2 weeks ago
6.0 - 10.0 years
8 - 12 Lacs
Noida
Work from Office
The Azure Data Bricks Engineer plays a critical role in establishing and maintaining an efficient data ecosystem within an organization. This position is integral to the development of data solutions leveraging the capabilities of Microsoft Azure Data Bricks. The engineer will work closely with data scientists and analytics teams to facilitate the transformation of raw data into actionable insights. With increasing reliance on big data technologies and cloud-based solutions, having an expert on board is vital for driving data-driven decision-making processes. The Azure Data Bricks Engineer will also be responsible for optimizing data workflows, ensuring data quality, and deploying scalable data solutions that align with organizational goals. This role requires not only technical expertise in handling large volumes of data but also the ability to collaborate across various functional teams to enhance operational efficiency.

Responsibilities:
- Design and implement scalable data pipelines using Azure Data Bricks.
- Develop ETL processes to efficiently extract, transform, and load data.
- Collaborate with data scientists and analysts to define and refine data requirements.
- Optimize Spark jobs for performance and efficiency.
- Monitor and troubleshoot production workflows and jobs.
- Implement data quality checks and validation processes.
- Create and maintain technical documentation related to data architecture.
- Conduct code reviews to ensure best practices are followed.
- Work on integrating data from various sources including databases, APIs, and third-party services.
- Utilize SQL and Python for data manipulation and analysis.
- Collaborate with DevOps teams to deploy and maintain data solutions.
- Stay updated with the latest trends and updates in Azure Data Bricks and related technologies.
- Facilitate data visualization initiatives for better data-driven insights.
- Provide training and support to team members on data tools and practices.
- Participate in cross-functional projects to enhance data sharing and access.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Minimum of 6 years of experience in data engineering or a related domain.
- Strong expertise in Azure Data Bricks and data lake concepts.
- Proficiency with SQL, Python, and Spark.
- Solid understanding of data warehousing concepts.
- Experience with ETL tools and frameworks.
- Familiarity with cloud platforms such as Azure, AWS, or Google Cloud.
- Excellent problem-solving and analytical skills.
- Ability to work collaboratively in a diverse team environment.
- Experience with data visualization tools such as Power BI or Tableau.
- Strong communication skills with the ability to convey technical concepts to non-technical stakeholders.
- Knowledge of data governance and data quality best practices.
- Hands-on experience with big data technologies and frameworks.
- A relevant certification in Azure is a plus.
- Ability to adapt to changing technologies and evolving business requirements.
Posted 2 weeks ago
6.0 - 10.0 years
8 - 12 Lacs
Pune
Work from Office
The Azure Data Bricks Engineer plays a critical role in establishing and maintaining an efficient data ecosystem within an organization. This position is integral to the development of data solutions leveraging the capabilities of Microsoft Azure Data Bricks. The engineer will work closely with data scientists and analytics teams to facilitate the transformation of raw data into actionable insights. With increasing reliance on big data technologies and cloud-based solutions, having an expert on board is vital for driving data-driven decision-making processes. The Azure Data Bricks Engineer will also be responsible for optimizing data workflows, ensuring data quality, and deploying scalable data solutions that align with organizational goals. This role requires not only technical expertise in handling large volumes of data but also the ability to collaborate across various functional teams to enhance operational efficiency.

Responsibilities:
- Design and implement scalable data pipelines using Azure Data Bricks.
- Develop ETL processes to efficiently extract, transform, and load data.
- Collaborate with data scientists and analysts to define and refine data requirements.
- Optimize Spark jobs for performance and efficiency.
- Monitor and troubleshoot production workflows and jobs.
- Implement data quality checks and validation processes.
- Create and maintain technical documentation related to data architecture.
- Conduct code reviews to ensure best practices are followed.
- Work on integrating data from various sources including databases, APIs, and third-party services.
- Utilize SQL and Python for data manipulation and analysis.
- Collaborate with DevOps teams to deploy and maintain data solutions.
- Stay updated with the latest trends and updates in Azure Data Bricks and related technologies.
- Facilitate data visualization initiatives for better data-driven insights.
- Provide training and support to team members on data tools and practices.
- Participate in cross-functional projects to enhance data sharing and access.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Minimum of 6 years of experience in data engineering or a related domain.
- Strong expertise in Azure Data Bricks and data lake concepts.
- Proficiency with SQL, Python, and Spark.
- Solid understanding of data warehousing concepts.
- Experience with ETL tools and frameworks.
- Familiarity with cloud platforms such as Azure, AWS, or Google Cloud.
- Excellent problem-solving and analytical skills.
- Ability to work collaboratively in a diverse team environment.
- Experience with data visualization tools such as Power BI or Tableau.
- Strong communication skills with the ability to convey technical concepts to non-technical stakeholders.
- Knowledge of data governance and data quality best practices.
- Hands-on experience with big data technologies and frameworks.
- A relevant certification in Azure is a plus.
- Ability to adapt to changing technologies and evolving business requirements.
Posted 2 weeks ago
6.0 - 10.0 years
9 - 13 Lacs
Ahmedabad
Work from Office
The Azure Data Bricks Engineer plays a critical role in establishing and maintaining an efficient data ecosystem within an organization. This position is integral to the development of data solutions leveraging the capabilities of Microsoft Azure Data Bricks. The engineer will work closely with data scientists and analytics teams to facilitate the transformation of raw data into actionable insights. With increasing reliance on big data technologies and cloud-based solutions, having an expert on board is vital for driving data-driven decision-making processes. The Azure Data Bricks Engineer will also be responsible for optimizing data workflows, ensuring data quality, and deploying scalable data solutions that align with organizational goals. This role requires not only technical expertise in handling large volumes of data but also the ability to collaborate across various functional teams to enhance operational efficiency.

Responsibilities:
- Design and implement scalable data pipelines using Azure Data Bricks.
- Develop ETL processes to efficiently extract, transform, and load data.
- Collaborate with data scientists and analysts to define and refine data requirements.
- Optimize Spark jobs for performance and efficiency.
- Monitor and troubleshoot production workflows and jobs.
- Implement data quality checks and validation processes.
- Create and maintain technical documentation related to data architecture.
- Conduct code reviews to ensure best practices are followed.
- Work on integrating data from various sources including databases, APIs, and third-party services.
- Utilize SQL and Python for data manipulation and analysis.
- Collaborate with DevOps teams to deploy and maintain data solutions.
- Stay updated with the latest trends and updates in Azure Data Bricks and related technologies.
- Facilitate data visualization initiatives for better data-driven insights.
- Provide training and support to team members on data tools and practices.
- Participate in cross-functional projects to enhance data sharing and access.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Minimum of 6 years of experience in data engineering or a related domain.
- Strong expertise in Azure Data Bricks and data lake concepts.
- Proficiency with SQL, Python, and Spark.
- Solid understanding of data warehousing concepts.
- Experience with ETL tools and frameworks.
- Familiarity with cloud platforms such as Azure, AWS, or Google Cloud.
- Excellent problem-solving and analytical skills.
- Ability to work collaboratively in a diverse team environment.
- Experience with data visualization tools such as Power BI or Tableau.
- Strong communication skills with the ability to convey technical concepts to non-technical stakeholders.
- Knowledge of data governance and data quality best practices.
- Hands-on experience with big data technologies and frameworks.
- A relevant certification in Azure is a plus.
- Ability to adapt to changing technologies and evolving business requirements.
Posted 2 weeks ago
6.0 - 10.0 years
9 - 13 Lacs
Hyderabad
Work from Office
The Azure Data Bricks Engineer plays a critical role in establishing and maintaining an efficient data ecosystem within an organization. This position is integral to the development of data solutions leveraging the capabilities of Microsoft Azure Data Bricks. The engineer will work closely with data scientists and analytics teams to facilitate the transformation of raw data into actionable insights. With increasing reliance on big data technologies and cloud-based solutions, having an expert on board is vital for driving data-driven decision-making processes. The Azure Data Bricks Engineer will also be responsible for optimizing data workflows, ensuring data quality, and deploying scalable data solutions that align with organizational goals. This role requires not only technical expertise in handling large volumes of data but also the ability to collaborate across various functional teams to enhance operational efficiency.

Responsibilities:
- Design and implement scalable data pipelines using Azure Data Bricks.
- Develop ETL processes to efficiently extract, transform, and load data.
- Collaborate with data scientists and analysts to define and refine data requirements.
- Optimize Spark jobs for performance and efficiency.
- Monitor and troubleshoot production workflows and jobs.
- Implement data quality checks and validation processes.
- Create and maintain technical documentation related to data architecture.
- Conduct code reviews to ensure best practices are followed.
- Work on integrating data from various sources including databases, APIs, and third-party services.
- Utilize SQL and Python for data manipulation and analysis.
- Collaborate with DevOps teams to deploy and maintain data solutions.
- Stay updated with the latest trends and updates in Azure Data Bricks and related technologies.
- Facilitate data visualization initiatives for better data-driven insights.
- Provide training and support to team members on data tools and practices.
- Participate in cross-functional projects to enhance data sharing and access.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Minimum of 6 years of experience in data engineering or a related domain.
- Strong expertise in Azure Data Bricks and data lake concepts.
- Proficiency with SQL, Python, and Spark.
- Solid understanding of data warehousing concepts.
- Experience with ETL tools and frameworks.
- Familiarity with cloud platforms such as Azure, AWS, or Google Cloud.
- Excellent problem-solving and analytical skills.
- Ability to work collaboratively in a diverse team environment.
- Experience with data visualization tools such as Power BI or Tableau.
- Strong communication skills with the ability to convey technical concepts to non-technical stakeholders.
- Knowledge of data governance and data quality best practices.
- Hands-on experience with big data technologies and frameworks.
- A relevant certification in Azure is a plus.
- Ability to adapt to changing technologies and evolving business requirements.
Posted 3 weeks ago
6.0 - 10.0 years
8 - 12 Lacs
Mumbai
Work from Office
The Azure Data Bricks Engineer plays a critical role in establishing and maintaining an efficient data ecosystem within an organization. This position is integral to the development of data solutions leveraging the capabilities of Microsoft Azure Data Bricks. The engineer will work closely with data scientists and analytics teams to facilitate the transformation of raw data into actionable insights. With increasing reliance on big data technologies and cloud-based solutions, having an expert on board is vital for driving data-driven decision-making processes. The Azure Data Bricks Engineer will also be responsible for optimizing data workflows, ensuring data quality, and deploying scalable data solutions that align with organizational goals. This role requires not only technical expertise in handling large volumes of data but also the ability to collaborate across various functional teams to enhance operational efficiency.

Responsibilities:
- Design and implement scalable data pipelines using Azure Data Bricks.
- Develop ETL processes to efficiently extract, transform, and load data.
- Collaborate with data scientists and analysts to define and refine data requirements.
- Optimize Spark jobs for performance and efficiency.
- Monitor and troubleshoot production workflows and jobs.
- Implement data quality checks and validation processes.
- Create and maintain technical documentation related to data architecture.
- Conduct code reviews to ensure best practices are followed.
- Work on integrating data from various sources including databases, APIs, and third-party services.
- Utilize SQL and Python for data manipulation and analysis.
- Collaborate with DevOps teams to deploy and maintain data solutions.
- Stay updated with the latest trends and updates in Azure Data Bricks and related technologies.
- Facilitate data visualization initiatives for better data-driven insights.
- Provide training and support to team members on data tools and practices.
- Participate in cross-functional projects to enhance data sharing and access.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Minimum of 6 years of experience in data engineering or a related domain.
- Strong expertise in Azure Data Bricks and data lake concepts.
- Proficiency with SQL, Python, and Spark.
- Solid understanding of data warehousing concepts.
- Experience with ETL tools and frameworks.
- Familiarity with cloud platforms such as Azure, AWS, or Google Cloud.
- Excellent problem-solving and analytical skills.
- Ability to work collaboratively in a diverse team environment.
- Experience with data visualization tools such as Power BI or Tableau.
- Strong communication skills with the ability to convey technical concepts to non-technical stakeholders.
- Knowledge of data governance and data quality best practices.
- Hands-on experience with big data technologies and frameworks.
- A relevant certification in Azure is a plus.
- Ability to adapt to changing technologies and evolving business requirements.
Posted 3 weeks ago
3.0 - 5.0 years
5 - 8 Lacs
Bengaluru
Remote
As a Senior Azure Data Engineer, your responsibilities will include:
- Building scalable data pipelines using Databricks and PySpark
- Transforming raw data into usable business insights
- Integrating Azure services like Blob Storage, Data Lake, and Synapse Analytics
- Deploying and maintaining machine learning models using MLlib or TensorFlow
- Executing large-scale Spark jobs with performance tuning on Spark Pools
- Leveraging Databricks Notebooks and managing workflows with MLflow (see the sketch below)

Qualifications:
- Bachelor's/Master's in Computer Science, Data Science, or equivalent
- 7+ years in Data Engineering, with 3+ years in Azure Databricks
- Strong hands-on experience with: PySpark, Spark SQL, RDDs, Pandas, NumPy, Delta Lake
- Azure ecosystem: Data Lake, Blob Storage, Synapse Analytics
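For illustration, a hedged sketch of the MLflow workflow management mentioned above: log parameters, metrics, and a Spark ML model from a notebook. The experiment path, tiny training data, and metric are placeholder assumptions:

```python
# MLflow tracking sketch; assumes a Databricks notebook where `spark` and
# MLflow tracking are preconfigured.
import mlflow
import mlflow.spark
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression

data = spark.createDataFrame([(1.0, 10.0), (2.0, 19.5), (3.0, 31.0)], ["x", "revenue"])
train_df = VectorAssembler(inputCols=["x"], outputCol="features").transform(data)

mlflow.set_experiment("/Shared/demand-forecast")  # hypothetical experiment path

with mlflow.start_run():
    lr = LinearRegression(featuresCol="features", labelCol="revenue", regParam=0.1)
    model = lr.fit(train_df)
    mlflow.log_param("regParam", 0.1)
    mlflow.log_metric("rmse", model.summary.rootMeanSquaredError)
    mlflow.spark.log_model(model, "model")  # registers the fitted model artifact
```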
Posted 3 weeks ago
6.0 - 7.0 years
9 - 13 Lacs
Chennai
Work from Office
The Azure Data Bricks Engineer plays a critical role in establishing and maintaining an efficient data ecosystem within an organization. This position is integral to the development of data solutions leveraging the capabilities of Microsoft Azure Data Bricks. The engineer will work closely with data scientists and analytics teams to facilitate the transformation of raw data into actionable insights. With increasing reliance on big data technologies and cloud-based solutions, having an expert on board is vital for driving data-driven decision-making processes. The Azure Data Bricks Engineer will also be responsible for optimizing data workflows, ensuring data quality, and deploying scalable data solutions that align with organizational goals. This role requires not only technical expertise in handling large volumes of data but also the ability to collaborate across various functional teams to enhance operational efficiency. - Design and implement scalable data pipelines using Azure Data Bricks. - Develop ETL processes to efficiently extract, transform, and load data. - Collaborate with data scientists and analysts to define and refine data requirements. - Optimize Spark jobs for performance and efficiency. - Monitor and troubleshoot production workflows and jobs. - Implement data quality checks and validation processes. - Create and maintain technical documentation related to data architecture. - Conduct code reviews to ensure best practices are followed. - Work on integrating data from various sources including databases, APIs, and third-party services. - Utilize SQL and Python for data manipulation and analysis. - Collaborate with DevOps teams to deploy and maintain data solutions. - Stay updated with the latest trends and updates in Azure Data Bricks and related technologies. - Facilitate data visualization initiatives for better data-driven insights. - Provide training and support to team members on data tools and practices. - Participate in cross-functional projects to enhance data sharing and access. - Bachelor's degree in Computer Science, Information Technology, or a related field. - Minimum of 6 years of experience in data engineering or a related domain. - Strong expertise in Azure Data Bricks and data lake concepts. - Proficiency with SQL, Python, and Spark. - Solid understanding of data warehousing concepts. - Experience with ETL tools and frameworks. - Familiarity with cloud platforms such as Azure, AWS, or Google Cloud. - Excellent problem-solving and analytical skills. - Ability to work collaboratively in a diverse team environment. - Experience with data visualization tools such as Power BI or Tableau. - Strong communication skills with the ability to convey technical concepts to non-technical stakeholders. - Knowledge of data governance and data quality best practices. - Hands-on experience with big data technologies and frameworks. - A relevant certification in Azure is a plus. - Ability to adapt to changing technologies and evolving business requirements.
Posted 3 weeks ago