5.0 - 9.0 years
0 Lacs
Haryana
On-site
As a Senior Data Engineer (Azure MS Fabric) at Srijan Technologies PVT LTD, located in Gurugram, Haryana, India, you will be responsible for designing and developing scalable data pipelines using Microsoft Fabric. Your role will involve working on both batch and real-time ingestion and transformation, integrating with Azure Data Factory for smooth data flow, and collaborating with data architects to implement governed Lakehouse models in Microsoft Fabric.

You will be expected to monitor and optimize the performance of data pipelines and notebooks in Microsoft Fabric, applying tuning strategies to reduce costs, improve scalability, and ensure reliable data delivery. Collaboration with cross-functional teams, including BI developers, analysts, and data scientists, is essential to gather requirements and build high-quality datasets. Additionally, you will need to document pipeline logic, lakehouse architecture, and semantic layers clearly, following development standards and contributing to internal best practices for Microsoft Fabric-based solutions.

To excel in this role, you should have at least 5 years of experience in data engineering within the Azure ecosystem, with hands-on experience in Microsoft Fabric, Lakehouse, Dataflows Gen2, and Data Pipelines. Proficiency in building and orchestrating pipelines with Azure Data Factory and/or Microsoft Fabric Dataflows Gen2 is required, along with a strong command of SQL, PySpark, and Python applied to data integration and analytical workloads. Experience in optimizing pipelines and managing compute resources for cost-effective data processing in Azure/Fabric is also crucial.

Preferred skills include experience in the Microsoft Fabric ecosystem; familiarity with OneLake, Delta Lake, and Lakehouse principles; expert knowledge of PySpark; strong SQL and Python scripting within Microsoft Fabric or Databricks notebooks; and an understanding of Microsoft Purview, Unity Catalog, or Fabric-native tools for metadata, lineage, and access control. Exposure to DevOps practices for Fabric and Power BI, as well as knowledge of Azure Databricks for Spark-based transformations and Delta Lake pipelines, would be considered a plus.

If you are passionate about developing efficient data solutions in a collaborative environment and have a strong background in data engineering within the Azure ecosystem, this role as a Senior Data Engineer at Srijan Technologies PVT LTD could be the perfect fit for you. Apply now to be a part of a dynamic team driving innovation in data architecture and analytics.
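For context on the kind of pipeline work this posting describes, here is a minimal PySpark sketch of batch ingestion into a Fabric Lakehouse Delta table. The source path, column names, and table name are hypothetical placeholders, and the same pattern applies in Azure Databricks notebooks:

```python
# Minimal batch-ingestion sketch for a Microsoft Fabric notebook (PySpark).
# File path, columns, and table name are hypothetical placeholders;
# `spark` is the session that Fabric notebooks provide automatically.
from pyspark.sql import functions as F

# Read raw files that have landed in the lakehouse Files area.
raw = (spark.read
       .option("header", "true")
       .csv("Files/landing/orders/*.csv"))

# Light transformation: type casting, deduplication, audit column.
clean = (raw
         .withColumn("order_ts", F.to_timestamp("order_ts"))
         .withColumn("amount", F.col("amount").cast("double"))
         .dropDuplicates(["order_id"])
         .withColumn("_ingested_at", F.current_timestamp()))

# Persist as a Delta table in the lakehouse for downstream BI consumption.
(clean.write
 .format("delta")
 .mode("append")
 .saveAsTable("orders_bronze"))
```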
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Haryana
On-site
As a Senior Data Engineer (Azure MS Fabric) at Srijan Technologies PVT LTD, located in Gurugram, Haryana, India, you will be responsible for designing and developing scalable data pipelines using Microsoft Fabric. Your primary focus will be on developing and optimizing data pipelines, including Fabric Notebooks, Dataflows Gen2, and Lakehouse architecture for both batch and real-time ingestion and transformation. You will collaborate with data architects and engineers to implement governed Lakehouse models in Microsoft Fabric, ensuring data solutions are performant, reusable, and aligned with business needs and compliance standards.

Monitoring and improving the performance of data pipelines and notebooks in Microsoft Fabric will be a key aspect of your role. You will apply tuning strategies to reduce costs, improve scalability, and ensure reliable data delivery across domains. Working closely with BI developers, analysts, and data scientists, you will gather requirements and build high-quality datasets to support self-service BI initiatives. Additionally, documenting pipeline logic, lakehouse architecture, and semantic layers clearly will be essential. Your experience with Lakehouses, Notebooks, Data Pipelines, and Direct Lake in Microsoft Fabric will be crucial in delivering reliable, secure, and efficient data solutions that integrate with Power BI, Azure Synapse, and other Microsoft services.

You should have at least 5 years of experience in data engineering within the Azure ecosystem, with hands-on experience in Microsoft Fabric components such as Lakehouse, Dataflows Gen2, and Data Pipelines. Proficiency in building and orchestrating pipelines with Azure Data Factory and/or Microsoft Fabric Dataflows Gen2 is required, as are a strong command of SQL, PySpark, and Python and experience in optimizing pipelines for cost-effective data processing in Azure/Fabric.

Preferred skills include experience in the Microsoft Fabric ecosystem; familiarity with OneLake, Delta Lake, and Lakehouse principles; expert knowledge of PySpark; strong SQL and Python scripting within Microsoft Fabric or Databricks notebooks; and an understanding of Microsoft Purview or Unity Catalog. Exposure to DevOps practices for Fabric and Power BI, and knowledge of Azure Databricks for Spark-based transformations and Delta Lake pipelines, would be advantageous.
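The tuning and reliable-delivery duties above often come down to incremental loads rather than full reloads. Below is a minimal, hypothetical Delta Lake upsert sketch using the delta-spark API available in Fabric and Databricks notebooks; the staging path, table name, and join key are placeholders:

```python
# Hypothetical incremental-upsert sketch (PySpark + delta-spark): the
# kind of pattern used to keep a Lakehouse table current without full
# reloads. Paths, table names, and the business key are placeholders.
from delta.tables import DeltaTable

# New or changed rows produced by an upstream pipeline step.
updates = spark.read.format("delta").load("Tables/orders_staging")

target = DeltaTable.forName(spark, "orders_silver")

# Upsert on the business key; only changed rows are rewritten,
# which keeps compute costs down compared with overwriting.
(target.alias("t")
 .merge(updates.alias("s"), "t.order_id = s.order_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())
```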
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
Changing the world through digital experiences is what Adobe is all about. We give everyone - from emerging artists to global brands - everything they need to design and deliver exceptional digital experiences! We are passionate about empowering people to create beautiful and powerful images, videos, and apps, transforming how companies interact with customers across every screen. We are on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours!

Role Summary: Digital Experience (DX) is a USD 4B+ business serving the needs of enterprise businesses, including 95%+ of Fortune 500 organizations. Adobe Marketo Engage, the leading marketing automation platform within Adobe DX, helps businesses engage customers effectively through various surfaces and touchpoints. We are looking for strong and passionate engineers to join our team as we scale the business by building next-gen products and contributing to our existing offerings. If you're passionate about innovative technology, then we would be excited to talk to you!

What You'll Do:
- Collaborate with architects, product management, and engineering teams to build solutions that increase the product's value.
- Develop technical specifications, prototypes, and presentations to communicate your ideas.
- Stay proficient in emerging industry technologies and trends, communicating that knowledge to the team and using it to influence product direction.
- Demonstrate exceptional coding skills by writing unit tests and ensuring code quality and code coverage.
- Ensure code is always checked in and that source control standards are followed.

What You Need to Succeed:
- 5+ years of experience in software development.
- Expertise in Java, Spring Boot, REST services, MySQL or Postgres, and MongoDB.
- Good working knowledge of the Azure ecosystem and Azure Data Factory.
- Good understanding of working with Cassandra, Solr, ElasticSearch, and Snowflake.
- Ambitious and not afraid to tackle unknowns, demonstrating a strong bias to action.
- Knowledge of Apache Spark and Scala is an added advantage.
- Strong interpersonal, analytical, problem-solving, and conflict-resolution skills.
- Excellent speaking, writing, and presentation skills, as well as the ability to persuade, encourage, and empower others.
- Bachelor's/Master's in Computer Science or a related field.

Adobe aims to make Adobe.com accessible to any and all users. If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email accommodations@adobe.com or call (408) 536-3015.
Posted 2 weeks ago
8.0 - 12.0 years
0 Lacs
Karnataka
On-site
Candescent is the largest non-core digital banking provider, specializing in transformative technologies that connect account opening, digital banking, and branch solutions for banks and credit unions of all sizes. Our Candescent solutions are the driving force behind the top three U.S. mobile banking apps, trusted by financial institutions nationwide. We offer an extensive portfolio of industry-leading products and services, along with an ecosystem of out-of-the-box and integrated partner solutions. Our API-first architecture and developer tools empower financial institutions to enhance their capabilities by seamlessly integrating custom-built or third-party solutions. Our commitment to connected experiences across in-person, remote, and digital channels revolutionizes customer service.

As part of the team, your essential duties and responsibilities will include:
- Data Lake Organization: Structuring data lake assets using the medallion architecture with a domain-driven and source-driven approach (see the sketch below).
- Data Pipeline Design and Development: Developing, deploying, and orchestrating data pipelines using Data Factory and PySpark/SQL notebooks to ensure smooth data flow.
- Design and Build Data Systems: Creating and maintaining Candescent's data systems, databases, and data warehouses for managing large volumes of data.
- Data Compliance and Security: Ensuring data systems comply with security standards to protect sensitive information.
- Collaboration: Working closely with the Data Management team to implement approved data solutions.
- Troubleshooting and Optimization: Identifying and resolving data-related issues while continuously optimizing data systems for better performance.

To excel in this role, you must meet the following requirements:
- 8+ years of IT experience in implementing design patterns for data systems.
- Extensive experience in building API-based data pipelines using the Azure ecosystem.
- Proficiency in ETL/ELT technologies with a focus on the Microsoft Fabric stack (ADF, Spark, SQL).
- Expertise in building data warehouse models utilizing Azure Synapse and Azure Delta lakehouse.
- Programming experience in data processing languages such as SQL/T-SQL, Python, or Scala.
- Experience with code management using GitHub as the primary repository.
- Familiarity with DevOps practices, configuration frameworks, and CI/CD automation tooling.

At Candescent, we value collaboration with report developers/analysts and business teams to enhance data models feeding BI tools and improve data accessibility. Offers of employment are subject to the successful completion of applicable screening criteria. Candescent is an equal opportunity employer committed to diversity and inclusion. We do not accept unsolicited resumes from recruitment agencies not on our preferred supplier list.
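As an illustration of the medallion layering mentioned above, here is a minimal, hypothetical PySpark sketch promoting data from a bronze (raw) layer to a silver (cleansed) layer. All table and column names are placeholders:

```python
# Hypothetical bronze -> silver promotion in a medallion architecture.
# Table and column names are placeholders; `spark` is the
# notebook-provided session in Fabric or Databricks.
from pyspark.sql import functions as F

bronze = spark.read.table("bronze.accounts_raw")

silver = (bronze
          # Enforce types and drop rows missing the business key.
          .withColumn("opened_date", F.to_date("opened_date"))
          .filter(F.col("account_id").isNotNull())
          # One row per business key.
          .dropDuplicates(["account_id"])
          # Standardize categorical values for downstream marts.
          .withColumn("status", F.upper(F.trim("status"))))

silver.write.format("delta").mode("overwrite").saveAsTable("silver.accounts")
```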
Posted 2 weeks ago
5.0 - 10.0 years
7 - 12 Lacs
Gurugram
Work from Office
We are seeking a skilled Python Developer with experience in developing applications that leverage Generative AI (GenAI) capabilities and integrate deeply with Microsoft Azure services. The ideal candidate should have a strong foundation in Python, experience with GenAI libraries or APIs (such as OpenAI, Azure OpenAI, and LangChain), and the ability to build, deploy, and manage AI-powered solutions in the Azure ecosystem.

Qualifications: Bachelor's or Master's degree in Computer Science, AI/ML, Data Science, or a related field (or equivalent practical experience in Python and AI application development).
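For illustration, here is a minimal sketch of calling a chat model through the Azure OpenAI Python SDK (openai>=1.0). The endpoint, API version, deployment name, and environment variables are hypothetical placeholders:

```python
# Minimal Azure OpenAI chat-completion sketch (openai>=1.0 SDK).
# Endpoint, API version, and deployment name are placeholders.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="my-gpt-4o-deployment",  # your Azure deployment name, not the model family
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize what a data lakehouse is."},
    ],
)
print(response.choices[0].message.content)
```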
Posted 4 weeks ago
2.0 - 4.0 years
4 - 7 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
Type: Contract (36 Months Project)
Availability: Immediate Joiners Preferred

We're looking for a Junior Microsoft Purview Specialist to support our data cataloging and governance initiatives in a fast-paced remote setup.

Key Responsibilities:
- Assist in the configuration and management of Microsoft Purview
- Support data cataloging, classification, and lineage tracking
- Work with data owners to ensure proper tagging and metadata management
- Help implement data governance policies
- Assist in integrating Purview with Azure and on-premises sources (see the catalog-query sketch below)
- Document governance processes and resolve Purview-related issues
- Collaborate with project teams for timely delivery

Primary Skills Required:
- Microsoft Purview
- Data Cataloging & Classification
- Metadata Management
- Understanding of Data Governance
- Azure Data Services (basic knowledge is a plus)
- Strong communication and collaboration skills

Preferred Qualifications:
- Certification/training in Microsoft Purview or related tools
- Exposure to the Azure ecosystem: Data Factory, Synapse, Data Lake
- Ability to work independently in a remote environment

If interested, please share your profile with the following details:
- Full Name:
- Total Experience:
- Relevant Microsoft Purview Experience:
- Current CTC:
- Expected CTC:
- Notice Period / Availability:
- Current Location:
- Preferred Location (Remote):

Location: Remote (Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad)
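By way of illustration, here is a minimal sketch of a keyword search against a Purview account's data catalog from Python. It assumes the Atlas-compatible data-plane search endpoint and the API version shown; the account name is a placeholder, and azure-identity supplies the token:

```python
# Hypothetical catalog keyword search against a Microsoft Purview account.
# Assumes the data-plane search endpoint and API version shown here;
# the account name is a placeholder. Requires: azure-identity, requests.
import requests
from azure.identity import DefaultAzureCredential

ACCOUNT = "my-purview-account"  # placeholder
token = DefaultAzureCredential().get_token("https://purview.azure.net/.default")

resp = requests.post(
    f"https://{ACCOUNT}.purview.azure.com/catalog/api/search/query",
    params={"api-version": "2022-08-01-preview"},
    headers={"Authorization": f"Bearer {token.token}"},
    json={"keywords": "customer", "limit": 5},
)
resp.raise_for_status()

# Print the qualified name and type of each matching catalog asset.
for asset in resp.json().get("value", []):
    print(asset.get("qualifiedName"), "-", asset.get("entityType"))
```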
Posted 1 month ago
9.0 - 14.0 years
20 - 35 Lacs
Bengaluru
Hybrid
We are seeking an AI Tech Lead to spearhead the architecture, development, and deployment of advanced enterprise AI solutions built on the Microsoft Azure ecosystem, leveraging Azure Foundry, vector databases, and Graph-based Retrieval-Augmented Generation (Graph RAG).

Required Candidate Profile: This strategic role involves leading full-stack AI development, connecting SharePoint, SQL, and custom application layers, to build intelligent bots and real-time decision systems and to drive AI transformation.
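To illustrate the retrieval step at the heart of a RAG system like the one described, here is a minimal, self-contained cosine-similarity search over toy embeddings. All data is made up; a production system would obtain vectors from an embedding model and query a vector database instead:

```python
# Toy nearest-neighbor retrieval: the core lookup step behind RAG.
# Embeddings here are made-up 3-d vectors; real systems use an
# embedding model and a vector database instead of an in-memory dict.
import numpy as np

docs = {
    "loan-policy.md": np.array([0.9, 0.1, 0.0]),
    "holiday-faq.md": np.array([0.1, 0.8, 0.2]),
    "api-guide.md":   np.array([0.2, 0.1, 0.9]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

query = np.array([0.85, 0.15, 0.05])  # embedding of the user question

# Rank documents by similarity; the top hits become the LLM's context.
ranked = sorted(docs.items(), key=lambda kv: cosine(query, kv[1]), reverse=True)
for name, vec in ranked[:2]:
    print(f"{name}: {cosine(query, vec):.3f}")
```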
Posted 1 month ago
10 - 20 years
30 - 40 Lacs
Chennai, Bengaluru
Hybrid
Title: Sr Data and MLOps Engineer
Location: Hybrid (Bangalore/Chennai/Trichy)

Description:
• Experience within the Azure ecosystem, including Azure AI Search, Azure Storage Blob, and Azure Postgres, with expertise in leveraging these tools for data processing, storage, and analytics tasks.
• Proficiency in preprocessing and cleaning large datasets efficiently using Azure tools, Python, and other data manipulation tools.
• Strong background in Data Science/MLOps, with hands-on experience in DevOps, CI/CD, Azure cloud computing, and model monitoring.
• Expertise in healthcare data standards, such as HIPAA and FHIR, with a deep understanding of sensitive data handling and data masking techniques to protect PII and PHI.
• In-depth knowledge of search algorithms, indexing techniques, and retrieval models for effective information retrieval tasks. Experience with chunking techniques and working with vectors and vector databases like Pinecone.
• Ability to design, develop, and maintain scalable data pipelines for processing and transforming large volumes of structured and unstructured data, ensuring performance and scalability.
• Implement best practices for data storage, retrieval, and access control to maintain data integrity, security, and compliance with regulatory requirements.
• Implement efficient data processing workflows to support the training and evaluation of solutions using large language models (LLMs), ensuring that models are reliable, scalable, and performant.
• Proactively identify and resolve data quality issues, pipeline failures, or resource contention to minimize disruption to systems.
• Experience with large language model frameworks, such as LangChain, and the ability to integrate them into data pipelines for natural language processing tasks.
• Familiarity with Snowflake for data management and analytics, with the ability to work within the Snowflake ecosystem to support data processes.
• Knowledge of cloud computing principles and hands-on experience with deploying, scaling, and monitoring AI solutions on platforms like Azure, AWS, and Snowflake.
• Ability to communicate complex technical concepts effectively to both technical and non-technical stakeholders, and to collaborate with cross-functional teams.
• Analytical mindset with attention to detail, coupled with the ability to solve complex problems efficiently and effectively.
• Knowledge of cloud cost management principles and best practices to optimize cloud resource usage and minimize costs.
• Experience with ML model deployment, including testing, validation, and integration of machine learning models into production systems.
• Knowledge of model versioning and management tools, such as MLflow, DVC, or Azure Machine Learning, for tracking experiments, versions, and deployments (see the sketch below).
• Model monitoring and performance optimization, including tracking model drift and addressing performance issues to ensure models remain accurate and reliable.
• Automation of ML workflows through CI/CD pipelines, enabling smooth model training, testing, validation, and deployment.
• Monitoring and logging of AI/ML systems post-deployment to ensure consistent reliability, scalability, and performance.
• Collaboration with data scientists and engineering teams to facilitate model retraining, fine-tuning, and updating.
• Familiarity with containerization technologies, like Docker and Kubernetes, for deploying and scaling machine learning models in production environments.
• Ability to implement model governance practices to ensure compliance and auditability of AI/ML systems.
• Understanding of model explainability and the use of tools and techniques to provide transparent insights into model behavior.

Must Have:
• Minimum of 10 years' experience as a data engineer
• Hands-on experience with the Azure cloud ecosystem
• Hands-on experience using Python for data manipulation
• Deep understanding of vectors and vector databases
• Hands-on experience scaling a POC to production
• Hands-on experience using tools such as Document Intelligence, Snowflake, Azure Function Apps, and Azure AI Search
• Experience working with PII/PHI
• Hands-on experience working with unstructured data
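As a concrete example of the experiment-tracking practice listed above, here is a minimal MLflow sketch. The experiment name, parameters, and metric values are illustrative placeholders; a real setup would point MLFLOW_TRACKING_URI at a shared tracking server, such as an Azure Machine Learning workspace:

```python
# Minimal experiment-tracking sketch with MLflow. Parameters and
# metrics are illustrative placeholders; by default runs are logged
# to a local ./mlruns directory unless MLFLOW_TRACKING_URI is set.
import mlflow

mlflow.set_experiment("churn-model")

with mlflow.start_run(run_name="baseline"):
    # Record the configuration that produced this run...
    mlflow.log_param("model_type", "logistic_regression")
    mlflow.log_param("train_rows", 120_000)
    # ...and the evaluation results, so runs are comparable later.
    mlflow.log_metric("auc", 0.87)
    mlflow.log_metric("precision_at_10", 0.42)
```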
Posted 2 months ago