
11 Notebooks Jobs

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

8.0 - 13.0 years

20 - 35 Lacs

Pune

Hybrid

Job Description:

JOB SUMMARY
We are seeking an experienced Microsoft Fabric architect who brings technical expertise and architectural instincts to lead the design, development, and scaling of our secure, enterprise-grade data ecosystem. This is not a traditional BI/Data Engineering position; we are looking for deep hands-on expertise in Fabric administration, CI/CD integration, and security/governance configuration in production environments.

ESSENTIAL DUTIES
- Provide technical leadership on design and architectural decisions, data platform evolution, and vendor/tool selection.
- Leverage expertise in the data lakehouse on Microsoft Fabric, including optimal use of OneLake, Dataflows Gen2, Pipelines, and Synapse Data Engineering.
- Build and maintain scalable data pipelines to ingest, transform, and curate data from a variety of structured and semi-structured sources.
- Implement and enforce data modelling standards, including medallion architecture, Delta Lake, and dimensional modelling best practices (see the sketch below).
- Collaborate with analysts and business users to deliver well-structured, trusted datasets for self-service reporting and analysis in Power BI.
- Establish data engineering practices that ensure reliability, performance, governance, and security.
- Monitor and tune workloads within the Microsoft Fabric platform to ensure cost-effective and efficient operations.

EDUCATION / CERTIFICATION REQUIREMENTS
- Bachelor's degree in computer science, data science, or a related field is required.
- A minimum of 3 years of experience in data engineering, with at least 2 years in a cloud-native or modern data platform environment, is required.
- Prior experience in a public accounting, financial, or other professional services environment is preferred.

SUCCESSFUL CHARACTERISTICS / SKILLS
- Extensive, hands-on expertise with Microsoft Fabric, including Dataflows Gen2, Pipelines, Synapse Data Engineering, Notebooks, and OneLake.
- Proven experience designing lakehouse or data warehouse architecture, including data ingestion frameworks, staging layers, and semantic models.
- Strong SQL and T-SQL skills and familiarity with Power Query (M) and Delta Lake formats.
- Understanding of data governance, data security, lineage, and metadata management practices.
- Ability to lead technical decisions and set standards in the absence of a dedicated Data Architect.
- Strong communication skills with the ability to collaborate across technical and non-technical teams.
- Results-driven; high integrity; able to influence, negotiate, and build relationships; superior communication skills; able to make complex decisions and lead teams through complex challenges.
- Self-disciplined; able to work in a virtual, agile, globally sourced team.
- Strategic, out-of-the-box thinker with problem-solving experience to assess, analyze, troubleshoot, and resolve issues.
- Excellent analytical skills, extraordinary attention to detail, and the ability to present recommendations to business teams based on trends, patterns, and modern best practices.
- Experience with Power BI datasets and semantic modelling is an asset.
- Familiarity with Microsoft Purview or similar governance tools is an asset.
- Working knowledge of Python, PySpark, or KQL is an asset.
- Experience with and passion for technology, providing an exceptional experience both internally for our employees and externally for clients and prospects.
- Strong ownership, a bias to action, and the know-how to succeed in ambiguity.
- Ability to deliver value consistently by motivating teams toward achieving goals.

Please share your resume and the following details at sachin.patil@newvision-software.com for the hiring process:
- Total Experience:
- Relevant Experience:
- Current CTC:
- Expected CTC:
- Notice Period / Serving (LWD):
- Any Offer in Hand (LPA):
- Current Location:
- Preferred Location:
- Education:
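The medallion pattern named in the duties above is easy to illustrate. Below is a minimal, hypothetical PySpark sketch of a bronze-to-silver refinement step on Delta Lake; the table paths, schema, and column names are illustrative assumptions, not details from this posting.

```python
# Hypothetical bronze -> silver refinement step in a medallion architecture.
# Paths and column names are illustrative assumptions only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("medallion-demo").getOrCreate()

# Bronze: raw ingested records, stored as-is in Delta format.
bronze = spark.read.format("delta").load("/lakehouse/bronze/orders")

# Silver: cleaned, deduplicated, and type-conformed records.
silver = (
    bronze
    .dropDuplicates(["order_id"])                        # drop replayed events
    .filter(F.col("order_id").isNotNull())               # enforce a basic contract
    .withColumn("order_ts", F.to_timestamp("order_ts"))  # normalize types
)

(silver.write
    .format("delta")
    .mode("overwrite")
    .save("/lakehouse/silver/orders"))
```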

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As an experienced professional with 4+ years in data engineering, you will be responsible for the following:
- Strong proficiency in writing complex SQL queries, stored procedures, and performance tuning to ensure efficient data retrieval and manipulation.
- Expertise in Azure Data Factory (ADF) for creating pipelines, data flows, and orchestrating data movement within the Azure environment (see the sketch below).
- Proficiency in SQL Server Integration Services (SSIS) for ETL processes, package creation, and deployment to facilitate seamless data integration.
- Knowledge of Azure Synapse Analytics for data warehousing, distributed query execution, and integration with various Azure services.
- Familiarity with Jupyter Notebooks or Synapse Notebooks for data exploration and transformation.
- Understanding of Azure Blob Storage, Data Lake Storage, and their integration with data pipelines for efficient data storage and retrieval.
- Experience in Azure Analysis Services for building and managing semantic models to support business intelligence requirements.
- Knowledge of various data ingestion methods, including batch processing, real-time streaming, and incremental data loads, to ensure timely and accurate data processing.

Additional skills that would be advantageous for this role:
- Experience in integrating Fabric with Power BI, Synapse, and other Azure services to enhance data visualization and analytics capabilities.
- Setting up CI/CD pipelines for ETL/ELT processes using tools like Azure DevOps or GitHub Actions to streamline pipeline deployment.
- Familiarity with tools like Azure Event Hubs or Stream Analytics for large-scale data ingestion to support real-time processing needs.

This position is based in Chennai, India; there is currently 1 open position.
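As one concrete illustration of the ADF orchestration skills listed above, the hedged sketch below triggers and polls a pipeline run with the azure-mgmt-datafactory Python SDK. The subscription, resource group, factory, and pipeline names are placeholders, and the snippet assumes credentials are available via DefaultAzureCredential.

```python
# Hypothetical sketch: trigger an ADF pipeline run and poll its status.
# Subscription, resource group, factory, and pipeline names are placeholders.
import time
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

client = DataFactoryManagementClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
)

# Kick off a run, passing pipeline parameters as a plain dict.
run = client.pipelines.create_run(
    resource_group_name="rg-data",
    factory_name="adf-demo",
    pipeline_name="pl_ingest_orders",
    parameters={"load_date": "2024-01-01"},
)

# Poll until the run reaches a terminal state (Succeeded/Failed/Cancelled).
while True:
    status = client.pipeline_runs.get("rg-data", "adf-demo", run.run_id).status
    if status not in ("InProgress", "Queued"):
        break
    time.sleep(30)
print(f"Pipeline finished with status: {status}")
```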

Posted 1 week ago

Apply

10.0 - 14.0 years

0 Lacs

Kolkata, West Bengal

On-site

You are a highly skilled and strategic Data Architect with deep expertise in the Azure Data ecosystem. Your role will involve defining and driving the overall Azure-based data architecture strategy aligned with enterprise goals. You will architect and implement scalable data pipelines, data lakes, and data warehouses using Azure Data Lake, ADF, and Azure SQL/Synapse. Providing technical leadership on Azure Databricks for large-scale data processing and advanced analytics use cases is a crucial aspect of your responsibilities.

Integrating AI/ML models into data pipelines and supporting the end-to-end ML lifecycle, including training, deployment, and monitoring, will be part of your day-to-day tasks (see the scoring sketch below). Collaboration with cross-functional teams such as data scientists, DevOps engineers, and business analysts is essential. You will evaluate and recommend tools, platforms, and design patterns for data and ML infrastructure while mentoring data engineers and junior architects on best practices and architectural standards.

Your role will require a strong background in data modeling, ETL/ELT frameworks, and data warehousing concepts. Proficiency in SQL, Python, and PySpark and a solid understanding of AI/ML workflows and tools are necessary. Exposure to Azure DevOps and excellent communication and stakeholder management skills are also key requirements.

As a Data Architect at Lexmark, you will play a vital role in designing and overseeing robust, scalable, and secure data architectures to support advanced analytics and machine learning workloads. If you are an innovator looking to make your mark with a global technology leader, apply now to join our team in Kolkata, India.
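The posting's mention of integrating ML models into data pipelines can be illustrated with a hedged PySpark sketch: a pre-trained scikit-learn model applied at scale through a pandas UDF. The model path, feature columns, and table paths are assumptions for illustration, not details from the posting.

```python
# Hypothetical sketch: apply a pre-trained model inside a Spark pipeline.
# The model file, feature columns, and table paths are illustrative only.
import joblib
import pandas as pd
from pyspark.sql import SparkSession
from pyspark.sql.functions import pandas_udf

spark = SparkSession.builder.appName("ml-scoring").getOrCreate()
features = spark.read.format("delta").load("/lake/gold/customer_features")

@pandas_udf("double")
def score(tenure: pd.Series, monthly_spend: pd.Series) -> pd.Series:
    # Simplification: the model is loaded per batch; a broadcast variable
    # would avoid repeated loads in a production pipeline.
    model = joblib.load("/dbfs/models/churn_model.pkl")
    X = pd.DataFrame({"tenure": tenure, "monthly_spend": monthly_spend})
    return pd.Series(model.predict_proba(X)[:, 1])

scored = features.withColumn("churn_risk", score("tenure", "monthly_spend"))
scored.write.format("delta").mode("overwrite").save("/lake/gold/churn_scores")
```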

Posted 2 weeks ago

Apply

3.0 - 7.0 years

3 - 5 Lacs

Chennai, Tamil Nadu, India

On-site

Roles & Responsibilities:
- Total experience of 6+ years, with at least 4 years of relevant experience as a lead developer on data warehouse, big data, and Hadoop implementations in an Azure environment.
- Participate in the design and implementation of the analytics architecture.
- Experience working on a Hadoop distribution, with a good understanding of core concepts and best practices.
- Good experience building and tuning Spark pipelines in Python/Java/Scala.
- Good experience writing complex Hive queries to drive business-critical insights (see the sketch below).
- Understanding of Data Lake vs. Data Warehousing concepts.
- Experience with AWS Cloud; exposure to Lambda/EMR/Kinesis is good to have.
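For the Hive requirement above, a short hedged example: a windowed Hive query driven from PySpark, with a filter on the partition column so the engine can prune partitions. The database, table, and column names are invented for illustration.

```python
# Hypothetical sketch: a complex Hive query driven from PySpark.
# Database, table, and column names are illustrative assumptions.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("hive-insights")
         .enableHiveSupport()
         .getOrCreate())

# Window function over a partitioned Hive table; the WHERE clause on the
# partition column (dt) lets the engine prune partitions instead of scanning
# the whole table.
top_products = spark.sql("""
    SELECT region, product_id, revenue,
           RANK() OVER (PARTITION BY region ORDER BY revenue DESC) AS rnk
    FROM sales.daily_product_revenue
    WHERE dt BETWEEN '2024-01-01' AND '2024-01-31'
""").filter("rnk <= 10")

top_products.show()
```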

Posted 2 weeks ago

Apply

3.0 - 7.0 years

3 - 5 Lacs

Hyderabad, Telangana, India

On-site

Roles & Responsibilities:
- Total experience of 6+ years, with at least 4 years of relevant experience as a lead developer on data warehouse, big data, and Hadoop implementations in an Azure environment.
- Participate in the design and implementation of the analytics architecture.
- Experience working on a Hadoop distribution, with a good understanding of core concepts and best practices.
- Good experience building and tuning Spark pipelines in Python/Java/Scala (a tuning sketch follows below).
- Good experience writing complex Hive queries to drive business-critical insights.
- Understanding of Data Lake vs. Data Warehousing concepts.
- Experience with AWS Cloud; exposure to Lambda/EMR/Kinesis is good to have.
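This posting repeats the Spark pipeline-tuning requirement, so rather than duplicating the Hive example above, here is a hedged sketch of common tuning levers: adaptive query execution, a sensible shuffle-partition setting, and caching a DataFrame that feeds multiple aggregations. All values are illustrative starting points, not recommendations for any specific workload.

```python
# Hypothetical sketch: common Spark tuning levers for a batch pipeline.
# All values are illustrative starting points, not universal settings.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("tuned-pipeline")
         # Let Spark coalesce/skew-split shuffle partitions at runtime.
         .config("spark.sql.adaptive.enabled", "true")
         # Static fallback when AQE is unavailable; tune to data volume.
         .config("spark.sql.shuffle.partitions", "200")
         .getOrCreate())

events = spark.read.parquet("/data/events")

# Cache a DataFrame that is reused by multiple downstream aggregations.
sessions = events.filter("event_type = 'session'").cache()

daily = sessions.groupBy("event_date").count()
by_user = sessions.groupBy("user_id").count()

daily.write.mode("overwrite").parquet("/data/agg/daily")
by_user.write.mode("overwrite").parquet("/data/agg/by_user")
sessions.unpersist()
```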

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

We are looking for a highly skilled and experienced Senior Data Engineer to take charge of developing complex compliance and supervision models. Your expertise in cloud-based infrastructure, ETL pipeline development, and financial domains will be crucial in creating robust, scalable, and efficient solutions.

As a Senior Data Engineer, your key responsibilities will include leading the development of advanced models using AWS services such as EMR, Glue, and Glue Notebooks. You will design, build, and optimize scalable cloud infrastructure solutions, drawing on a minimum of 5 years of experience in cloud infrastructure. Creating, managing, and optimizing ETL pipelines using PySpark for large-scale data processing will also be a core part of your role.

In addition, you will be responsible for building and maintaining CI/CD pipelines for deploying and maintaining cloud-based applications, performing detailed data analysis to deliver actionable insights, and collaborating closely with cross-functional teams to ensure alignment with business goals. Operating effectively in agile or hybrid agile environments and enhancing existing frameworks to support evolving business needs will be key aspects of your role.

To qualify for this position, you must have a minimum of 5 years of experience with Python programming, 5+ years of experience in cloud infrastructure (particularly AWS), 3+ years of experience with PySpark (including usage with EMR or Glue Notebooks), and 3+ years of experience with Apache Airflow for workflow orchestration (a DAG sketch follows below). A strong understanding of capital markets, financial systems, or prior experience in the financial domain is essential, along with proficiency in cloud-native technologies and frameworks.

Furthermore, familiarity with CI/CD practices and tools like Jenkins, GitLab CI/CD, or AWS CodePipeline, experience with notebooks for interactive development, excellent problem-solving skills, and strong communication and interpersonal skills are required for this role. The ability to thrive in a fast-paced, dynamic environment is also crucial.

In return, you will receive standard company benefits. Join us at DATAECONOMY and be part of a fast-growing data & analytics company at the forefront of innovation in the industry.
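The Airflow orchestration requirement can be illustrated with a minimal, hedged DAG sketch. The DAG id, schedule, and task logic are placeholders, and the import paths and the `schedule` argument assume Airflow 2.4 or later.

```python
# Hypothetical sketch of an Airflow 2.x DAG orchestrating one ETL step.
# DAG id, schedule, and task logic are placeholders for illustration.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_and_load():
    # Placeholder for a PySpark/Glue job submission or data-copy step.
    print("running extract-and-load step")

with DAG(
    dag_id="compliance_model_refresh",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    etl = PythonOperator(
        task_id="extract_and_load",
        python_callable=extract_and_load,
    )
```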

Posted 2 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

The ideal candidate for this role should have a minimum of 4 years of experience in the following areas:

Requirements:
- Strong skills in writing complex queries, stored procedures, and performance tuning using SQL.
- Expertise in creating pipelines, data flows, and orchestrating data movement in Azure Data Factory (ADF).
- Proficiency in ETL processes, package creation, and deployment using SQL Server Integration Services (SSIS).
- Knowledge of data warehousing, distributed query execution, and integration with Azure services in Azure Synapse Analytics.
- Familiarity with Jupyter Notebooks or Synapse Notebooks for data exploration and transformation.
- Understanding of Azure Blob Storage, Data Lake Storage, and their integration with data pipelines.
- Experience in building and managing semantic models for business intelligence in Azure Analysis Services.
- Knowledge of data ingestion methods such as batch processing, real-time streaming, and incremental data loads (an incremental-load sketch follows below).

Additional Skills (Added Advantage):
- Experience in integrating Fabric with Power BI, Synapse, and other Azure services.
- Setting up CI/CD pipelines for ETL/ELT processes using tools like Azure DevOps or GitHub Actions.
- Familiarity with tools like Azure Event Hubs or Stream Analytics for large-scale data ingestion.

Location: Chennai, India
Number of Positions: 1
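The incremental-load requirement above lends itself to a small hedged sketch: a watermark-based delta pull in PySpark where only rows newer than the last loaded timestamp are appended. The table paths and the `modified_at` watermark column are illustrative assumptions.

```python
# Hypothetical sketch: watermark-based incremental load in PySpark.
# Paths and the modified_at watermark column are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("incremental-load").getOrCreate()

target_path = "/lake/silver/customers"
target = spark.read.format("delta").load(target_path)

# Highest timestamp already loaded; this is the watermark for the run.
# On a first run against an empty target this is None and a full-load
# branch would be needed; omitted here for brevity.
watermark = target.agg(F.max("modified_at")).first()[0]

source = spark.read.format("delta").load("/lake/bronze/customers")
new_rows = source.filter(F.col("modified_at") > F.lit(watermark))

# Append only the delta since the previous run.
new_rows.write.format("delta").mode("append").save(target_path)
```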

Posted 2 weeks ago

Apply

6.0 - 11.0 years

8 - 13 Lacs

Mumbai

Work from Office

Intermediate Azure Developer

About this role:
The Intermediate Azure Developer will analyze business requirements and translate them into Azure-specific solutions using the Azure toolsets (out of the box, configuration, customization). He/she should have: experience designing and building solutions using Azure declarative and programmatic approaches; knowledge of integrating Azure with Salesforce, on-premise legacy systems, and other cloud solutions; and experience with integration middleware and Enterprise Service Bus. He/she should also have experience translating design requirements or agile user stories into Azure-specific solutions; consuming or sending messages in XML/JSON format to third parties using SOAP and REST APIs; and expertise in Azure PaaS service SDKs for .NET, .NET Core, and Web API, covering Storage, App Insights, Fluent API, Azure App Services, Azure Serverless, Microservices on Azure, API Management, Event Hub, Logic Apps, Service Bus and Message Queues, Azure Storage, Key Vaults, Application Insights, Azure Jobs, etc. He/she collaborates with teams and supports emerging technologies to ensure effective communication and achievement of objectives.

Additional details:
- Will be working on a global deployment of Azure Platform Management to 40 countries and corresponding languages, 1,000 locations, and 25,000 users.
- Develop large-scale distributed software services and solutions using Azure technologies.
- Develop best-in-class engineering services that are well-defined, modularized, secure, reliable, configurable, flexible, diagnosable, actively monitored, and reusable.
- Hands-on use of various Azure PaaS service SDKs for .NET, .NET Core, and Web API, including Storage, App Insights, Fluent API, etc.
- Hands-on experience with Azure App Services, Azure Serverless, Microservices on Azure, API Management, Event Hub, Logic Apps, Service Bus and Message Queues, Azure Storage, Key Vaults, Application Insights, Azure Jobs, Databricks, Notebooks, PySpark scripting, etc.
- Hands-on experience with Azure DevOps: building CI/CD, Azure support, code management and branching, etc.
- Good knowledge of programming and querying SQL Server databases.
- Experience writing automated test cases with automated testing frameworks (NUnit, etc.).
- Ensure comprehensive test coverage to validate the functionality and performance of developed solutions.
- Perform tasks within planned durations and established deadlines.
- Collaborate with teams to ensure effective communication in support of shared objectives.
- Strong ability to debug and resolve issues/defects.
- Author technical approach and design documentation.
- Collaborate with the offshore team on design discussions and development items.

Minimum Qualifications:
- Experience designing and building solutions using Azure declarative and programmatic approaches.
- Experience with integration middleware and Enterprise Service Bus.
- Experience consuming or sending messages in XML/JSON format to third parties using SOAP and REST APIs (a messaging sketch follows below).
- Hands-on use of various Azure PaaS service SDKs for .NET, .NET Core, SQL, and Web API, including Storage, App Insights, Fluent API, etc.
- Preferably 6+ years of development experience.
- Minimum 4+ years of hands-on development experience with Azure App Services, Azure Serverless, Microservices on Azure, API Management, Event Hub, Function Apps, Web Jobs, Service Bus and Message Queues, Azure Storage, Key Vaults, Application Insights, Azure Jobs, Databricks, Notebooks, PySpark scripting, Runbooks, etc.
- Experience with Azure DevOps (building CI/CD, Azure support, code management and branching), Jenkins, Kubernetes, etc.
- Good knowledge of programming and querying SQL Server databases.
- Experience writing automated test cases with automated testing frameworks (NUnit, etc.).
- Experience with agile development.
- Detail-oriented; a self-motivated learner; able to collaborate with others.
- Excellent written and verbal communication skills.
- Bachelor's and/or master's degree in computer science or a related discipline, or the equivalent in education and work experience.

Azure Certifications:
- Azure Fundamentals (mandatory)
- Azure Administrator Associate (desired)
- Azure Developer Associate (mandatory)

BASIC QUALIFICATIONS: If required and where permitted by applicable law, employees must be fully vaccinated for COVID-19 by their date of hire/placement to be considered for employment. Fully vaccinated means two weeks after receiving the second shot for Pfizer and Moderna, or two weeks after Johnson & Johnson.
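Although this posting is .NET-centric, the Service Bus messaging pattern it names can be sketched in Python with the azure-servicebus SDK. The connection string and queue name are placeholders, and JSON is used for the message body as the posting describes; treat this as a hedged illustration, not the role's actual stack.

```python
# Hypothetical sketch: send a JSON message to an Azure Service Bus queue.
# Connection string and queue name are placeholders for illustration.
import json
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN_STR = "<service-bus-connection-string>"
QUEUE = "orders"

payload = {"orderId": "12345", "status": "created"}

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    with client.get_queue_sender(QUEUE) as sender:
        msg = ServiceBusMessage(
            json.dumps(payload),
            content_type="application/json",  # lets consumers parse as JSON
        )
        sender.send_messages(msg)
        print("message sent")
```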

Posted 1 month ago

Apply

4.0 - 8.0 years

7 - 11 Lacs

Hyderabad, Bengaluru

Hybrid

Job Summary
We are seeking a skilled Azure Data Engineer with 4 years of overall experience, including at least 2 years of hands-on experience with Azure Databricks (must). The ideal candidate will have strong expertise in building and maintaining scalable data pipelines and working across cloud-based data platforms.

Key Responsibilities
- Design, develop, and optimize large-scale data pipelines using Azure Data Factory, Azure Databricks, and Azure Synapse.
- Implement data lake solutions and work with structured and unstructured datasets in Azure Data Lake Storage (ADLS).
- Collaborate with data scientists, analysts, and engineering teams to design and deliver end-to-end data solutions.
- Develop ETL/ELT processes and integrate data from multiple sources.
- Monitor, debug, and optimize workflows for performance and cost-efficiency.
- Ensure data governance, quality, and security best practices are maintained.

Must-Have Skills
- 4+ years of total experience in data engineering.
- 2+ years of experience with Azure Databricks (PySpark, Notebooks, Delta Lake); an upsert sketch follows below.
- Strong experience with Azure Data Factory, Azure SQL, and ADLS.
- Proficiency in writing SQL queries and Python/Scala scripting.
- Understanding of CI/CD pipelines and version control systems (e.g., Git).
- Solid grasp of data modeling and warehousing concepts.
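The Databricks/Delta Lake skills listed above can be illustrated with a hedged upsert sketch using the Delta Lake MERGE API. The table paths and the join key are assumptions for illustration.

```python
# Hypothetical sketch: upsert (MERGE) into a Delta table.
# Table paths and the customer_id join key are illustrative assumptions.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-upsert").getOrCreate()

updates = spark.read.format("delta").load("/lake/staging/customers")
target = DeltaTable.forPath(spark, "/lake/silver/customers")

(target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()     # refresh rows for existing customers
    .whenNotMatchedInsertAll()  # insert brand-new customers
    .execute())
```

MERGE is the idiomatic Delta Lake alternative to a delete-then-append rewrite: it is atomic, so readers never observe a half-applied batch.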

Posted 1 month ago

Apply

3.0 - 8.0 years

3 - 8 Lacs

Bengaluru

Remote

Dynatrace Engineer / Lead (DQL, Notebooks, OneAgent)
Experience: 3 to 15 years
Job Location: Hyderabad / Bangalore / Chennai / Noida / Gurgaon / Pune / Indore / Mumbai / Kolkata

Key Responsibilities:
- Proven experience supporting and managing Dynatrace solutions (an API query sketch follows below).
- Strong background in application performance monitoring and troubleshooting.
- Experience with cloud platforms (AWS, Azure, GCP) and container technologies (Docker, Kubernetes) is a plus.
- ServiceNow integration experience.

Skills:
- Proficiency in Dynatrace configuration and administration.
- Excellent analytical and problem-solving skills.
- Strong communication and customer service skills.
- Familiarity with scripting languages (e.g., Python, shell scripting) is advantageous.
- Understanding of IT infrastructure, networking, and application development processes.

Preferred Certifications:
- Dynatrace Certified Associate
- Dynatrace Certified Professional
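As a small illustration of working with Dynatrace programmatically, the hedged sketch below queries the Metrics v2 REST API with Python. The tenant URL, API token, and metric selector are placeholders, and the exact response shape should be verified against the Dynatrace API documentation.

```python
# Hypothetical sketch: query the Dynatrace Metrics v2 API.
# Tenant URL, API token, and metric selector are placeholders.
import requests

TENANT = "https://<your-environment>.live.dynatrace.com"
TOKEN = "<api-token-with-metrics.read-scope>"

resp = requests.get(
    f"{TENANT}/api/v2/metrics/query",
    headers={"Authorization": f"Api-Token {TOKEN}"},
    params={
        "metricSelector": "builtin:host.cpu.usage",
        "from": "now-2h",       # relative timeframe
        "resolution": "5m",     # one datapoint per 5 minutes
    },
    timeout=30,
)
resp.raise_for_status()

# Each result entry carries a metricId and per-entity data series.
for result in resp.json().get("result", []):
    print(result["metricId"], len(result.get("data", [])), "series")
```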

Posted 1 month ago

Apply

8 - 10 years

13 - 18 Lacs

Chennai

Hybrid

Data Analyst II
Experience: 8 years | Location: Chennai | Budget: up to 23 LPA

JD: This role will be part of the Enterprise Risk data solutions team.
• Strong programming skills in BQ/SQL (a query sketch follows below).
• Strong analytical skills, including the ability to define problems, collect data, establish facts, and draw valid conclusions.
• Familiarity with the big data stack; knowledge of Google Cloud Platform is highly preferred.
• Expertise in data movement techniques and best practices for handling large volumes of data.
• Experience with data warehousing architecture and data modeling best practices.

Candidate Requirements:
• 8 years of experience in data technology is required.
• Strong analytical skills and expertise in Notebooks and BQ are mandatory.
• Bachelor's degree (computer science, information technology, or a similar field).
• Able to drive collaboration within a matrixed environment.
• Strong interpersonal skills, results-oriented, and an appreciation of diversity in teamwork.

Top 3 required skills:
1. GCP BQ, Notebooks, and analytical skill
2. Technical experience and analysis
3. Understanding of risk and control measures from a data governance standpoint
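The BQ/SQL requirement can be illustrated with a hedged google-cloud-bigquery sketch. The project, dataset, and table names are placeholders, and the query uses a named parameter rather than string interpolation.

```python
# Hypothetical sketch: run a parameterized BigQuery query from Python.
# Project, dataset, and table names are placeholders for illustration.
import datetime
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

sql = """
    SELECT risk_category, COUNT(*) AS exposures
    FROM `my-project.risk.exposures`
    WHERE as_of_date = @as_of
    GROUP BY risk_category
    ORDER BY exposures DESC
"""

# Named parameters keep the query safe from injection and easy to reuse.
job_config = bigquery.QueryJobConfig(
    query_parameters=[
        bigquery.ScalarQueryParameter("as_of", "DATE", datetime.date(2024, 1, 31)),
    ]
)

for row in client.query(sql, job_config=job_config).result():
    print(row.risk_category, row.exposures)
```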

Posted 2 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
