6.0 - 8.0 years
8 - 12 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
Job Opening: Senior Data Engineer (Remote, Contract 6 Months)
Remote | Contract Duration: 6 Months | Experience: 6-8 Years

We are hiring a Senior Data Engineer for a 6-month remote contract position. The ideal candidate is highly skilled in building scalable data pipelines and working within the Azure cloud ecosystem, especially Databricks, ADF, and PySpark. You'll work closely with cross-functional teams to deliver enterprise-level data engineering solutions.

Key Responsibilities
Build scalable ETL pipelines and implement robust data solutions in Azure (a minimal sketch follows this posting).
Manage and orchestrate workflows using ADF, Databricks, ADLS Gen2, and Key Vault.
Design and maintain a secure and efficient data lake architecture.
Work with stakeholders to gather data requirements and translate them into technical specs.
Implement CI/CD pipelines for seamless data deployment using Azure DevOps.
Monitor data quality, performance bottlenecks, and scalability issues.
Write clean, organized, reusable PySpark code in an Agile environment.
Document pipelines, architectures, and best practices for reuse.

Must-Have Skills
Experience: 6+ years in Data Engineering
Tech Stack: SQL, Python, PySpark, Spark, Azure Databricks, ADF, ADLS Gen2, Azure DevOps, Key Vault
Core Expertise: Data Warehousing, ETL, Data Pipelines, Data Modelling, Data Governance
Agile, SDLC, Containerization (Docker), Clean coding practices

Good-to-Have Skills
Event Hubs, Logic Apps
Power BI
Strong logic building and competitive programming background

Location: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
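For illustration, here is a minimal PySpark sketch of the kind of ETL this role describes: read raw CSV from ADLS Gen2, apply a basic quality gate, and write Delta. The storage account, container names, and columns are assumptions, not details from the posting, and it presumes a Databricks cluster where the Delta format is available.

```python
# Hypothetical PySpark ETL sketch for an Azure Databricks job; the storage
# account, container, and column names are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Assumed ADLS Gen2 paths; credentials would normally come from a
# Key Vault-backed Databricks secret scope rather than being inlined.
raw_path = "abfss://raw@examplestorage.dfs.core.windows.net/orders/"
curated_path = "abfss://curated@examplestorage.dfs.core.windows.net/orders/"

orders = (
    spark.read.option("header", True).csv(raw_path)
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("order_date", F.to_date("order_ts"))
    .filter(F.col("amount").cast("double") > 0)   # simple quality gate
)

# Delta output, partitioned for downstream query pruning.
orders.write.format("delta").mode("overwrite").partitionBy("order_date").save(curated_path)
```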
Posted 2 months ago
7.0 - 12.0 years
0 Lacs
Pune, Gurugram, Bengaluru
Hybrid
Hi,

This is Sanika from Silverlink Technologies. Silverlink Technologies is a global consulting and technology services company offering industry-specific solutions, strategic outsourcing, and integration services through a unique on-site/off-site delivery model that helps its clients achieve rapid deployment and world-class quality. Silverlink Technologies is a UK-based company headquartered in Mumbai. www.silverlinktechnologies.com

We have an excellent job opportunity for an Azure Data Engineer with Infosys (C2H) at a Pan-India location.

Experience: 7+ years
Mode: C2H
Location: Pan India
Notice Period: Immediate to 15 days

Please share the following details:
Full Name:
Experience:
Relevant Experience:
Current Company:
Notice Period:
Current CTC:
Expected CTC:
Current Work Location:
Relocation:
Abroad Experience:

For any queries, you can reach me at the details below.

Regards,
Sanika Pawar | IT Recruiter
Silverlink Group
Tel: 022 42000682
Email: sanika@silverlinktechnologies.com
Website: www.silverlinktechnologies.com
Posted 2 months ago
7.0 - 10.0 years
2 - 6 Lacs
Pune
Work from Office
Responsibilities:
- Design, develop, and deploy data pipelines using Databricks, including data ingestion, transformation, and loading (ETL) processes.
- Develop and maintain high-quality, scalable, and maintainable Databricks notebooks using Python.
- Work with Delta Lake and other advanced features.
- Leverage Unity Catalog for data governance, access control, and data discovery.
- Develop and optimize data pipelines for performance and cost-effectiveness.
- Integrate with various data sources, including but not limited to databases, cloud storage (Azure Blob Storage, ADLS, Synapse), and APIs.
- Work with Parquet files for data storage and processing.
- Integrate data from Azure Data Factory, Azure Data Lake, and other relevant Azure services.
- Perform data quality checks and validation to ensure data accuracy and integrity (see the sketch after this list).
- Troubleshoot and resolve data pipeline issues effectively.
- Collaborate with data analysts, business analysts, and business stakeholders to understand their data needs and translate them into technical solutions.
- Participate in code reviews and contribute to best practices within the team.
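A hedged sketch of two of the items above: ingesting Parquet into a Unity Catalog-governed Delta table with a simple row-level quality check. The storage path, key column, and the three-level catalog.schema.table name are assumptions, and it presumes a Unity Catalog-enabled Databricks workspace.

```python
# Hypothetical sketch: Parquet ingestion into a Unity Catalog table with a
# basic quality gate. Paths, column names, and table names are assumed.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

df = spark.read.parquet("abfss://landing@examplestore.dfs.core.windows.net/customers/")

# Quality check: abort the load if any primary-key values are null.
null_keys = df.filter(F.col("customer_id").isNull()).count()
if null_keys > 0:
    raise ValueError(f"{null_keys} rows with null customer_id - aborting load")

# Three-level Unity Catalog name (catalog.schema.table), all assumed here.
df.write.format("delta").mode("append").saveAsTable("main.sales.customers")
```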
Posted 2 months ago
6.0 - 9.0 years
27 - 42 Lacs
Chennai
Work from Office
Role: MLOps Engineer
Location: Kochi
Mode of Interview: In Person
Date: 14th June 2025 (Saturday)

Keywords / Skillset
AWS SageMaker, Azure ML Studio, GCP Vertex AI
PySpark, Azure Databricks
MLflow, Kubeflow, Airflow, GitHub Actions, AWS CodePipeline
Kubernetes, AKS, Terraform, FastAPI

Responsibilities
Model deployment, model monitoring, model retraining
Deployment pipeline, inference pipeline, monitoring pipeline, retraining pipeline
Drift detection: data drift, model drift
Experiment tracking (see the MLflow sketch below)
MLOps architecture
REST API publishing

Job Responsibilities:
Research and implement MLOps tools, frameworks, and platforms for our Data Science projects.
Work on a backlog of activities to raise MLOps maturity in the organization.
Proactively introduce a modern, agile, and automated approach to Data Science.
Conduct internal training and presentations about MLOps tools' benefits and usage.

Required experience and qualifications:
Wide experience with Kubernetes.
Experience in operationalization of Data Science projects (MLOps) using at least one of the popular frameworks or platforms (e.g. Kubeflow, AWS SageMaker, Google AI Platform, Azure Machine Learning, DataRobot, DKube).
Good understanding of ML and AI concepts.
Hands-on experience in ML model development.
Proficiency in Python for both ML and automation tasks.
Good knowledge of Bash and the Unix command-line toolkit.
Experience in CI/CD/CT pipeline implementation.
Experience with cloud platforms - preferably AWS - would be an advantage.
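Since experiment tracking with MLflow is called out above, here is a minimal, hedged sketch of logging a run and registering a model. The experiment path, model name, and toy dataset are all assumptions for illustration.

```python
# Hypothetical MLflow experiment-tracking sketch; experiment and model names
# are assumptions. Model registration assumes a registry-backed tracking
# server (e.g. Databricks or an MLflow server with a database store).
import mlflow
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

mlflow.set_experiment("/Shared/churn-demo")  # assumed experiment path
with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100).fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", acc)  # tracked metrics feed drift/retraining reviews
    mlflow.sklearn.log_model(model, "model", registered_model_name="churn-demo")
```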
Posted 2 months ago
5.0 - 10.0 years
0 - 0 Lacs
Pune
Hybrid
Role & responsibilities

Location: Pune (Kharadi)

Below is the JD:
The resource should have basic Azure knowledge.
Good experience in Azure Kubernetes, Azure Databricks, and Azure storage accounts.
Good to have: ADO experience in running Azure pipelines.
Basic knowledge of a scripting language.
Must know Azure monitoring and basic az cli commands.
Should know CI/CD concepts.
Posted 2 months ago
7.0 - 12.0 years
10 - 18 Lacs
Bengaluru
Hybrid
Job Goals
Design and implement resilient data pipelines to ensure data reliability, accuracy, and performance.
Collaborate with cross-functional teams to maintain the quality of production services and smoothly integrate data processes.
Oversee the implementation of common data models and data transformation pipelines, ensuring alignment to standards.
Drive continuous improvement in internal data frameworks and support the hiring process for new Data Engineers.
Regularly engage with collaborators to discuss considerations and manage the impact of changes.
Support architects in shaping the future of the data platform and help land new capabilities into business-as-usual operations.
Identify relevant emerging trends and build compelling cases for adoption, such as tool selection.

Ideal Skills & Capabilities
A minimum of 6 years of experience in a comparable Data Engineer position is required.
Data Engineering Expertise: Proficiency in designing and implementing resilient data pipelines, ensuring data reliability, accuracy, and performance, with practical knowledge of modern cloud data technology stacks (Azure).
Technical Proficiency: Experience with Azure Data Factory and Databricks, and skilled in Python, Apache Spark, or other distributed data programming frameworks.
Operational Knowledge: In-depth understanding of data concepts, data structures, modelling techniques, and provisioning data to support varying consumption needs, along with accomplished ETL/ELT engineering skills.
Automation & DevOps: Experience using DevOps toolchains for managing CI/CD and an automation-first mindset in building solutions, including self-healing and fault-tolerant methods.
Data Management Principles: Practical application of data management principles such as security and data privacy, with experience handling sensitive data through techniques like anonymisation, tokenisation, and pseudo-anonymisation (see the sketch below).
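As a hedged illustration of the pseudonymisation technique mentioned above: replacing a direct identifier with a salted hash keeps records joinable without exposing PII. The column names, sample data, and secret-handling approach are assumptions, not details from the posting.

```python
# Hypothetical pseudonymisation sketch: replace a direct identifier with a
# salted SHA-256 token. Column names and salt handling are assumed.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
salt = "demo-salt"  # in practice, fetched from a secret store such as Key Vault

df = spark.createDataFrame(
    [("alice@example.com", 120.0), ("bob@example.com", 75.5)],
    ["email", "spend"],
)

pseudonymised = (
    df.withColumn("email_token", F.sha2(F.concat(F.lit(salt), F.col("email")), 256))
      .drop("email")  # the raw identifier never leaves this step
)
pseudonymised.show(truncate=False)
```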
Posted 2 months ago
12.0 - 20.0 years
30 - 35 Lacs
Navi Mumbai
Work from Office
Job Title: Big Data Developer - Project Support & Mentorship
Location: Mumbai
Employment Type: Full-Time/Contract
Department: Engineering & Delivery

Position Overview:
We are seeking a skilled Big Data Developer to join our growing delivery team, with a dual focus on hands-on project support and mentoring junior engineers. This role is ideal for a developer who not only thrives in a technical, fast-paced environment but is also passionate about coaching and developing the next generation of talent. You will work on live client projects, provide technical support, contribute to solution delivery, and serve as a go-to technical mentor for less experienced team members.

Key Responsibilities:
Perform hands-on Big Data development work, including coding, testing, troubleshooting, and deploying solutions.
Support ongoing client projects, addressing technical challenges and ensuring smooth delivery.
Collaborate with junior engineers to guide them on coding standards, best practices, debugging, and project execution.
Review code and provide feedback to junior engineers to maintain high-quality and scalable solutions.
Assist in designing and implementing solutions using Hadoop, Spark, Hive, HDFS, and Kafka (a streaming-ingestion sketch follows this posting).
Lead by example in object-oriented development, particularly using Scala and Java.
Translate complex requirements into clear, actionable technical tasks for the team.
Contribute to the development of ETL processes for integrating data from various sources.
Document technical approaches, best practices, and workflows for knowledge sharing within the team.

Required Skills and Qualifications:
8+ years of professional experience in Big Data development and engineering.
Strong hands-on expertise with Hadoop, Hive, HDFS, Apache Spark, and Kafka.
Solid object-oriented development experience with Scala and Java.
Strong SQL skills with experience working with large data sets.
Practical experience designing, installing, configuring, and supporting Big Data clusters.
Deep understanding of ETL processes and data integration strategies.
Proven experience mentoring or supporting junior engineers in a team setting.
Strong problem-solving, troubleshooting, and analytical skills.
Excellent communication and interpersonal skills.

Preferred Qualifications:
Professional certifications in Big Data technologies (Cloudera, Databricks, AWS Big Data Specialty, etc.).
Experience with cloud Big Data platforms (AWS EMR, Azure HDInsight, or GCP Dataproc).
Exposure to Agile or DevOps practices in Big Data project environments.

What We Offer:
Opportunity to work on challenging, high-impact Big Data projects.
Leadership role in shaping and mentoring the next generation of engineers.
Supportive and collaborative team culture.
Flexible working environment.
Competitive compensation and professional growth opportunities.
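A minimal, hedged Spark Structured Streaming sketch of the Kafka-to-HDFS pattern this role touches: consume a topic and land it as Parquet. The broker, topic, and paths are assumed values, and it presumes the spark-sql-kafka connector is on the cluster's classpath.

```python
# Hypothetical streaming-ingestion sketch: Kafka topic -> HDFS Parquet.
# Broker address, topic name, and paths are assumptions; requires the
# spark-sql-kafka connector package on the cluster.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # assumed broker
    .option("subscribe", "orders")                     # assumed topic
    .load()
    .select(F.col("value").cast("string").alias("payload"), F.col("timestamp"))
)

query = (
    events.writeStream.format("parquet")
    .option("path", "hdfs:///data/raw/orders")          # assumed HDFS path
    .option("checkpointLocation", "hdfs:///chk/orders")  # enables exactly-once recovery
    .start()
)
query.awaitTermination()
```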
Posted 2 months ago
5.0 - 7.0 years
15 - 20 Lacs
Pune
Work from Office
Roles and Responsibilities:
You review and analyze structured, semi-structured, and unstructured data sources in detail for quality, completeness, and business value.
You design, architect, implement, and test rapid prototypes that demonstrate the value of the data and present them to diverse audiences.
You participate in early-stage design and feature definition activities.
You are responsible for implementing robust data pipelines using the Microsoft/Databricks stack.
You are responsible for creating reusable and scalable data pipelines.
You are a team player, collaborating with team members across multiple engineering teams to support the integration of proven prototypes into core intelligence products.
You have strong communication skills to effectively convey complex data insights to non-technical stakeholders.

Critical Skills to Possess:
Advanced working knowledge of and experience with relational and non-relational databases.
Advanced working knowledge of and experience with API data providers.
Experience building and optimizing Big Data pipelines, architectures, and datasets.
Strong analytic skills related to working with structured and unstructured datasets.
Hands-on experience in Azure Databricks utilizing Spark to develop ETL pipelines.
Strong proficiency in data analysis, manipulation, and statistical modeling using tools like Spark, Python, Scala, SQL, or similar languages.
Strong experience in Azure Data Lake Storage Gen2, Azure Data Factory, Databricks, Event Hub, and Azure Synapse.
Familiarity with several of the following technologies: Event Hub, Docker, Azure Kubernetes Service, Azure DWH, Azure API, Azure Functions, Power BI, Azure Cognitive Services.
Azure DevOps experience to deploy data pipelines through CI/CD.

Skills: Azure Databricks, Azure Data Factory, Big Data Pipelines, PySpark, Azure Synapse, Azure DevOps, Azure Data Lake Storage Gen2, Event Hub, Azure DWH, Azure API.

Experience: Minimum 5-7 years of practical experience as a Data Engineer, with Azure cloud stack in-production experience.

Preferred Qualifications: BS degree in Computer Science, Engineering, or equivalent experience.
Posted 2 months ago
6.0 - 8.0 years
16 - 22 Lacs
Hyderabad
Work from Office
Role & responsibilities
Hands-on experience in Azure Data Factory, Azure Databricks, and SQL Server databases.
Experience in the Agile working model.
Hands-on experience in programming languages like Python/PySpark.
Hands-on experience and knowledge of Azure DevOps processes for CI/CD.
Build robust and scalable data pipelines using ADF and Databricks for data ingestion, transformation, and loading.
Optimize performance and cost efficiency for Databricks clusters, data pipelines, and storage systems.
Monitor and manage cloud resources to ensure high availability, performance, and scalability.
Excellent communication and teamwork skills.
Work with stakeholders to gather requirements and translate them into technical solutions.
Prepare architecture diagrams and technical documentation for the deployed solutions.
Ensure data quality and integrity throughout the data lifecycle.
Work across all phases of the SDLC and use software engineering principles to build scaled solutions.
Good to have: experience/knowledge in migrating SSIS packages to Azure.
Good to have: experience/knowledge in Extract, Transform & Load (ETL) development using SQL Server Integration Services (SSIS) and SQL Server Reporting Services (SSRS).
Knowledge of data warehouse concepts like star schema, snowflake, dimension, and fact tables.
Experience in programming tasks - stored procedures, triggers, subqueries, joins - using SQL Server 2008/2012/2016/2019 with T-SQL.

Preferred candidate profile
6+ years of enterprise experience in Azure Data Factory and Azure Databricks with:
Hands-on experience in programming languages like Python/PySpark and T-SQL.
Hands-on experience and knowledge of Azure DevOps processes for CI/CD.
Good data warehouse skills.
Good dimensional modeling skills.
Design source-target mapping definitions and associated transformations, jobs, and sequences.
Good to have: experience/knowledge of SSIS, SSRS, and SSAS.
Strong SQL programming and stored procedure development skills.
Strong source system analysis and data analysis skills.
Good to have: experience/knowledge of C#.NET functions as part of custom transformations.
Prepare architecture diagrams and technical documentation for the deployed solutions.
Ability to work effectively both independently and as part of a team, providing direction and mentorship to others as needed.
Demonstrated project discipline and experience. Must be organized, focused, and driven toward established deliverable dates. Must be able to design solutions and form/drive related plans.
Excellent verbal and written English communication skills. Demonstrated proficiency in technical writing is required.
Posted 2 months ago
7.0 - 9.0 years
10 - 20 Lacs
Bengaluru
Hybrid
Overall, 7 to 9 years of experience in cloud data and analytics platforms such as AWS, Azure, or GCP, including:
- 3+ years of experience with Azure cloud analytical tools (a must)
- 5+ years of experience working with data & analytics concepts such as SQL, ETL, ELT, reporting and report building, data visualization, data lineage, data importing & exporting, and data warehousing
- 3+ years of experience working with general IT concepts such as integrations, encryption, authentication & authorization, batch processing, real-time processing, CI/CD, and automation

Advanced knowledge of cloud technologies and services, specifically Azure data analytics tools:
- Azure Functions (Compute)
- Azure Blob Storage (Storage)
- Azure Cosmos DB (Databases)
- Azure Synapse Analytics (Databases)
- Azure Data Factory (Analytics)
- Azure Synapse Serverless SQL Pools (Analytics)
Posted 2 months ago
5.0 - 10.0 years
13 - 23 Lacs
Pune, Bengaluru, Delhi / NCR
Hybrid
Role & responsibilities

Immediate joiners preferred. Please complete the mandatory questions below to proceed further.

Job Summary:
The Azure Data Engineer (Standard) is a senior-level role responsible for designing and implementing complex data processing solutions on the Azure platform. They work with other data engineers and architects to develop scalable, reliable, and efficient data pipelines that meet business requirements.

Core Skills:
- Proficiency in Azure data services such as Azure SQL Database, Azure Cosmos DB, and Azure Data Lake Storage.
- Experience with ETL (Extract, Transform, Load) processes and data integration.
- Strong SQL and database querying skills.
- Familiarity with data modeling and database design.
Posted 2 months ago
4.0 - 9.0 years
13 - 23 Lacs
Chennai
Hybrid
The candidate's overall professional experience should be at least 6 years, with a maximum of 9 years. The candidate must understand the usage of data engineering tools for solving business problems and help clients in their data journey. Must have knowledge of emerging technologies used in companies for data management, including data governance, data quality, security, data integration, processing, and provisioning. The candidate must possess the soft skills required to work with teams and lead medium to large teams. The candidate should be comfortable taking leadership roles in client projects, pre-sales/consulting, solutioning, business development conversations, and execution of data engineering projects.

Role Description:
Develop Modern Data Warehouse solutions using Databricks and the Azure stack.
Provide forward-thinking solutions in the data engineering and analytics space.
Collaborate with DW/BI leads to understand new ETL pipeline development requirements.
Triage issues to find gaps in existing pipelines and fix them.
Work with the business to understand reporting-layer needs and develop data models to fulfill them.
Drive technical discussions with client architects and team members.
Orchestrate data pipelines via the Airflow scheduler.

Skills and Qualifications:
Bachelor's and/or master's degree in computer science or equivalent experience.
Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture.
Hands-on experience in SQL, Python, and Spark (PySpark).
Experience in building ETL / data warehouse transformation processes.
Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, and troubleshooting.
Databricks Certified Data Engineer Associate/Professional certification (desirable).
Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects.
Experience working in Agile methodology.
Strong verbal and written communication skills.
Strong analytical and problem-solving skills with high attention to detail.
Posted 2 months ago
9.0 - 14.0 years
22 - 27 Lacs
Bengaluru
Work from Office
Locations: India, Bangalore | Time Type: Full time | Posted: 4 days ago | Job Requisition ID: JR0270485

Job Details:

About The Role:
We seek an experienced Business Systems Analyst to join our Supply Chain IT team. The primary focus of this position is to enable, transform, and deliver Supply Planning data solutions for Intel's key business groups. In this position, you will collaborate with stakeholders from various business domains, including the business operations team, Master Data, Supply Planning, Finance, and other IT teams. The ideal candidate should possess a combination of business process knowledge, data and analytics skills, and the acumen to enable process transformation by leveraging technology.

Responsibilities include but are not limited to:
- Collaborate with stakeholders to establish, prioritize, implement, maintain, improve, and discontinue process capabilities.
- Develop detailed functional specifications and work closely with business stakeholders and the Blue Yonder team.
- Design new data pipelines and maintain existing ones between SAP, the data warehouse, the Planning Data Hub, and the Blue Yonder landscape.
- Identify business requirements and system specifications that meet user data needs, map them to system capabilities, and recommend technical solutions.
- Partner with SAP Master Data, Order to Cash (O2C), Procure to Pay (P2P), and Supply Planning teams to understand data needs and capture them as requirements for implementing pipelines in Snowflake.
- Participate in all phases of product testing, from unit testing to user acceptance testing on the IT front.
- Ensure alignment of transformation efforts with relevant enterprise-level initiatives.
- Maintain and build stakeholder relationships while effectively communicating across teams.
- Estimate effort and schedules for major projects, driving the team to meet timelines while ensuring quality.

Qualifications:

Minimum Qualifications:
Bachelor's and/or master's degree and 9+ years of experience in:
- Supply Planning - SOP and SOE processes.
- Inventory Management or Production Planning business processes.
- Designing and implementing data solutions for enterprise planning software such as Blue Yonder ESP, IBP, or equivalent.
- A background in semiconductor manufacturing and high-level SQL knowledge.

Preferred Qualifications:
- Designing data solutions on Snowflake, Azure Databricks, or similar environments.
- Knowledge of Order to Cash and Procure to Pay E2E processes.

Job Type: Experienced Hire
Shift: Shift 1 (India)
Primary Location: India, Bangalore
Additional Locations:

Business group: Intel's Information Technology Group (IT) designs, deploys and supports the information technology architecture and hardware/software applications for Intel. This includes the LAN, WAN, telephony, data centers, client PCs, backup and restore, and enterprise applications. IT is also responsible for e-Commerce development, data hosting and delivery of Web content and services.

Posting Statement: All qualified applicants will receive consideration for employment without regard to race, color, religion, religious creed, sex, national origin, ancestry, age, physical or mental disability, medical condition, genetic information, military and veteran status, marital status, pregnancy, gender, gender expression, gender identity, sexual orientation, or any other characteristic protected by local law, regulation, or ordinance.
Position of Trust: N/A

Work Model for this Role: This role will be eligible for our hybrid work model, which allows employees to split their time between working on-site at their assigned Intel site and off-site.
Posted 2 months ago
5.0 - 6.0 years
8 - 13 Lacs
Hyderabad
Work from Office
About the Role
We are seeking a highly skilled and experienced Senior Azure Databricks Engineer to join our dynamic data engineering team. As a Senior Azure Databricks Engineer, you will play a critical role in designing, developing, and implementing data solutions on the Azure Databricks platform. You will be responsible for building and maintaining high-performance data pipelines, transforming raw data into valuable insights, and ensuring data quality and reliability.

Key Responsibilities
- Design, develop, and implement data pipelines and ETL/ELT processes using Azure Databricks.
- Develop and optimize Spark applications using Scala or Python for data ingestion, transformation, and analysis.
- Leverage Delta Lake for data versioning, ACID transactions, and data sharing.
- Utilize Delta Live Tables for building robust and reliable data pipelines.
- Design and implement data models for data warehousing and data lakes.
- Optimize data structures and schemas for performance and query efficiency.
- Ensure data quality and integrity throughout the data lifecycle.
- Integrate Azure Databricks with other Azure services (e.g., Azure Data Factory, Azure Synapse Analytics, Azure Blob Storage).
- Leverage cloud-based data services to enhance data processing and analysis capabilities.

Performance Optimization & Troubleshooting
- Monitor and analyze data pipeline performance.
- Identify and troubleshoot performance bottlenecks.
- Optimize data processing jobs for speed and efficiency.
- Collaborate effectively with data engineers, data scientists, data analysts, and other stakeholders.
- Communicate technical information clearly and concisely.
- Participate in code reviews and contribute to the improvement of development processes.

Qualifications (Essential)
- 5+ years of experience in data engineering, with at least 2 years of hands-on experience with Azure Databricks.
- Strong proficiency in Python and SQL.
- Expertise in Apache Spark and its core concepts (RDDs, DataFrames, Datasets).
- In-depth knowledge of Delta Lake and its features (e.g., ACID transactions, time travel; see the sketch after this posting).
- Experience with data warehousing concepts and ETL/ELT processes.
- Strong analytical and problem-solving skills.
- Excellent communication and interpersonal skills.
- Bachelor's degree in Computer Science, Computer Engineering, or a related field.
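A hedged sketch of the Delta Lake time-travel feature named above: reading an earlier version of a table alongside its current state. The table path and version number are assumed examples.

```python
# Hypothetical Delta Lake time-travel sketch: compare the current state of a
# table with an earlier version. The table path is an assumed example.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
path = "/mnt/curated/orders"  # assumed Delta table location

current = spark.read.format("delta").load(path)
yesterday = (
    spark.read.format("delta")
    .option("versionAsOf", 0)  # or timestampAsOf for a point-in-time read
    .load(path)
)
print("rows now:", current.count(), "| rows at v0:", yesterday.count())
```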
Posted 2 months ago
5.0 - 9.0 years
10 - 20 Lacs
Pune, Gurugram, Bengaluru
Work from Office
Job Description:
Expertise in GitLab Actions and Git workflows
Databricks administration experience
Strong scripting skills (Shell, Python, Bash)
Experience with Jira integration in CI/CD workflows
Familiarity with DORA metrics and performance tracking (a small metrics sketch follows this posting)
Proficient with SonarQube and JFrog Artifactory
Deep understanding of branching and merging strategies
Strong CI/CD and automated testing integration skills
Git and Jira integration
Infrastructure as Code experience (Terraform, Ansible)
Exposure to cloud platforms (Azure/AWS)
Familiarity with monitoring/logging (Dynatrace, Grafana, Prometheus, ELK)

Roles & Responsibilities
Build and manage CI/CD pipelines using GitLab Actions for seamless integration and delivery.
Administer Databricks workspaces, including access control, cluster management, and job orchestration.
Automate infrastructure and deployment tasks using scripts (Shell, Python, Bash, etc.).
Implement source control best practices, including branching, merging, and tagging.
Integrate Jira with CI/CD pipelines to automate ticket updates and traceability.
Track and improve DORA metrics (Deployment Frequency, Lead Time for Changes, Mean Time to Restore, Change Failure Rate).
Manage code quality using SonarQube and artifact lifecycle using JFrog Artifactory.
Ensure end-to-end testing is integrated into the delivery pipelines.
Collaborate across Dev, QA, and Ops teams to streamline DevOps practices.
Troubleshoot build and deployment issues and ensure high system reliability.
Maintain up-to-date documentation and contribute to DevOps process improvements.
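For the DORA-metrics item above, here is a minimal Python sketch computing deployment frequency and mean lead time for changes from a made-up list of deploy records. In practice these records would come from GitLab/Jira APIs; the data shape here is purely an assumption.

```python
# Hypothetical DORA-metrics sketch over an invented list of deploy records.
from datetime import datetime, timedelta
from statistics import mean

deploys = [
    {"committed": datetime(2024, 5, 1, 9, 0), "deployed": datetime(2024, 5, 1, 15, 0)},
    {"committed": datetime(2024, 5, 2, 10, 0), "deployed": datetime(2024, 5, 3, 11, 0)},
    {"committed": datetime(2024, 5, 5, 8, 0), "deployed": datetime(2024, 5, 5, 9, 30)},
]

window_days = 7  # observation window for deployment frequency
freq = len(deploys) / window_days
lead_times_h = [(d["deployed"] - d["committed"]) / timedelta(hours=1) for d in deploys]

print(f"deployment frequency: {freq:.2f}/day")
print(f"mean lead time for changes: {mean(lead_times_h):.1f} h")
```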
Posted 2 months ago
3.0 - 4.0 years
10 - 20 Lacs
Hyderabad
Remote
Experience Required: 3 to 4 years
Mode of work: Remote
Skills Required: Azure Databricks, Azure Data Factory, PySpark, Python, SQL, Spark
Notice Period: Immediate joiners / permanent/contract role (can join by June 15th)

3 to 4+ years of experience with Big Data technologies.
Experience with Databricks is a must, along with Python scripting and SQL knowledge.
Strong knowledge of and experience with the Microsoft Azure cloud platform.
Proficiency in SQL and experience with SQL-based database systems.
Experience with batch and data streaming.
Hands-on experience with Azure data services, such as Azure SQL Database, Azure Data Lake, and Azure Blob Storage.
Experience using Azure Databricks in real-world scenarios is preferred.
Experience with data integration and ETL (Extract, Transform, Load) processes.
Strong analytical and problem-solving skills.
Good understanding of data engineering principles and best practices.
Experience with programming languages such as PySpark/Python.
Relevant certifications in Azure data services or data engineering are a plus.

Interested candidates can share their resume with, or refer a friend to, Pavithra.tr@enabledata.com for a quick response.
Posted 2 months ago
3.0 - 7.0 years
20 - 25 Lacs
Bengaluru
Work from Office
Key Skills: Azure Synapse, Azure Databricks, Azure, Azure DevOps, Azure AI, Azure API, Azure AD, PL/SQL

Roles and Responsibilities:
Design and Develop Data Pipelines: Build and maintain scalable data pipelines using Azure Data Factory, ensuring efficient and reliable data movement and transformation.
File-Based Data Management: Handle data ingestion and management from various file sources, including CSV, JSON, and Parquet formats, ensuring data accuracy and consistency.
ETL Implementation: Implement and optimize ETL (Extract, Transform, Load) processes using tools such as Azure Data Factory, Azure SQL Database, and Azure Synapse Analytics.
Cloud Storage Management: Work with Azure Data Lake Storage to manage and utilize cloud storage solutions, ensuring data is securely stored and easily accessible.
Automation with Data Factory: Leverage Azure Data Factory's automation capabilities to schedule and monitor data workflows, ensuring timely execution and error-free operations (a triggering sketch follows this posting).
Performance Monitoring: Continuously monitor and optimize data pipeline performance, troubleshoot issues, and implement best practices to enhance efficiency.
Team Collaboration: Collaborate with Technical Architects, Business Analysts, and other engineers to build scalable and reliable end-to-end data solutions for reporting and analytics.
DevOps Framework: Define and implement a DevOps framework using CI/CD pipelines.
SQL Development: Write efficient, clean, and well-documented SQL queries for data extraction, manipulation, and analysis.
SQL Performance Optimization: Optimize the performance of SQL-based queries, stored procedures, and jobs in Azure environments.
Data Security & Compliance: Implement data security best practices and ensure compliance with data privacy regulations (HIPAA, etc.).
Technical Leadership: Provide technical leadership and mentoring to junior engineers and team members.
Technology Adoption: Stay current with emerging Azure technologies and trends, recommending improvements to existing systems and solutions.

Skills Required:
Strong expertise in data analytics for analyzing and interpreting large datasets.
Proficiency in Azure Boards/GitHub for managing project tasks and source code version control.
Extensive experience with Azure Data Factory for building and managing scalable data pipelines.
In-depth knowledge of Azure Data Lake for managing cloud storage solutions and data access.
Hands-on experience with Azure Synapse for data integration and analytics solutions.
Proficiency in Azure DevOps for implementing CI/CD pipelines and automating deployments.

Education: Bachelor's or Master's degree in Computer Science, Information Systems, Data Engineering, or a related technical field.
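As a hedged sketch of triggering and polling an ADF pipeline run programmatically (one way to automate the scheduling/monitoring duties above), using the azure-identity and azure-mgmt-datafactory packages. Every resource name, the pipeline name, and the parameter are assumptions, not details from the posting.

```python
# Hypothetical sketch: trigger and poll an Azure Data Factory pipeline run
# via the Azure SDK for Python. All resource names below are assumed.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

run = client.pipelines.create_run(
    resource_group_name="rg-data",         # assumed resource group
    factory_name="adf-prod",               # assumed factory
    pipeline_name="ingest_daily_files",    # assumed pipeline
    parameters={"load_date": "2024-05-01"},
)

# Poll the run status; a real monitor would loop with a backoff.
status = client.pipeline_runs.get("rg-data", "adf-prod", run.run_id).status
print("pipeline run", run.run_id, "status:", status)
```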
Posted 2 months ago
5.0 - 10.0 years
5 - 13 Lacs
Chennai
Work from Office
Roles and Responsibilities
Design, develop, test, deploy, and maintain Azure Data Factory (ADF) pipelines for data integration.
Collaborate with cross-functional teams to gather requirements and design solutions using ADF.
Develop complex data transformations using SQL Server Integration Services (SSIS), DDL/DML statements, and other tools.
Troubleshoot issues related to pipeline failures or errors in the pipeline execution process.
Optimize pipeline performance by analyzing logs, identifying bottlenecks, and implementing improvements.
Posted 2 months ago
6.0 - 11.0 years
4 - 8 Lacs
Kolkata
Work from Office
SET 1:
Must have knowledge of Azure Data Lake, Azure Functions, Azure Databricks, Azure Data Factory, and PostgreSQL.
Working knowledge of Azure DevOps and Git flow would be an added advantage.

(OR) SET 2:
Must have working knowledge of AWS Kinesis, AWS EMR, AWS Glue, AWS RDS, AWS Athena, and AWS Redshift.

Should have demonstrable knowledge and expertise in working with time-series data (see the pandas sketch after this posting).
Working knowledge of delivering data engineering / data science projects in Industry 4.0 is an added advantage.
Should have knowledge of Palantir.
Strong problem-solving skills with an emphasis on sustainable and reusable development.
Experience using statistical computing languages to manipulate data and draw insights from large data sets: Python/PySpark, Pandas, NumPy, seaborn/matplotlib. Knowledge of Streamlit.io is a plus.
Familiarity with Scala, GoLang, or Java would be an added advantage.
Experience with big data tools: Hadoop, Spark, Kafka, etc.
Experience with relational databases such as Microsoft SQL Server, MySQL, PostgreSQL, and Oracle, and NoSQL databases such as Hadoop, Cassandra, and MongoDB.
Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
Experience building and optimizing big data pipelines, architectures, and data sets.
Strong analytic skills related to working with unstructured datasets.

Primary Skills
Provide innovative solutions to the data engineering problems faced in the project and solve them with technically superior code and skills.
Where possible, document the process of choosing technology or usage of integration patterns, and help in creating a knowledge management artefact that can be used for other similar areas.
Create and apply best practices in delivering the project with clean code.
Work innovatively and with a sense of proactiveness in fulfilling the project needs.

Additional Information:
Reporting to: Director - Intelligent Insights and Data Strategy
Travel: Must be willing to be deployed at client locations anywhere in the world for long and short terms, and should be flexible to travel on shorter durations within India and abroad.
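A small, hedged pandas sketch of routine time-series handling in Industry 4.0 work: resample synthetic sensor readings to hourly means and flag hours with no data (e.g. a sensor outage). The data and column names are invented for illustration.

```python
# Hypothetical time-series sketch: hourly resampling and gap detection
# over synthetic sensor data.
import numpy as np
import pandas as pd

idx = pd.date_range("2024-05-01", periods=360, freq="min")  # 6 hours of minutes
readings = pd.DataFrame({"temp_c": np.random.normal(70, 2, len(idx))}, index=idx)
readings = readings.drop(readings.index[100:220])  # simulate a 2-hour outage

hourly = readings["temp_c"].resample("h").mean()
gaps = hourly[hourly.isna()]  # hours with no observations at all

print(hourly.head())
print("hours with no data:", list(gaps.index))
```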
Posted 2 months ago
6.0 - 9.0 years
27 - 42 Lacs
Kochi
Work from Office
Skill: Databricks
Experience: 5 to 14 years
Location: Kochi (walk-in on 14th June)

Design, develop, and maintain scalable and efficient data pipelines using the Azure Databricks platform.
Have work experience with Databricks Unity Catalog.
Collaborate with data scientists and analysts to integrate machine learning models into production pipelines.
Implement data quality checks and ensure data integrity throughout the data ingestion and transformation processes.
Optimize cluster performance and scalability to handle large volumes of data processing.
Troubleshoot and resolve issues related to data pipelines, clusters, and data processing jobs.
Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
Conduct performance tuning and optimization for Spark jobs on Azure Databricks.
Provide technical guidance and mentorship to junior data engineers.
Posted 2 months ago
5.0 - 10.0 years
15 - 30 Lacs
Ahmedabad
Work from Office
Role & responsibilities

Senior Data Engineer Job Description
GRUBBRR is seeking a mid/senior-level data engineer to help build our next-generation analytical and big data solutions. We strive to build cloud-native, consumer-first, UX-friendly kiosks and online applications across a variety of verticals supporting enterprise clients and small businesses. Behind our consumer applications, we integrate and interact with a deep stack of payment, loyalty, and POS systems. In addition, we provide actionable insights to enable our customers to make informed decisions. Our challenge and goal is to provide a frictionless experience for our end consumers and easy-to-use, smart management capabilities for our customers to maximize their ROIs.

Responsibilities:
Develop and maintain data pipelines
Ensure data quality and accuracy
Design, develop and maintain large, complex sets of data that meet non-functional and functional business requirements
Build the infrastructure required for optimal extraction, transformation and loading of data from various data sources using cloud technologies
Build analytical tools that utilize the data pipelines

Skills:
Solid experience with SQL & NoSQL
Strong data modeling skills for data lakes, data warehouses, and data marts, including dimensional modeling and star schemas
Proficient with Azure Data Factory data integration technology
Knowledge of Hadoop or similar Big Data technology
Knowledge of Apache Kafka, Spark, Hive or equivalent
Knowledge of Azure or AWS analytics technologies

Qualifications:
BS in Computer Science, Applied Mathematics or related fields (MS preferred)
At least 8 years of experience working with OLAPs
Microsoft Azure or AWS Data Engineer certification a plus
Posted 2 months ago
6.0 - 9.0 years
40 - 45 Lacs
Pune
Work from Office
6+ years of experience in data engineering with a focus on Azure cloud technologies.
Strong expertise in Azure Data Factory, Databricks, ADLS, and Power BI.
Proficiency in SQL, Python, and Spark for data processing and transformation.
Experience with IoT data ingestion and processing, handling high-volume, real-time data streams.
Strong understanding of data modeling, Lakehouse architecture, and medallion frameworks (a bronze-to-silver sketch follows this posting).
Experience in building and optimizing scalable ETL/ELT processes.
Knowledge of data governance, security, and compliance frameworks.
Experience with monitoring, logging, and performance tuning of data workflows.
Strong problem-solving and analytical skills with a platform-first mindset.
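A hedged sketch of one step in the medallion architecture mentioned above: promoting raw IoT events from a bronze Delta table to a cleaned, deduplicated silver table. The paths and the event schema are assumed examples, and it presumes Delta is available on the cluster.

```python
# Hypothetical medallion-architecture sketch: bronze -> silver promotion
# for IoT events. Paths and column names are assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

bronze = spark.read.format("delta").load("/mnt/lake/bronze/iot_events")  # assumed path

silver = (
    bronze
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .filter(F.col("device_id").isNotNull())        # drop unidentifiable rows
    .dropDuplicates(["device_id", "event_ts"])     # idempotent on re-ingestion
)

silver.write.format("delta").mode("overwrite").save("/mnt/lake/silver/iot_events")
```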
Posted 2 months ago
5.0 - 10.0 years
11 - 21 Lacs
Hyderabad
Remote
Azure Data Engineer/Lead/Architect (5-20 years) (Pan-India locations)
Job Location: Hyderabad / Bangalore / Chennai / Kolkata / Noida / Gurgaon / Pune / Indore / Mumbai

5-20 years of relevant hands-on development experience, including 4+ years in an Azure Data Engineering role.
Proficient in Azure technologies such as ADB (Azure Databricks), ADF, SQL (capable of writing complex SQL queries), PySpark, Python, Synapse, Delta Tables, and Unity Catalog.
Hands-on in Python, PySpark, or Spark SQL.
Hands-on in Azure Analytics and DevOps.
Taking part in Proof of Concepts (POCs) and pilot solution preparation.
Ability to conduct data profiling, cataloguing, and mapping for technical design and construction of technical data flows.
Experience in business process mapping of data and analytics solutions.
Posted 2 months ago
4.0 - 9.0 years
6 - 14 Lacs
Hyderabad
Remote
Job Description
Job Location: Hyderabad / Bangalore / Chennai / Kolkata / Noida / Gurgaon / Pune / Indore / Mumbai
Preferred: Hyderabad

At least 4+ years of relevant hands-on development experience in an Azure Data Engineering role.
Proficient in Azure technologies such as ADB (Azure Databricks), ADF, SQL (capable of writing complex SQL queries), PySpark, Python, Synapse, Delta Tables, and Unity Catalog.
Hands-on in Python, PySpark, or Spark SQL.
Hands-on in Azure Analytics and DevOps.
Taking part in Proof of Concepts (POCs) and pilot solution preparation.
Ability to conduct data profiling, cataloguing, and mapping for technical design and construction of technical data flows.
Experience in business process mapping of data and analytics solutions.
Posted 2 months ago
8.0 - 13.0 years
16 - 27 Lacs
Hyderabad
Remote
Job Location: Hyderabad / Bangalore / Chennai / Kolkata / Noida / Gurgaon / Pune / Indore / Mumbai
Preferred: Hyderabad

At least 5+ years of relevant hands-on development experience in an Azure Data Engineering role.
Proficient in Azure technologies such as ADB (Azure Databricks), ADF, SQL (capable of writing complex SQL queries), PySpark, Python, Synapse, Delta Tables, and Unity Catalog.
Hands-on in Python, PySpark, or Spark SQL.
Hands-on in Azure Analytics and DevOps.
Taking part in Proof of Concepts (POCs) and pilot solution preparation.
Ability to conduct data profiling, cataloguing, and mapping for technical design and construction of technical data flows.
Experience in business process mapping of data and analytics solutions.
Posted 2 months ago