4.0 - 8.0 years
6 - 10 Lacs
Hyderabad
Work from Office
Azure backend expert (Strong SC or Specialist Senior)
- Hands-on experience working with ADLS, ADF, and Azure SQL DW.
- Minimum 3 years of working experience delivering Azure projects.

Must Have:
- 3 to 8 years of experience designing, developing, and deploying ETL processes on Databricks to support data integration and transformation.
- Ability to optimize and tune Databricks jobs for performance and scalability.
- Experience with Scala and/or Python programming languages.
- Proficiency in SQL for querying and managing data.
- Expertise in ETL (Extract, Transform, Load) processes.
- Knowledge of data modeling and data warehousing concepts.
- Experience implementing best practices for data pipelines, including monitoring, logging, and error handling.
- Excellent problem-solving skills and attention to detail.
- Excellent written and verbal communication skills.
- Strong analytical and problem-solving abilities.
- Experience with version control systems (e.g., Git) to manage and track changes to the codebase.
- Ability to document technical designs, processes, and procedures related to Databricks development.
- Willingness to stay current with Databricks platform updates and recommend improvements to existing processes.

Good to Have:
- Agile delivery experience.
- Experience with cloud services, particularly Azure (Azure Databricks), AWS (AWS Glue, EMR), or Google Cloud Platform (GCP).
- Knowledge of Agile and Scrum software development methodologies.
- Understanding of data lake architectures.
- Familiarity with tools like Apache NiFi, Talend, or Informatica.
- Skills in designing and implementing data models.
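For illustration, a minimal sketch of the kind of Databricks ETL step this posting describes, with the basic logging and error handling it asks for; the paths, table, and column names are hypothetical:

```python
import logging

from pyspark.sql import SparkSession, functions as F

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("etl")

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

def transform_orders(source_path: str, target_table: str) -> None:
    """Load raw orders, apply basic cleansing, and write a Delta table."""
    try:
        raw = spark.read.format("parquet").load(source_path)
        cleaned = (
            raw.dropDuplicates(["order_id"])                  # de-duplicate on the key
               .filter(F.col("amount") > 0)                   # drop invalid rows
               .withColumn("load_ts", F.current_timestamp())  # audit column
        )
        cleaned.write.format("delta").mode("overwrite").saveAsTable(target_table)
        log.info("Wrote %d rows to %s", cleaned.count(), target_table)
    except Exception:
        log.exception("ETL step failed for %s", source_path)
        raise

transform_orders("/mnt/raw/orders", "analytics.orders_clean")
```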
Posted 1 month ago
5.0 - 10.0 years
7 - 12 Lacs
Kochi, Hyderabad, Bengaluru
Work from Office
- Design, build, and maintain scalable and efficient data pipelines using Azure services such as Azure Data Factory (ADF), Azure Databricks, and Azure Synapse Analytics.
- Develop and optimize ETL/ELT workflows for ingestion, cleansing, and transformation.

Required Candidate Profile
- Strong understanding of data warehouse architecture, data lakes, and big data frameworks.
- Only candidates with at least 5 years of experience should apply.
Posted 1 month ago
4.0 - 9.0 years
15 - 30 Lacs
Gurugram, Chennai
Work from Office
Role & responsibilities • Assume ownership of Data Engineering projects from inception to completion. Implement fully operational Unified Data Platform solutions in production environments using technologies like Databricks, Snowflake, Azure Synapse etc. Showcase proficiency in Data Modelling and Data Architecture Utilize modern data transformation tools such as DBT (Data Build Tool) to streamline and automate data pipelines (nice to have). Implement DevOps practices for continuous integration and deployment (CI/CD) to ensure robust and scalable data solutions (nice to have). Maintain code versioning and collaborate effectively within a version-controlled environment. Familiarity with Data Ingestion & Orchestration tools such as Azure Data Factory, Azure Synapse, AWS Glue etc. Set up processes for data management, templatized analytical modules/deliverables. Continuously improve processes with focus on automation and partner with different teams to develop system capability. Proactively seek opportunities to help and mentor team members by sharing knowledge and expanding skills. Ability to communicate effectively with internal and external stakeholders. Coordinating with cross-functional team members to make sure high quality in deliverables with no impact on timelines Preferred candidate profile • Expertise in computer programming languages such as: Python and Advance SQL • Should have working knowledge of Data Warehousing, Data Marts and Business Intelligence with hands-on experience implementing fully operational data warehouse solutions in production environments. • 3+ years of Working Knowledge of Big data tools (Hive, Spark) along with ETL tools and cloud platforms. • 3+ years of relevant experience in either Snowflake or Databricks. Certification in Snowflake or Databricks would be highly recommended. • Proficient in Data Modelling and ELT techniques. • Experienced with any of the ETL/Data Pipeline Orchestration tools such as Azure Data Factory, AWS Glue, Azure Synapse, Airflow etc. • Experience working with ingesting data from different data sources such as RDBMS, ERP Systems, APIs etc. • Knowledge of modern data transformation tools, particularly DBT (Data Build Tool), for streamlined and automated data pipelines (nice to have). • Experience in implementing DevOps practices for CI/CD to ensure robust and scalable data solutions (nice to have). • Proficient in maintaining code versioning and effective collaboration within a versioncontrolled environment. • Ability to work effectively as an individual contributor and in small teams. Should have experience mentoring junior team members. • Excellent problem-solving and troubleshooting ability with experience of supporting and working with cross functional teams in a dynamic environment. • Strong verbal and written communication skills with ability to communicate effectively, articulate results and issues to internal and client team.
Posted 1 month ago
3.0 - 7.0 years
15 - 30 Lacs
Gurugram
Work from Office
Who We Are
Konrad is a next generation digital consultancy. We are dedicated to solving complex business problems for our global clients with creative and forward-thinking solutions. Our employees enjoy a culture built on innovation and a commitment to creating best-in-class digital products in use by hundreds of millions of consumers around the world. We hire exceptionally smart, analytical, and hard-working people who are lifelong learners.

About The Role
As a Data Engineer you'll be tasked with designing, building, and maintaining scalable data platforms and pipelines. Your deep knowledge of data platforms such as Azure Fabric, Databricks, and Snowflake will be essential as you collaborate closely with data analysts, scientists, and other engineers to ensure reliable, secure, and efficient data solutions.

What You'll Do
- Design, build, and manage robust data pipelines and data architectures.
- Implement solutions leveraging platforms such as Azure Fabric, Databricks, and Snowflake.
- Optimize data workflows, ensuring reliability, scalability, and performance.
- Collaborate with internal stakeholders to understand data needs and deliver tailored solutions.
- Ensure data security and compliance with industry standards and best practices.
- Perform data modelling, data extraction, transformation, and loading (ETL/ELT).
- Identify and recommend innovative solutions to enhance data quality and analytics capabilities.

Qualifications
- Bachelor's degree or higher in Computer Science, Data Engineering, Information Technology, or a related field.
- At least 3 years of professional experience as a Data Engineer or similar role.
- Proficiency in data platforms such as Azure Fabric, Databricks, and Snowflake.
- Hands-on experience with data pipeline tools, cloud services, and storage solutions.
- Strong programming skills in SQL, Python, or related languages.
- Experience with big data technologies and concepts (Spark, Hadoop, Kafka).
- Excellent analytical, troubleshooting, and problem-solving skills.
- Ability to communicate technical concepts clearly to non-technical stakeholders.
- Advanced English.

Nice to Have
- Certifications related to Azure Data Engineering, Databricks, or Snowflake.
- Familiarity with DevOps practices and CI/CD pipelines.

Perks and Benefits
- Comprehensive Health & Wellness Benefits Package
- Socials, Outings & Retreats
- Culture of Learning & Development
- Flexible Working Hours
- Work from Home Flexibility
- Service Recognition Programs

Konrad is committed to maintaining a diverse work environment and is proud to be an equal opportunity employer. All qualified applicants, regardless of race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status will receive consideration for employment. If you have any accessibility requirements or concerns regarding the hiring process or employment with us, please notify us so we can provide suitable accommodation. While we sincerely appreciate all applications, only those candidates selected for an interview will be contacted.
Posted 1 month ago
3.0 - 8.0 years
6 - 18 Lacs
Kochi
Work from Office
Looking for a Data Engineer with 3+ yrs exp in Azure Data Factory, Synapse, Data Lake, Databricks, SQL, Python, Spark, CI/CD. Preferred: DP-203 cert, real-time data tools (Kafka, Stream Analytics), data governance (Purview), Power BI.
Posted 1 month ago
7.0 - 9.0 years
15 - 18 Lacs
Pune
Work from Office
We are looking for a highly skilled Senior Databricks Developer to join our data engineering team. You will be responsible for building scalable and efficient data pipelines using Databricks, Apache Spark, Delta Lake, and cloud-native services (Azure/AWS/GCP). You will work closely with data architects, data scientists, and business stakeholders to deliver high-performance, production-grade solutions.

Key Responsibilities:
- Design, build, and maintain scalable and efficient data pipelines on Databricks using PySpark, Spark SQL, and optionally Scala.
- Work with Databricks components including Workspace, Jobs, DLT (Delta Live Tables), Repos, and Unity Catalog.
- Implement and optimize Delta Lake solutions aligned with Lakehouse and Medallion architecture best practices.
- Collaborate with data architects, engineers, and business teams to understand requirements and deliver production-grade solutions.
- Integrate CI/CD pipelines using tools such as Azure DevOps, GitHub Actions, or similar for Databricks deployments.
- Ensure data quality, consistency, governance, and security using tools like Unity Catalog or Azure Purview.
- Use orchestration tools such as Apache Airflow, Azure Data Factory, or Databricks Workflows to schedule and monitor pipelines.
- Apply strong SQL skills and data warehousing concepts in data modeling and transformation logic.
- Communicate effectively with technical and non-technical stakeholders to translate business requirements into technical solutions.

Required Skills and Qualifications:
- Hands-on experience in data engineering, specifically in Databricks.
- Deep expertise in Databricks Workspace, Jobs, DLT, Repos, and Unity Catalog.
- Strong programming skills in PySpark and Spark SQL; Scala experience is a plus.
- Proficient in working with one or more cloud platforms: Azure, AWS, or GCP.
- Experience with Delta Lake, Lakehouse architecture, and Medallion architecture patterns.
- Proficient in building CI/CD pipelines for Databricks using DevOps tools.
- Familiarity with orchestration and ETL/ELT tools such as Airflow, ADF, or Databricks Workflows.
- Strong understanding of data governance, metadata management, and lineage tracking.
- Excellent analytical, communication, and stakeholder management skills.
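As a rough illustration of the Medallion pattern this role references, a bronze-to-silver refinement step in PySpark; the paths and columns are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-demo").getOrCreate()

# Bronze: raw events landed as-is from the source system.
bronze = spark.read.format("delta").load("/mnt/lake/bronze/events")

# Silver: validated, de-duplicated, and conformed records.
silver = (
    bronze.filter(F.col("event_id").isNotNull())  # basic validity rule
          .dropDuplicates(["event_id"])           # idempotent re-processing
          .withColumn("event_date", F.to_date("event_ts"))
)

(silver.write.format("delta")
       .mode("overwrite")
       .partitionBy("event_date")                 # common pruning key
       .save("/mnt/lake/silver/events"))
```

A gold layer would typically aggregate this silver table into business-level metrics in the same fashion.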
Posted 1 month ago
4.0 - 8.0 years
7 - 11 Lacs
Hyderabad, Bengaluru
Hybrid
Job Summary
We are seeking a skilled Azure Data Engineer with 4 years of overall experience, including at least 2 years of hands-on experience with Azure Databricks (must). The ideal candidate will have strong expertise in building and maintaining scalable data pipelines and working across cloud-based data platforms.

Key Responsibilities
- Design, develop, and optimize large-scale data pipelines using Azure Data Factory, Azure Databricks, and Azure Synapse.
- Implement data lake solutions and work with structured and unstructured datasets in Azure Data Lake Storage (ADLS).
- Collaborate with data scientists, analysts, and engineering teams to design and deliver end-to-end data solutions.
- Develop ETL/ELT processes and integrate data from multiple sources.
- Monitor, debug, and optimize workflows for performance and cost-efficiency.
- Ensure data governance, quality, and security best practices are maintained.

Must-Have Skills
- 4+ years of total experience in data engineering.
- 2+ years of experience with Azure Databricks (PySpark, Notebooks, Delta Lake).
- Strong experience with Azure Data Factory, Azure SQL, and ADLS.
- Proficient in writing SQL queries and Python/Scala scripting.
- Understanding of CI/CD pipelines and version control systems (e.g., Git).
- Solid grasp of data modeling and warehousing concepts.
Posted 1 month ago
4.0 - 8.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Power BI and AAS expert (Strong SC or Specialist Senior)
- Hands-on experience with data modelling in Azure SQL Data Warehouse and Azure Analysis Services.
- Able to write and test DAX queries.
- Able to generate paginated reports in Power BI.
- Minimum 3 years of working experience delivering projects in Power BI.

Must Have:
- 3 to 8 years of experience designing, developing, and deploying ETL processes on Databricks to support data integration and transformation.
- Ability to optimize and tune Databricks jobs for performance and scalability.
- Experience with Scala and/or Python programming languages.
- Proficiency in SQL for querying and managing data.
- Expertise in ETL (Extract, Transform, Load) processes.
- Knowledge of data modeling and data warehousing concepts.
- Experience implementing best practices for data pipelines, including monitoring, logging, and error handling.
- Excellent problem-solving skills and attention to detail.
- Excellent written and verbal communication skills.
- Strong analytical and problem-solving abilities.
- Experience with version control systems (e.g., Git) to manage and track changes to the codebase.
- Ability to document technical designs, processes, and procedures related to Databricks development.
- Willingness to stay current with Databricks platform updates and recommend improvements to existing processes.

Good to Have:
- Agile delivery experience.
- Experience with cloud services, particularly Azure (Azure Databricks), AWS (AWS Glue, EMR), or Google Cloud Platform (GCP).
- Knowledge of Agile and Scrum software development methodologies.
- Understanding of data lake architectures.
- Familiarity with tools like Apache NiFi, Talend, or Informatica.
- Skills in designing and implementing data models.
Posted 1 month ago
6.0 - 10.0 years
15 - 25 Lacs
Bengaluru
Work from Office
At Holiday Inn Club Vacations, we believe in strengthening families. And we look for people who exhibit the courage, caring, and creativity to help us become the most loved brand in family travel. We're committed to growing our people, memberships, resorts, and guest love. That's why we need individuals who are passionate in life and bring those qualities to work every day. Do you instill confidence, trust, and respect in those around you? Do you encourage success and build relationships? If so, we're looking for you.

POSITION DESCRIPTION:
HICV is seeking to hire a skilled SSIS Developer with ADF experience to serve as a data developer on enterprise data projects related to ETL development, production support, data triage, SQL development, and other related data initiatives.

EDUCATION and/or EXPERIENCE
- Bachelor's degree in Computer Science, Computer Engineering, or a related field required. Equivalent professional experience will be accepted in lieu of a degree.
- Experience with Microsoft data technologies: Azure Data Lake, SQL Server, Azure Data Factory, Power BI, SSIS.
- 6+ years of hands-on experience with Microsoft SQL databases, SSIS, and ADF.
- Proven experience in end-to-end ETL automation processes using SSIS and ADF.
- Proficiency in SQL coding, performance optimization, and query tuning.
- In-depth understanding of ETL strategy, best practices, and SSIS package development.
- Experience with Microsoft SSIS for package creation, deployment, and automation.
- Familiarity with cloud services, particularly Microsoft Azure.
- Proficient in writing and troubleshooting T-SQL statements, stored procedures, and views.

PROFESSIONAL SKILLSET QUALIFICATIONS
- Capacity to work with large amounts of data, extract relevant information, and draw logical conclusions.
- Advanced skills in the design, build, and review of SQL code, T-SQL scripts, stored procedures, and SSIS and ADF pipelines for various reporting and data development projects.
- Exhibits excellent communication skills to bridge the gap between technology and business leaders.
- Proven ability to translate complex data concepts into simpler terms in order to foster collaborative and effective working relationships.
- Uses advanced database knowledge to proactively resolve ETL (SSIS/ADF) defects in order to maintain fully functioning application software.
- Develops and reviews complex Azure Data Factory pipelines for various reporting and data development projects.
- Utilizes senior knowledge of data warehousing and data modeling methodologies, tools, and best practices, and makes recommendations to keep the data management landscape cutting edge.
- Advanced skills in the development, implementation, and maintenance of data warehouses to extract, transform, and load data with security as a critical design requirement.
- Strong verbal and written communication skills to effectively share findings with stakeholders and IT associates.
- Must keep an open mind when approaching problems and be able to assess each situation separately.
Posted 1 month ago
8.0 - 12.0 years
30 - 35 Lacs
Hyderabad
Work from Office
Job Summary:
We are seeking a highly skilled Data Engineer with expertise in leveraging Data Lake architecture and the Azure cloud platform to develop, deploy, and optimise data-driven solutions. You will play a pivotal role in transforming raw data into actionable insights, supporting strategic decision-making across the organisation.

Responsibilities
- Design and implement scalable data science solutions using Azure Data Lake, Azure Databricks, Azure Data Factory, and related Azure services.
- Develop, train, and deploy machine learning models to address business challenges.
- Collaborate with data engineering teams to optimise data pipelines and ensure seamless data integration within Azure cloud infrastructure.
- Conduct exploratory data analysis (EDA) to identify trends, patterns, and insights.
- Build predictive and prescriptive models to support decision-making processes.
- Expertise in developing the end-to-end machine learning lifecycle using CRISP-DM, covering data collection, cleansing, visualization, preprocessing, model development, model validation, and model retraining.
- Proficient in building and implementing RAG systems that enhance the accuracy and relevance of model outputs by integrating retrieval mechanisms with generative models.
- Ensure data security, compliance, and governance within the Azure cloud ecosystem.
- Monitor and optimise model performance and scalability in production environments.
- Prepare clear and concise documentation for developed models and workflows.

Skills Required:
- Good experience using PySpark, Python, MLOps (optional), MLflow (optional), Azure Data Lake Storage, and Unity Catalog.
- Has worked with and utilized data from various RDBMS such as MySQL, SQL Server, and Postgres; NoSQL databases such as MongoDB, Cassandra, and Redis; and graph databases such as Neo4j and Grakn.
- Proven experience as a Data Engineer with a strong focus on the Azure cloud platform and Data Lake architecture.
- Proficiency in Python and PySpark.
- Hands-on experience with Azure services such as Azure Data Lake, Azure Synapse Analytics, Azure Machine Learning, Azure Databricks, and Azure Functions.
- Strong knowledge of SQL and experience querying large datasets from Data Lakes.
- Familiarity with data engineering tools and frameworks for data ingestion and transformation in Azure.
- Experience with version control systems (e.g., Git) and CI/CD pipelines for machine learning projects.
- Excellent problem-solving skills and the ability to work collaboratively in a team environment.
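Since the posting names MLflow (optional) for the model lifecycle, a minimal sketch of experiment tracking under assumed defaults; the dataset, model, and run name are placeholders:

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real training set.
X, y = make_classification(n_samples=500, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="baseline"):
    model = LogisticRegression(max_iter=200).fit(X_train, y_train)
    mlflow.log_param("max_iter", 200)
    mlflow.log_metric("test_accuracy", model.score(X_test, y_test))
    mlflow.sklearn.log_model(model, "model")  # store the fitted artifact
```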
Posted 1 month ago
7.0 - 10.0 years
17 - 27 Lacs
Gurugram
Hybrid
Primary Responsibilities:
- Design and develop applications and services running on Azure, with a strong emphasis on Azure Databricks, ensuring optimal performance, scalability, and security.
- Build and maintain data pipelines using Azure Databricks and other Azure data integration tools.
- Write, read, and debug Spark, Scala, and Python code to process and analyze large datasets.
- Write extensive queries in SQL and Snowflake.
- Implement security and access control measures, and regularly audit the Azure platform and infrastructure to ensure compliance.
- Create, understand, and validate designs and estimated effort for a given module/task, and be able to justify them.
- Possess solid troubleshooting skills and perform troubleshooting of issues in different technologies and environments.
- Implement and adhere to best engineering practices like design, unit testing, functional testing automation, continuous integration, and delivery.
- Maintain code quality by writing clean, maintainable, and testable code.
- Monitor performance and optimize resources to ensure cost-effectiveness and high availability.
- Define and document best practices and strategies regarding application deployment and infrastructure maintenance.
- Provide technical support and consultation for infrastructure questions.
- Help develop, manage, and monitor continuous integration and delivery systems.
- Take accountability and ownership of features and teamwork.
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any directives.

Required Qualifications:
- B.Tech/MCA (minimum 16 years of formal education).
- Overall 7+ years of experience.
- Minimum of 3 years of experience in Azure (ADF), Databricks, and DevOps.
- 5 years of experience writing advanced-level SQL.
- 2-3 years of experience writing, reading, and debugging Spark, Scala, and Python code.
- 3 or more years of experience architecting, designing, developing, and implementing cloud solutions on Azure.
- Proficiency in programming languages and scripting tools.
- Understanding of cloud data storage and database technologies such as SQL and NoSQL.
- Proven ability to collaborate with multidisciplinary teams of business analysts, developers, data scientists, and subject-matter experts.
- Familiarity with DevOps practices and tools, such as continuous integration and continuous deployment (CI/CD) and Terraform.
- Proven proactive approach to spotting problems, areas for improvement, and performance bottlenecks.
- Proven excellent communication, writing, and presentation skills.
- Experience interacting with international customers to gather requirements and convert them into solutions using relevant skills.

Preferred Qualifications:
- Knowledge of AI/ML or LLMs (GenAI).
- Knowledge of the US healthcare domain and experience with healthcare data.
- Experience and skills with Snowflake.
Posted 1 month ago
8.0 - 12.0 years
6 - 14 Lacs
Mumbai, Hyderabad, Pune
Work from Office
Job Description:
- 5+ years in data engineering, with at least 2 years on Azure Synapse.
- Strong SQL, Spark, and Data Lake integration experience.
- Familiarity with Azure Data Factory, Power BI, and DevOps pipelines.
- Experience in AMS or managed services environments is a plus.

Detailed JD
- Design, develop, and maintain data pipelines using Azure Synapse Analytics.
- Collaborate with the customer to ensure SLA adherence and incident resolution.
- Optimize Synapse SQL pools for performance and cost.
- Implement data security, access control, and compliance measures.
- Participate in calibration and transition phases with client stakeholders.
Posted 1 month ago
2.0 - 5.0 years
8 - 18 Lacs
Pune
Work from Office
Scope of Work:
- Collaborate with the lead Business/Data Analyst to gather and analyse business requirements for data processing and reporting solutions.
- Maintain and run existing Python code, ensuring smooth execution and troubleshooting any issues that arise.
- Develop new features and enhancements for data processing, ingestion, transformation, and report building.
- Implement best coding practices to improve code quality, maintainability, and efficiency.
- Work within Microsoft Fabric to manage data integration, warehousing, and analytics, ensuring optimal performance and reliability.
- Support and maintain CI/CD workflows using Git-based deployments or other automated deployment tools, preferably in Fabric.
- Develop complex business rules and logic in Python to meet functional specifications and reporting needs.
- Participate in an agile development environment, providing feedback, iterating on improvements, and supporting continuous integration and delivery processes.

Requirements:
- This person will be an individual contributor responsible for programming, maintenance support, and troubleshooting tasks related to data movement, processing, ingestion, transformation, and report building.
- Advanced-level Python developer.
- Moderate-level experience working in a Microsoft Fabric environment (at least one, and preferably two or more, client projects in Fabric).
- Well-versed in modelling, databases, data warehousing, data integration, and the technical elements of business intelligence technologies.
- Ability to understand business requirements and translate them into functional specifications for reporting applications.
- Experience in Git-based deployments or other CI/CD workflow options, preferably in Fabric.
- Strong verbal and written communication skills.
- Ability to perform in an agile environment where continual development is prioritized.
- Working experience in the financial industry domain and familiarity with financial accounting terms and statements like general ledger, balance sheet, and profit & loss statements would be a plus.
- Ability to create Power BI dashboards, KPI scorecards, and visual reports would be a plus.
- Degree in Computer Science or Information Systems, along with a good understanding of financial terms or working experience in banking/financial institutions, is preferred.
Posted 1 month ago
7.0 - 12.0 years
25 - 30 Lacs
Hyderabad, Bengaluru
Hybrid
Cloud Data Engineer
The Cloud Data Engineer will be responsible for developing the data lake platform and all applications on the Azure cloud. Proficiency in data engineering, data modeling, SQL, and Python programming is essential. The Data Engineer will provide design and development solutions for applications in the cloud.

Essential Job Functions:
- Understand requirements and collaborate with the team to design and deliver projects.
- Design and implement data lakehouse projects within Azure.
- Develop the application lifecycle utilizing Microsoft Azure technologies.
- Participate in design, planning, and necessary documentation.
- Engage in Agile ceremonies including daily standups, scrum, retrospectives, demos, and code reviews.
- Hands-on experience with Python/SQL development and Azure data pipelines.
- Collaborate with the team to develop and deliver cross-functional products.

Key Skills:
a. Data Engineering and SQL
b. Python
c. PySpark
d. Azure Data Lake and ADF
e. Databricks
f. CI/CD
g. Strong communication

Other Responsibilities:
- Document and maintain project artifacts.
- Maintain comprehensive knowledge of industry standards, methodologies, processes, and best practices.
- Complete training as required for Privacy, Code of Conduct, etc.
- Promptly report any known or suspected loss, theft, or unauthorized disclosure or use of PI to the General Counsel/Chief Compliance Officer or Chief Information Officer.
- Adhere to the company's compliance program.
- Safeguard the company's intellectual property, information, and assets.
- Other duties as assigned.

Minimum Qualifications and Job Requirements:
- Bachelor's degree in Computer Science.
- 7 years of hands-on experience designing and developing distributed data pipelines.
- 5 years of hands-on experience with Azure data service technologies.
- 5 years of hands-on experience in Python, SQL, object-oriented programming, ETL, and unit testing.
- Experience with data integration via APIs, web services, and queues.
- Experience with Azure DevOps and CI/CD, as well as agile tools and processes including JIRA and Confluence.
- Required: Azure Data Engineer Associate and Databricks data engineering certifications.
Posted 1 month ago
6.0 - 11.0 years
8 - 12 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
Senior Data Engineer (Remote, Contract, 6 Months) - Databricks, ADF, and PySpark
We are hiring a Senior Data Engineer for a 6-month remote contract position. The ideal candidate is highly skilled in building scalable data pipelines and working within the Azure cloud ecosystem, especially Databricks, ADF, and PySpark. You'll work closely with cross-functional teams to deliver enterprise-level data engineering solutions.

Key Responsibilities
- Build scalable ETL pipelines and implement robust data solutions in Azure.
- Manage and orchestrate workflows using ADF, Databricks, ADLS Gen2, and Key Vaults.
- Design and maintain secure and efficient data lake architecture.
- Work with stakeholders to gather data requirements and translate them into technical specs.
- Implement CI/CD pipelines for seamless data deployment using Azure DevOps.
- Monitor data quality, performance bottlenecks, and scalability issues.
- Write clean, organized, reusable PySpark code in an Agile environment.
- Document pipelines, architectures, and best practices for reuse.

Must-Have Skills
- Experience: 6+ years in Data Engineering.
- Tech Stack: SQL, Python, PySpark, Spark, Azure Databricks, ADF, ADLS Gen2, Azure DevOps, Key Vaults.
- Core Expertise: Data Warehousing, ETL, Data Pipelines, Data Modelling, Data Governance.
- Agile, SDLC, containerization (Docker), clean coding practices.

Good-to-Have Skills
- Event Hubs, Logic Apps
- Power BI
- Strong logic building and competitive programming background

Location: Remote, Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune
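For illustration, a minimal sketch of the ADLS Gen2 plus Key Vault pattern this role describes, as it might look inside a Databricks notebook; the storage account, secret scope, and path are hypothetical:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Fetch the storage key from a Key Vault-backed Databricks secret scope.
# (dbutils is provided by the Databricks notebook runtime; scope and key
# names here are made up.)
storage_account = "mydatalake"
account_key = dbutils.secrets.get(scope="kv-backed-scope", key="adls-account-key")

spark.conf.set(
    f"fs.azure.account.key.{storage_account}.dfs.core.windows.net",
    account_key,
)

# Read a raw dataset directly from ADLS Gen2 over the abfss:// protocol.
df = spark.read.parquet(
    f"abfss://raw@{storage_account}.dfs.core.windows.net/sales/2024/"
)
df.show(5)
```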
Posted 1 month ago
5.0 - 9.0 years
12 - 22 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Job Description
We are looking for Azure Data Engineers with a minimum of 5 to 9 years of experience.

Role & Responsibilities
- Blend technical expertise with analytical problem-solving and collaboration with cross-functional teams.
- Design and implement Azure data engineering solutions (ingestion and curation).
- Create and maintain Azure data solutions including Azure SQL Database, Azure Data Lake, and Azure Blob Storage.
- Design, implement, and maintain data pipelines for data ingestion, processing, and transformation in Azure.
- Create and maintain ETL (Extract, Transform, Load) operations using Azure Data Factory or comparable technologies.
- Use Azure Data Factory and Databricks to assemble large, complex data sets.
- Ensure the quality, integrity, and dependability of data by implementing data validation and cleansing procedures.
- Ensure data quality/security and compliance.
- Optimize Azure SQL databases for efficient query performance.
- Collaborate with data engineers and other stakeholders to understand requirements and translate them into scalable and reliable data platform architectures.
Posted 1 month ago
4.0 - 8.0 years
25 - 27 Lacs
Bengaluru
Hybrid
Job Summary:
We are looking for a highly skilled Azure Data Engineer with experience building and managing scalable data pipelines using Azure Data Factory, Synapse, and Databricks. The ideal candidate should be proficient in big data tools and Azure services, with strong programming knowledge and a solid understanding of data architecture and cloud platforms.

Key Responsibilities:
- Design and deliver robust data pipelines using Azure-native tools.
- Work with Azure services like ADLS, Azure SQL DB, Cosmos DB, and Synapse.
- Develop ETL/ELT solutions and collaborate in cloud-native architecture discussions.
- Support real-time and batch data processing using tools like Kafka, Spark, and Stream Analytics.
- Partner with global teams to develop high-performing, secure, and scalable solutions.

Required Skills:
- 4 to 7 years of experience in data engineering and the Azure platform.
- Expertise in Azure Data Factory, Synapse, Databricks, Stream Analytics, and Power BI.
- Hands-on with Python, Scala, SQL, C#, and Java, and big data tools like Spark, Hive, Kafka, and Event Hubs.
- Experience with distributed systems, data governance, and large-scale data environments.

Apply now to join a cutting-edge data engineering team enabling innovation through Azure cloud solutions.
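As a rough sketch of the real-time processing this posting mentions, a Spark Structured Streaming job reading from Kafka and appending to a Delta table; the broker, topic, and paths are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-stream").getOrCreate()

# Consume JSON events from a Kafka topic as a streaming DataFrame.
events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "clickstream")
         .load()
         .selectExpr("CAST(value AS STRING) AS json")
         .select(F.get_json_object("json", "$.user_id").alias("user_id"),
                 F.get_json_object("json", "$.event_type").alias("event_type"))
)

# Append continuously to Delta, checkpointing for fault-tolerant restarts.
query = (
    events.writeStream.format("delta")
          .option("checkpointLocation", "/mnt/chk/clickstream")
          .outputMode("append")
          .start("/mnt/lake/bronze/clickstream")
)
query.awaitTermination()
```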
Posted 1 month ago
4.0 - 9.0 years
8 - 12 Lacs
Pune
Work from Office
Amdocs helps those who build the future to make it amazing. With our market-leading portfolio of software products and services, we unlock our customers' innovative potential, empowering them to provide next-generation communication and media experiences for both the individual end user and enterprise customers. Our employees around the globe are here to accelerate service providers' migration to the cloud, enable them to differentiate in the 5G era, and digitalize and automate their operations. Listed on the NASDAQ Global Select Market, Amdocs had revenue of $5.00 billion in fiscal 2024. For more information, visit www.amdocs.com.

In one sentence
We are seeking a Data Engineer with advanced expertise in Databricks SQL, PySpark, Spark SQL, and workflow orchestration using Airflow. The successful candidate will lead critical projects, including migrating SQL Server stored procedures to Databricks Notebooks, designing incremental data pipelines, and orchestrating workflows in Azure Databricks.

What will your job look like
- Migrate SQL Server stored procedures to Databricks Notebooks, leveraging PySpark and Spark SQL for complex transformations.
- Design, build, and maintain incremental data load pipelines to handle dynamic updates from various sources, ensuring scalability and efficiency.
- Develop robust data ingestion pipelines to load data into the Databricks Bronze layer from relational databases, APIs, and file systems.
- Implement incremental data transformation workflows to update Silver and Gold layer datasets in near real-time, adhering to Delta Lake best practices.
- Integrate Airflow with Databricks to orchestrate end-to-end workflows, including dependency management, error handling, and scheduling.
- Understand business and technical requirements, translating them into scalable Databricks solutions.
- Optimize Spark jobs and queries for performance, scalability, and cost-efficiency in a distributed environment.
- Implement robust data quality checks, monitoring solutions, and governance frameworks within Databricks.
- Collaborate with team members on Databricks best practices, reusable solutions, and incremental loading strategies.

All you need is...
- Bachelor's degree in Computer Science, Information Systems, or a related discipline.
- 4+ years of hands-on experience with Databricks, including expertise in Databricks SQL, PySpark, and Spark SQL.
- Proven experience in incremental data loading techniques into Databricks, leveraging Delta Lake's features (e.g., time travel, MERGE INTO).
- Strong understanding of data warehousing concepts, including data partitioning and indexing for efficient querying.
- Proficiency in T-SQL and experience migrating SQL Server stored procedures to Databricks.
- Solid knowledge of Azure cloud services, particularly Azure Databricks and Azure Data Lake Storage.
- Expertise in Airflow integration for workflow orchestration, including designing and managing DAGs.
- Familiarity with version control systems (e.g., Git) and CI/CD pipelines for data engineering workflows.
- Excellent analytical and problem-solving skills with a focus on detail-oriented development.

Preferred Qualifications
- Advanced knowledge of Delta Lake optimizations, such as compaction, Z-ordering, and vacuuming.
- Experience with real-time streaming data pipelines using tools like Kafka or Azure Event Hubs.
- Familiarity with advanced Airflow features, such as SLA monitoring and external task dependencies.
- Certifications such as Databricks Certified Associate Developer for Apache Spark or equivalent.
- Experience with Agile development methodologies.

Why you will love this job:
- You will be able to use your specific insights to lead business change on a large scale and drive transformation within our organization.
- You will be a key member of a global, dynamic, and highly collaborative team with various possibilities for personal and professional development.
- You will have the opportunity to work in a multinational environment for the global market leader in its field!
- We offer a wide range of stellar benefits including health, dental, vision, and life insurance, as well as paid time off, sick time, and parental leave!
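For illustration, a minimal sketch of the incremental MERGE INTO pattern this posting names, using the Delta Lake Python API; the table names and join key are hypothetical:

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("incremental-merge").getOrCreate()

# New or changed rows arriving from the source since the last load.
updates = spark.read.format("delta").load("/mnt/lake/bronze/customers_changes")

target = DeltaTable.forName(spark, "silver.customers")

# Upsert: update matched keys, insert the rest -- idempotent on re-runs.
(target.alias("t")
       .merge(updates.alias("s"), "t.customer_id = s.customer_id")
       .whenMatchedUpdateAll()
       .whenNotMatchedInsertAll()
       .execute())
```

An Airflow DAG would typically wrap this step as a Databricks job task, with scheduling and retry handled by the orchestrator.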
Posted 1 month ago
8.0 - 12.0 years
25 - 32 Lacs
Indore, Hyderabad, Ahmedabad
Work from Office
Position: Data Engineering Lead
Experience: 8 to 12 years
Job Location: Hyderabad, Ahmedabad, Indore, India
Must be able to join within 30 days

Job Summary:
As a Data Engineering Lead, your role will involve designing, developing, and implementing interactive dashboards and reports using data engineering tools. You will work closely with stakeholders to gather requirements and translate them into effective data visualizations that provide valuable insights. Additionally, you will be responsible for extracting, transforming, and loading data from multiple sources into Power BI, ensuring its accuracy and integrity. Your expertise in Power BI and data analytics will contribute to informed decision-making and support the organization in driving data-centric strategies and initiatives.

We are looking for you!
As an ideal candidate for the Data Engineering Lead position, you embody the qualities of a team player with a relentless get-it-done attitude. Your intellectual curiosity and customer focus drive you to continuously seek new ways to add value to your job accomplishments. You thrive under pressure, maintaining a positive attitude and understanding that your career is a journey. You are willing to make the right choices to support your growth. In addition to your excellent communication skills, both written and verbal, you have a proven ability to create visually compelling designs using tools like Power BI and Tableau that effectively communicate our core values. You build high-performing, scalable, enterprise-grade applications and teams. Your creativity and proactive nature enable you to think differently, find innovative solutions, deliver high-quality outputs, and ensure customers remain referenceable.

With over eight years of experience in data engineering, you possess a strong sense of self-motivation and take ownership of your responsibilities. You prefer to work independently with little to no supervision. You are process-oriented, adopt a methodical approach, and demonstrate a quality-first mindset. You have led mid-to-large-size teams and accounts, consistently using constructive feedback mechanisms to improve productivity, accountability, and performance within the team. Your track record showcases your results-driven approach, as you have consistently delivered successful projects with customer case studies published on public platforms. Overall, you possess a unique combination of skills, qualities, and experiences that make you an ideal fit to lead our data engineering team(s). You value inclusivity and want to join a culture that empowers you to show up as your authentic self. You know that success hinges on commitment, that our differences make us stronger, and that the finish line is always sweeter when the whole team crosses together.

In your role, you should be driving the team using data, data, and more data. You will manage multiple teams, oversee agile stories and their statuses, handle escalations and mitigations, plan ahead, identify hiring needs, collaborate with recruitment teams for hiring, enable sales with pre-sales teams, and work closely with development managers/leads on solutioning and delivery statuses, as well as with architects on technology research and solutions.

What You Will Do:
- Analyze business requirements.
- Analyze the data model and perform gap analysis between business requirements and Power BI.
- Design and model the Power BI schema.
- Transform data in Power BI/SQL/ETL tools.
- Create DAX formulas, reports, and dashboards.
- Write SQL queries and stored procedures.
- Design effective Power BI solutions based on business requirements.
- Manage a team of Power BI developers and guide their work.
- Integrate data from various sources into Power BI for analysis.
- Optimize the performance of reports and dashboards for smooth usage.
- Collaborate with stakeholders to align Power BI projects with goals.
- Knowledge of Data Warehousing (must); Data Engineering is a plus.
Posted 1 month ago
8.0 - 10.0 years
10 - 20 Lacs
Kolkata, Hyderabad, Pune
Work from Office
Must have: Azure Data Factory (mandatory), Azure Databricks, PySpark, Python, and advanced SQL in the Azure ecosystem.
1) Advanced SQL skills
2) Data analysis
3) Data models
4) Python (desired)
5) Automation
Experience required: 8 to 10 years.
Posted 1 month ago
6.0 - 10.0 years
10 - 20 Lacs
Chennai, Bengaluru
Hybrid
Hi,
Work Location: Chennai and Bangalore
Notice Period: Immediate to 30 days
Primary: Azure Databricks, ADF, PySpark, SQL

Sharing the JD for your reference:
- Overall 6-12 years of IT experience, preferably in cloud.
- Minimum 4 years in Azure Databricks on development projects.
- Should be 100% hands-on in PySpark coding.
- Should have strong SQL expertise in writing advanced/complex SQL queries.
- DWH experience is a must for this role.
- Experience programming in Python is an advantage.
- Experience in data ingestion, preparation, integration, and operationalization techniques to optimally address data requirements.
- Should be able to understand system architecture involving Data Lakes, Data Warehouses, and Data Marts.
- Experience owning end-to-end development, including coding, testing, debugging, and deployment.
- Excellent communication is required for this role.

Kindly share the following details:
- Updated CV
- Relevant Skills
- Total Experience
- Current Company
- Current CTC
- Expected CTC
- Notice Period
- Current Location
- Preferred Location
Posted 1 month ago
5.0 - 8.0 years
22 - 30 Lacs
Noida, Hyderabad, Bengaluru
Hybrid
Role: Data Engineer
Experience: 5 to 8 years
Location: Bangalore, Noida, and Hyderabad (hybrid; 2 days per week in office is a must)
Notice Period: Immediate to 15 days (only immediate joiners preferred)
Note: Candidates must have experience in Python, Kafka Streams, PySpark, and Azure Databricks. We are not looking for candidates who have experience only in PySpark and not in Python.

Job Title: SSE - Kafka, Python, and Azure Databricks (Healthcare Data Project)
Experience: 5 to 8 years

Role Overview:
We are looking for a highly skilled engineer with expertise in Kafka, Python, and Azure Databricks (preferred) to drive our healthcare data engineering projects. The ideal candidate will have deep experience in real-time data streaming, cloud-based data platforms, and large-scale data processing. This role requires strong technical leadership, problem-solving abilities, and the ability to collaborate with cross-functional teams.

Key Responsibilities:
- Lead the design, development, and implementation of real-time data pipelines using Kafka, Python, and Azure Databricks.
- Architect scalable data streaming and processing solutions to support healthcare data workflows.
- Develop, optimize, and maintain ETL/ELT pipelines for structured and unstructured healthcare data.
- Ensure data integrity, security, and compliance with healthcare regulations (HIPAA, HITRUST, etc.).
- Collaborate with data engineers, analysts, and business stakeholders to understand requirements and translate them into technical solutions.
- Troubleshoot and optimize Kafka streaming applications, Python scripts, and Databricks workflows.
- Mentor junior engineers, conduct code reviews, and ensure best practices in data engineering.
- Stay updated with the latest cloud technologies, big data frameworks, and industry trends.

Required Skills & Qualifications:
- 4+ years of experience in data engineering, with strong proficiency in Kafka and Python.
- Expertise in Kafka Streams, Kafka Connect, and Schema Registry for real-time data processing.
- Experience with Azure Databricks (or willingness to learn and adopt it quickly).
- Hands-on experience with cloud platforms (Azure preferred; AWS or GCP is a plus).
- Proficiency in SQL, NoSQL databases, and data modeling for big data processing.
- Knowledge of containerization (Docker, Kubernetes) and CI/CD pipelines for data applications.
- Experience working with healthcare data (EHR, claims, HL7, FHIR, etc.) is a plus.
- Strong analytical skills, a problem-solving mindset, and the ability to lead complex data projects.
- Excellent communication and stakeholder management skills.

Email: Sam@hiresquad.in
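For illustration, a minimal sketch of the kind of Python Kafka consumption this role involves, using the kafka-python client; the broker, topic, and record fields are hypothetical:

```python
import json

from kafka import KafkaConsumer  # pip install kafka-python

# Subscribe to a (hypothetical) topic of claims-derived events.
consumer = KafkaConsumer(
    "claims-events",
    bootstrap_servers=["broker:9092"],
    group_id="claims-pipeline",
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    claim = message.value
    # Downstream this record would be validated and landed in the lake;
    # here we just skip records missing their key field.
    if claim.get("claim_id"):
        print(f"offset={message.offset} claim_id={claim['claim_id']}")
```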
Posted 1 month ago
10.0 - 14.0 years
18 - 30 Lacs
Noida, Chennai
Hybrid
Core Responsibilities:
- Platform Management: Oversee the deployment, configuration, and maintenance of Databricks clusters and workspaces.
- Security & Access Control: Implement role-based access controls (RBAC), manage Unity Catalog, and configure service principals and access tokens.
- Performance Monitoring: Monitor cluster health, optimize resource utilization, and troubleshoot performance issues.
- Automation & Scripting: Automate administrative tasks using tools like Python, PowerShell, and Terraform.
- Integration & Deployment: Manage integrations with Azure Data Lake and Key Vault, and implement CI/CD pipelines using Azure DevOps.
- Compliance & Governance: Ensure data governance, implement backup and disaster recovery strategies, and adhere to security best practices.
- User Support & Training: Provide technical support, conduct training sessions, and maintain documentation.

Required Skills & Experience
- Cloud Platforms: Proficiency in Azure, AWS, or GCP; Azure experience is often preferred.
- Programming Languages: Strong skills in Python, PySpark, PowerShell, and SQL.
- Infrastructure as Code: Experience with Terraform for provisioning and managing cloud resources.
- Data Engineering: Knowledge of ETL processes, data pipelines, and big data technologies.
- Security & Compliance: Understanding of data security principles, compliance requirements, and best practices.
- Experience: Typically 5-10 years in IT administration, with at least 2-5 years focused on Databricks administration.
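As a rough sketch of the automation side of this role, listing clusters and their state with the Databricks SDK for Python; authentication is assumed to come from the environment:

```python
from databricks.sdk import WorkspaceClient  # pip install databricks-sdk

# Credentials are picked up from DATABRICKS_HOST / DATABRICKS_TOKEN
# environment variables (or a configured profile).
w = WorkspaceClient()

# Report each cluster's name, state, and node type -- a typical
# inventory/health sweep an administrator might automate.
for cluster in w.clusters.list():
    print(f"{cluster.cluster_name}: state={cluster.state}, "
          f"node_type={cluster.node_type_id}")
```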
Posted 1 month ago
5.0 - 8.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Role Purpose
The purpose of the role is to support process delivery by ensuring daily performance of the Production Specialists, resolving technical escalations, and developing technical capability within the Production Specialists.

Do
- Oversee and support the process by reviewing daily transactions on performance parameters.
- Review the performance dashboard and the scores for the team.
- Support the team in improving performance parameters by providing technical support and process guidance.
- Record, track, and document all queries received, problem-solving steps taken, and total successful and unsuccessful resolutions.
- Ensure standard processes and procedures are followed to resolve all client queries.
- Resolve client queries as per the SLAs defined in the contract.
- Develop an understanding of the process/product for the team members to facilitate better client interaction and troubleshooting.
- Document and analyze call logs to spot the most frequently occurring trends to prevent future problems.
- Identify red flags and escalate serious client issues to the team leader in cases of untimely resolution.
- Ensure all product information and disclosures are given to clients before and after the call/email requests.
- Avoid legal challenges by monitoring compliance with service agreements.

Handle technical escalations through effective diagnosis and troubleshooting of client queries
- Manage and resolve technical roadblocks/escalations as per SLA and quality requirements.
- If unable to resolve issues, escalate them to TA & SES in a timely manner.
- Provide product support and resolution to clients by performing question diagnosis while guiding users through step-by-step solutions.
- Troubleshoot all client queries in a user-friendly, courteous, and professional manner.
- Offer alternative solutions to clients (where appropriate) with the objective of retaining customers' and clients' business.
- Organize ideas and effectively communicate oral messages appropriate to listeners and situations.
- Follow up and make scheduled callbacks to customers to record feedback and ensure compliance with contract SLAs.

Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client
- Mentor and guide Production Specialists on improving technical knowledge.
- Collate trainings to be conducted as triage to bridge the skill gaps identified through interviews with the Production Specialists.
- Develop and conduct trainings (triages) within products for Production Specialists as per target.
- Inform the client about the triages being conducted.
- Undertake product trainings to stay current with product features, changes, and updates.
- Enroll in product-specific and any other trainings per client requirements/recommendations.
- Identify and document the most common problems and recommend appropriate resolutions to the team.
- Update job knowledge by participating in self-learning opportunities and maintaining personal networks.

Deliver (performance parameters and measures):
1. Process: No. of cases resolved per day, compliance to process and quality standards, meeting process-level SLAs, Pulse score, customer feedback, NSAT/ESAT.
2. Team Management: Productivity, efficiency, absenteeism.
3. Capability Development: Triages completed, Technical Test performance.

Mandatory Skills: Azure Data Factory.
Experience: 5-8 Years.
Posted 1 month ago
5.0 - 8.0 years
14 - 24 Lacs
Hyderabad
Hybrid
We are looking for an experienced Azure Data Engineer with strong expertise in Azure Databricks to join our data engineering team.

Mandatory skill: Azure Databricks
Experience: 5 to 8 years
Location: Hyderabad

Key Responsibilities:
- Design and build data pipelines and ETL/ELT workflows using Azure Databricks and Azure Data Factory.
- Ingest, clean, transform, and process large datasets from diverse sources (structured and unstructured).
- Implement Delta Lake solutions and optimize Spark jobs for performance and reliability.
- Integrate Azure Databricks with other Azure services including Data Lake Storage, Synapse Analytics, and Event Hubs.

Interested candidates, share your CV at himani.girnar@alikethoughts.com with the details below:
- Candidate's name
- Email and alternate email ID
- Contact and alternate contact no.
- Total exp
- Relevant experience
- Current org
- Notice period
- CCTC
- ECTC
- Current location
- Preferred location
- PAN card no.
Posted 1 month ago