5.0 - 9.0 years
7 - 17 Lacs
Pune
Work from Office
Job Overview: Diacto is seeking an experienced and highly skilled Data Architect to lead the design and development of scalable and efficient data solutions. The ideal candidate will have strong expertise in Azure Databricks, Snowflake (with DBT, GitHub, Airflow), and Google BigQuery. This is a full-time, on-site role based out of our Baner, Pune office.

Qualifications:
- B.E./B.Tech in Computer Science, IT, or a related discipline
- MCS/MCA or equivalent preferred

Key Responsibilities:
- Design, build, and optimize robust data architecture frameworks for large-scale enterprise solutions
- Architect and manage cloud-based data platforms using Azure Databricks, Snowflake, and BigQuery
- Define and implement best practices for data modeling, integration, governance, and security
- Collaborate with engineering and analytics teams to ensure data solutions meet business needs
- Lead development using tools such as DBT, Airflow, and GitHub for orchestration and version control
- Troubleshoot data issues and ensure system performance, reliability, and scalability
- Guide and mentor junior data engineers and developers

Experience and Skills Required:
- 5 to 12 years of experience in data architecture, engineering, or analytics roles
- Hands-on expertise in Databricks, especially Azure Databricks
- Proficiency in Snowflake, with working knowledge of DBT, Airflow, and GitHub
- Experience with Google BigQuery and cloud-native data processing workflows
- Strong knowledge of modern data architecture, data lakes, warehousing, and ETL pipelines
- Excellent problem-solving, communication, and analytical skills

Nice to Have:
- Certifications in Azure, Snowflake, or GCP
- Experience with containerization (Docker/Kubernetes)
- Exposure to real-time data streaming and event-driven architecture

Why Join Diacto Technologies?
- Collaborate with experienced data professionals on high-impact projects
- Exposure to a variety of industries and enterprise data ecosystems
- Competitive compensation, learning opportunities, and an innovation-driven culture
- Work from our collaborative office space in Baner, Pune

How to Apply:
Option 1 (Preferred): Copy and paste the following link into your browser and submit your application for the automated interview process: https://app.candidhr.ai/app/candidate/gAAAAABoRrTQoMsfqaoNwTxsE_qwWYcpcRyYJk7NzSUmO3LKb6rM-8FcU58CUPYQKc65n66feHor-TGdCEfyouj0NmKdgYcNbA==/
Option 2:
1. Visit the careers section of our website at https://www.diacto.com/careers/
2. Scroll down to the "Who are we looking for?" section
3. Find the listing for "Data Architect (Data Bricks)"
4. Proceed with the virtual interview by clicking "Apply Now."
Posted 1 month ago
9.0 - 11.0 years
12 - 15 Lacs
Bengaluru
Hybrid
Hands-on Data Engineer with strong Databricks expertise in Git/DevOps integration, Unity Catalog governance, and performance tuning of data transformation workloads. Skilled in optimizing pipelines and ensuring secure, efficient data operations.
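For illustration, a minimal sketch of the kind of performance tuning this role describes: compacting and Z-ordering a Delta table on Databricks. The table and column names below are invented, and OPTIMIZE/ZORDER is Databricks-specific Delta SQL, so treat this as a hedged sketch rather than a prescribed workflow.

```python
# Hypothetical example: typical Delta table tuning steps on Databricks.
# Table name (sales.events) and Z-order column (event_date) are assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # on Databricks, `spark` is predefined

# Compact small files and co-locate rows on a frequently filtered column
spark.sql("OPTIMIZE sales.events ZORDER BY (event_date)")

# Inspect the physical plan of a filtered read to confirm file pruning
spark.sql("SELECT * FROM sales.events WHERE event_date = '2024-01-01'").explain()
```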
Posted 1 month ago
12.0 - 17.0 years
35 - 40 Lacs
Hyderabad
Work from Office
Overview
Deputy Director - Data Engineering
PepsiCo operates in an environment undergoing immense and rapid change. Big data and digital technologies are driving business transformations that unlock new capabilities and business innovations in areas like eCommerce, mobile experiences, and IoT. The key to winning in these areas is the ability to leverage enterprise data foundations, built on PepsiCo's global business scale, to enable business insights, advanced analytics, and new product development. PepsiCo's Data Management and Operations team is responsible for developing quality data collection processes, maintaining the integrity of our data foundations, and enabling business leaders and data scientists across the company to have rapid access to the data they need for decision-making and innovation, while increasing awareness of available data and democratizing access to it across the company.

As a data engineering lead, you will be the key technical expert overseeing PepsiCo's data product build and operations, and you will drive a strong vision for how data engineering can proactively create a positive impact on the business. You'll be empowered to create and lead a strong team of data engineers who build data pipelines into various source systems, rest data on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. As a member of the data engineering team, you will help lead the development of very large and complex data applications in public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products in areas like revenue management, supply chain, manufacturing, and logistics. You will work closely with process owners, product owners, and business users, in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems.

Responsibilities:
- Serve as data engineering lead for D&Ai data modernization (MDIP)
- Be flexible to work an alternative schedule: a traditional Monday-to-Friday work week, Tuesday to Saturday, or Sunday to Thursday, depending on the coverage requirements of the job; the schedule can be changed on a rotational basis with the immediate supervisor, depending on product and project requirements
- Manage a team of data engineers and data analysts by delegating project responsibilities and managing their flow of work, empowering them to realize their full potential
- Design, structure, and store data in unified data models, linked together to make the data reusable for downstream products
- Manage and scale data pipelines from internal and external data sources to support new product launches and drive data quality across data products
- Create reusable accelerators and solutions to migrate data from legacy data warehouse platforms such as Teradata to Azure Databricks and Azure SQL (see the sketch after this section)
- Enable and accelerate standards-based development, prioritizing code reuse and adopting test-driven development, unit testing, and test automation with end-to-end observability of data
- Build and own the automation and monitoring frameworks that capture metrics and operational KPIs for data pipeline quality, performance, and cost
- Collaborate with internal clients (product teams, sector leads, data science teams) and external partners (SI partners/data providers) to drive solutioning and clarify solution requirements
- Evolve the architectural capabilities and maturity of the data platform by engaging with enterprise architects to build and support the right domain architecture for each application, following well-architected design standards
- Define and manage SLAs for data products and processes running in production
- Create documentation for learnings and knowledge transfer to internal associates

Qualifications:
- 12+ years of overall technology experience, including at least 5+ years of hands-on software development, data engineering, and systems architecture
- 8+ years of experience with data lakehouses, data warehousing, and data analytics tools
- 6+ years of experience in SQL optimization and performance tuning on MS SQL Server, Azure SQL, or another popular RDBMS
- 6+ years of experience in Python/PySpark/Scala programming on big data platforms like Databricks
- 4+ years of cloud data engineering experience in Azure or AWS; fluent with Azure cloud services (Azure Data Engineering certification is a plus)
- Experience integrating multi-cloud services with on-premises technologies
- Experience with data modeling, data warehousing, and building high-volume ETL/ELT pipelines
- Experience with data profiling and data quality tools like Great Expectations
- Experience building and operating highly available, distributed systems for extracting, ingesting, and processing large data sets
- Experience with at least one business intelligence tool such as Power BI or Tableau
- Experience running and scaling applications on cloud infrastructure and containerized services like Kubernetes
- Experience with version control systems like ADO and GitHub, and CI/CD tools for DevOps automation and deployments
- Experience with Azure Data Factory, Azure Databricks, and Azure Machine Learning tools
- Experience with statistical/ML techniques is a plus
- Experience building solutions in the retail or supply chain space is a plus
- Understanding of metadata management, data lineage, and data glossaries is a plus
- BA/BS in Computer Science, Math, Physics, or another technical field
- Flexibility to work an alternative work schedule (traditional Monday-to-Friday, Tuesday to Saturday, or Sunday to Thursday) depending on product and project coverage requirements
- Expected to be in the office at the assigned location at least 3 days a week, with in-office days coordinated with the immediate supervisor

Skills, Abilities, Knowledge:
- Excellent communication skills, both verbal and written, along with the ability to influence and demonstrate confidence in communications with senior-level management
- Proven track record of leading and mentoring data teams
- Strong change manager, comfortable with change, especially that which arises through company growth
- Ability to understand and translate business requirements into data and technical requirements
- High degree of organization and ability to manage multiple, competing projects and priorities simultaneously
- Positive and flexible attitude, adjusting to different needs in an ever-changing environment
- Strong leadership, organizational, and interpersonal skills; comfortable managing trade-offs
- Fosters a team culture of accountability, communication, and self-management
- Proactively drives impact and engagement while bringing others along
- Consistently attains or exceeds individual and team goals
- Ability to lead others without direct authority in a matrixed environment
- Comfortable working in a hybrid environment with teams of contractors as well as FTEs spread across multiple PepsiCo locations
- Domain knowledge in the CPG industry with a supply chain/GTM background is preferred
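As a hedged illustration of the migration-accelerator pattern mentioned in the responsibilities (not PepsiCo's actual tooling), one common approach lifts a legacy warehouse table over JDBC and lands it as a Delta table. The connection URL, table names, and target layer below are placeholders.

```python
# Sketch only: copy a legacy warehouse table into Delta on Databricks.
# Requires the vendor JDBC driver on the cluster; all names are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

legacy_df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:teradata://legacy-host/DATABASE=sales")  # placeholder URL
    .option("dbtable", "sales.orders")
    .option("user", "etl_user")
    .option("password", "<from-a-secret-scope>")  # never hard-code credentials
    .load()
)

# Land the raw copy in a bronze layer for downstream modeling
legacy_df.write.format("delta").mode("overwrite").saveAsTable("bronze.orders")
```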
Posted 1 month ago
5.0 - 10.0 years
5 - 15 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Role & responsibilities Strong understanding of Azure environment (PaaS, IaaS) and experience in working with Hybrid model At least 1 project experience in Azure Data Stack that involves components like Azure Data Lake, Azure Synapse Analytics, Azure Data Factory, Azure Data Bricks, Azure Analysis Service, Azure SQL DWH Strong hands-on SQL/T-SQL/Spark SQL and database concepts Strong experience in Azure Blob and ADLSGEN2 Strong Knowledge of Azure Key Vault, Managed Identity RBAC Strong experience and understanding of DAX tabular models Experience in Performance Tuning, Security, Sizing and deployment Automation of SQL/Spark Good to have Knowledge in Advanced analytics tools like Azure Machine Learning, Event Hubs and Azure Stream Analytics Good Knowledge on Data Visualization tools Power BI Able to do Code reviews as per organization's Best Practices. Exposure/Knowledge of No-SQL databases. Good hands on experience in Azure Dev ops tools. Should have experience in Multi-site project model, client communication skills String working experience in ingesting data from various data sources and data types Good knowledge in Azure DevOps, understanding of build and release pipelines Good knowledge in push/pull request in Azure Repo/Git repositories Good knowledge in code review and coding standards Good knowledge in unit and functional testing Expert knowledge using advanced calculations using MS Power BI Desktop (Aggregate, Date, Logical, String, Table) Good at creating different visualizations using Slicers, Lines, Pies, Histograms, Maps, Scatter, Bullets, Heat Maps, Tree maps, etc. Exceptional interpersonal and communications (verbal and written) skills Strong communication skills Ability to manage mid-sized teams and customer interaction
Posted 1 month ago
5.0 - 10.0 years
6 - 15 Lacs
Bengaluru
Work from Office
Urgent hiring: Azure Data Engineer with a leading management consulting company, Bangalore location.
- Strong expertise in Databricks and PySpark for both batch processing and live (streaming) data sources
- 4+ years of relevant experience in Databricks and PySpark/Scala
- 7+ years of total experience
- Good at data modelling and design
- Has worked on real data challenges and handled high volume, velocity, and variety of data
- Excellent analytical and problem-solving skills; willingness to take ownership and resolve technical challenges
- Contributes to community-building initiatives like CoE and CoP
CTC: hike considered on current/last drawn pay.
Apply: rohita.robert@adecco.com
Mandatory skills:
- Azure - Master
- ELT - Skill
- Data Modeling - Skill
- Data Integration & Ingestion - Skill
- Data Manipulation and Processing - Skill
- GitHub, GitHub Actions, Azure DevOps - Skill
- Data Factory, Databricks, SQL DB, Synapse, Stream Analytics, Glue, Airflow, Kinesis, Redshift, SonarQube, PyTest - Skill
Posted 1 month ago
4.0 - 8.0 years
5 - 15 Lacs
Pune
Work from Office
About Tredence: Tredence is a global data science solutions provider founded in 2013 by Shub Bhowmick, Sumit Mehra, and Shashank Dubey, focused on solving the last-mile problem in AI. Headquartered in San Jose, California, the company embraces a vertical-first approach and an outcome-driven mindset to help clients win and accelerate value realization from their analytics investments. The aim is to bridge the gap between insight delivery and value realization by providing customers with a differentiated approach to data and analytics through tailor-made solutions. Tredence is 1,800-plus employees strong, with offices in San Jose, Foster City, Chicago, London, Toronto, and Bangalore, and counts the largest companies in retail, CPG, hi-tech, telecom, healthcare, travel, and industrials as clients. As we complete 10 years of Tredence this year, we are on the cusp of an ambitious and exciting phase of expansion and growth. Tredence recently closed a USD 175 million Series B, which will help us build on growth momentum, strengthen vertical capabilities, and reach a broader customer base. Apart from our geographic footprint in the US, Canada, and the UK, we plan to open offices in Kolkata and a few tier-2 cities in India. In 2023, we also plan to hire more than 1,000 employees across markets. Tredence is a Great Place to Work (GPTW) certified company that values its employees and creates a positive work culture by providing opportunities for professional development and promoting work-life balance. At Tredence, nothing is impossible; we believe in pushing ourselves to limitless possibilities and staying true to our tagline.

This position requires someone with good problem solving, business understanding, and client presence. Overall professional experience should be at least 5 years, with a maximum of up to 15 years. The candidate must understand the usage of data engineering tools for solving business problems and help clients in their data journey, and must have knowledge of emerging technologies used for data management, including data governance, data quality, security, data integration, processing, and provisioning. The candidate must possess the soft skills to work with and lead medium to large teams, and should be comfortable taking leadership roles in client projects, pre-sales/consulting, solutioning, business development conversations, and execution of data engineering projects.

Role Description:
- Develop modern data warehouse solutions using Databricks and the Azure stack
- Provide forward-thinking solutions in the data engineering and analytics space
- Collaborate with DW/BI leads to understand new ETL pipeline development requirements
- Triage issues to find gaps in existing pipelines and fix them
- Work with the business to understand reporting-layer needs and develop data models to fulfill them
- Drive technical discussions with client architects and team members
- Orchestrate the data pipelines via the Airflow scheduler (see the sketch after this list)

Skills and Qualifications:
- Bachelor's and/or master's degree in computer science or equivalent experience
- Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture
- Hands-on experience in SQL, Python, and Spark (PySpark)
- Experience in building ETL/data warehouse transformation processes
- Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, and troubleshooting
- Databricks Certified Data Engineer Associate/Professional certification (desirable)
- Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects
- Experience working in Agile methodology
- Strong verbal and written communication skills
- Strong analytical and problem-solving skills with high attention to detail

Mandatory skills: Azure Databricks, PySpark, Azure Data Factory, Azure Data Lake.
Job location: Bangalore, Chennai, Gurgaon, Pune, Kolkata
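As an illustrative sketch (not Tredence's codebase) of the Airflow orchestration named in the role description, a DAG can trigger a Databricks notebook run via the apache-airflow-providers-databricks package; the DAG id, schedule, cluster spec, and notebook path are assumptions.

```python
# Hedged sketch: nightly Databricks notebook run orchestrated from Airflow.
# Assumes apache-airflow-providers-databricks is installed and a
# `databricks_default` connection is configured; all names are illustrative.
from datetime import datetime

from airflow import DAG
from airflow.providers.databricks.operators.databricks import DatabricksSubmitRunOperator

with DAG(
    dag_id="nightly_dw_load",
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 2 * * *",  # 02:00 daily
    catchup=False,
) as dag:
    run_etl = DatabricksSubmitRunOperator(
        task_id="run_etl_notebook",
        databricks_conn_id="databricks_default",
        new_cluster={
            "spark_version": "13.3.x-scala2.12",
            "node_type_id": "Standard_DS3_v2",
            "num_workers": 2,
        },
        notebook_task={"notebook_path": "/ETL/load_warehouse"},
    )
```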
Posted 1 month ago
5.0 - 10.0 years
15 - 30 Lacs
Pune
Work from Office
Role & responsibilities We are seeking a skilled Data Engineer with advanced expertise in Python, PySpark, Databricks, and Machine Learning, along with a working knowledge of Generative and Agentic AI. This role is critical in ensuring data integrity and driving innovation across enterprise systems. You will design and implement ML-driven solutions to enhance Data Governance & Data Privacy initiatives through automation, self-service capabilities, and scalable, AI-enabled innovation. Key Responsibilities: Implement ML and Generative/Agentic AI solutions to optimize Data Governance processes. Design, develop, and maintain scalable data pipelines using Python, PySpark, and Databricks. Develop automation frameworks to support data quality, lineage, classification, and access control. Develop and deploy machine learning models to uncover data patterns, detect anomalies, and enhance data governance and privacy compliance Collaborate with data stewards, analysts, and governance teams to build self-service data capabilities. Work with Databricks, Azure Data Lake, AWS, and other cloud-based data platforms for data engineering. Build, configure, and integrate APIs for seamless system interoperability. Ensure data integrity, consistency, and compliance across systems and workflows. Integrate AI models to support data discovery, metadata enrichment, and intelligent recommendations. Optimize data architecture to support analytics, reporting, and governance use cases. Monitor and improve the performance of ML/AI components in production environments. Stay updated with emerging AI and data engineering technologies to drive continuous innovation. Technical Skills: Strong programming skills in Python, PySpark, SQL for data processing and automation. Experience with Databricks and Snowflake (preferred) for building and maintaining data pipelines. Experience with Machine Learning model development and Generative/Agentic AI frameworks (e.g. LLMs, Transformers, LangChain) especially in the Data Management space Experience working with REST APIs & JSON for service integration Experience working with cloud-based platforms such as Azure, AWS, or GCP Power BI dashboard development experience is a plus. Soft Skills: Strong problem-solving skills and attention to detail. Excellent communication and collaboration abilities, with experience working across technical and business teams
Posted 1 month ago
7.0 - 12.0 years
20 - 35 Lacs
Hyderabad, Bengaluru
Hybrid
Job Description:
We are seeking a highly skilled Azure Data Engineer with strong expertise in data architecture, PySpark/Python, Azure Databricks, and data streaming solutions. The ideal candidate will have hands-on experience designing and implementing large-scale data pipelines, along with solid knowledge of data governance and data modeling.

Key Responsibilities:
- Design, develop, and optimize PySpark/Python-based data streaming jobs on Azure Databricks (see the sketch after this list)
- Build scalable and efficient data pipelines for batch and real-time processing
- Implement data governance policies, ensuring data quality, security, and compliance
- Develop and maintain data models (dimensional, relational, NoSQL) to support analytics and reporting
- Collaborate with cross-functional teams (data scientists, analysts, and business stakeholders) to deliver data solutions
- Troubleshoot performance bottlenecks and optimize Spark jobs for efficiency
- Ensure best practices in CI/CD, automation, and monitoring of data workflows
- Mentor junior engineers and lead technical discussions (for senior/managerial roles)

Mandatory Skills & Experience:
- 5+ years of relevant experience as a Data Engineer/Analyst/Architect (8+ years for Manager/Lead positions)
- Expert-level proficiency in PySpark/Python and Azure Databricks (must have worked on real production projects)
- Strong experience in building and optimizing streaming data pipelines (Kafka, Event Hubs, Delta Lake, etc.)
- 4+ years of hands-on experience in data governance and data modeling (ER, star schema, data vault, etc.)
- In-depth knowledge of Azure Data Factory, Synapse, ADLS, and SQL/NoSQL databases
- Experience with Delta Lake, Databricks Workflows, and performance tuning
- Familiarity with data security, metadata management, and lineage tracking
- Excellent communication skills (must be able to articulate technical concepts clearly)
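A minimal sketch of the streaming pattern these responsibilities describe, assuming an Event Hubs namespace exposed through its Kafka-compatible endpoint; the broker, topic, and paths are placeholders, and the SASL auth options are omitted for brevity.

```python
# Sketch: Kafka/Event Hubs -> Delta bronze via Structured Streaming.
# All endpoints and paths are placeholders; SASL_SSL auth omitted for brevity.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.getOrCreate()

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "mynamespace.servicebus.windows.net:9093")
    .option("subscribe", "device-telemetry")
    .option("startingOffsets", "latest")
    .load()
)

events = raw.select(col("value").cast("string").alias("payload"), col("timestamp"))

(events.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/device-telemetry")
    .outputMode("append")
    .start("/mnt/delta/bronze/device_telemetry"))
```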
Posted 1 month ago
6.0 - 10.0 years
19 - 22 Lacs
Bengaluru
Hybrid
Hi all,
We are hiring for a Data Architect.
Experience: 6 - 9 years
Location: Bangalore
Notice Period: Immediate - 15 days
Skills:
- Data Architecture
- Azure Data Factory
- Azure Databricks
- Azure Cloud Architecture
If you are interested, drop your resume at mojesh.p@acesoftlabs.com or call 9701971793.
Posted 1 month ago
5.0 - 8.0 years
20 - 30 Lacs
Noida, Hyderabad, Bengaluru
Hybrid
Looking for Data Engineers, immediate joiners only, for Hyderabad, Bengaluru, and Noida locations. Must have experience in Python, Kafka Streams, PySpark, and Azure Databricks.

Role and responsibilities:
- Lead the design, development, and implementation of real-time data pipelines using Kafka, Python, and Azure Databricks (see the sketch after this list)
- Architect scalable data streaming and processing solutions to support healthcare data workflows
- Develop, optimize, and maintain ETL/ELT pipelines for structured and unstructured healthcare data
- Ensure data integrity, security, and compliance with healthcare regulations (HIPAA, HITRUST, etc.)
- Collaborate with data engineers, analysts, and business stakeholders to understand requirements and translate them into technical solutions
- Troubleshoot and optimize Kafka streaming applications, Python scripts, and Databricks workflows
- Mentor junior engineers, conduct code reviews, and ensure best practices in data engineering
- Stay updated with the latest cloud technologies, big data frameworks, and industry trends

Preferred candidate profile:
- 5+ years of experience in data engineering, with strong proficiency in Kafka and Python
- Expertise in Kafka Streams, Kafka Connect, and Schema Registry for real-time data processing
- Experience with Azure Databricks (or willingness to learn and adopt it quickly)
- Hands-on experience with cloud platforms (Azure preferred; AWS or GCP is a plus)
- Proficiency in SQL, NoSQL databases, and data modeling for big data processing
- Knowledge of containerization (Docker, Kubernetes) and CI/CD pipelines for data applications
- Experience working with healthcare data (EHR, claims, HL7, FHIR, etc.) is a plus
- Strong analytical skills, a problem-solving mindset, and the ability to lead complex data projects
- Excellent communication and stakeholder management skills

Interested? Call Rose (9873538143 / WhatsApp: 8595800635) or email rose2hiresquad@gmail.com
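For illustration only (not the employer's stack), the consumption side of such a pipeline might look like this with the kafka-python client; the broker, topic, and payload fields are hypothetical, and in a healthcare setting raw PHI should never be logged.

```python
# Hedged sketch using kafka-python; broker, topic, and fields are invented.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "claims-events",
    bootstrap_servers=["broker:9092"],
    group_id="claims-etl",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    record = message.value
    # validate / de-identify here before forwarding downstream; never log raw PHI
    print(record.get("claim_id"))
```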
Posted 1 month ago
2.0 - 7.0 years
6 - 16 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Hiring for Microsoft Azure Developer with an experience range of 2 years and above.
Mandatory Skills: Microsoft Azure, Azure Stack, Azure Synapse, Azure Data Factory
Education: BE/B.Tech/MCA/M.Tech/MSc/MS
Interview Mode: Face-to-face
Posted 1 month ago
5.0 - 7.0 years
15 - 22 Lacs
Chennai
Work from Office
Role & responsibilities : Job Description: Primarily looking for a Data Engineer (AWS) with expertise in processing data pipelines using Data bricks, PySpark SQL on Cloud distributions like AWS Must have AWS Data bricks ,Good-to-have PySpark, Snowflake, Talend Requirements- • Candidate must be experienced working in projects involving • Other ideal qualifications include experiences in • Primarily looking for a data engineer with expertise in processing data pipelines using Databricks Spark SQL on Hadoop distributions like AWS EMR Data bricks Cloudera etc. • Should be very proficient in doing large scale data operations using Databricks and overall very comfortable using Python • Familiarity with AWS compute storage and IAM concepts • Experience in working with S3 Data Lake as the storage tier • Any ETL background Talend AWS Glue etc. is a plus but not required • Cloud Warehouse experience Snowflake etc. is a huge plus • Carefully evaluates alternative risks and solutions before taking action. • Optimizes the use of all available resources • Develops solutions to meet business needs that reflect a clear understanding of the objectives practices and procedures of the corporation department and business unit • Skills • Hands on experience on Databricks Spark SQL AWS Cloud platform especially S3 EMR Databricks Cloudera etc. • Experience on Shell scripting • Exceptionally strong analytical and problem-solving skills • Relevant experience with ETL methods and with retrieving data from dimensional data models and data warehouses • Strong experience with relational databases and data access methods especially SQL • Excellent collaboration and cross functional leadership skills • Excellent communication skills both written and verbal • Ability to manage multiple initiatives and priorities in a fast-paced collaborative environment • Ability to leverage data assets to respond to complex questions that require timely answers • has working knowledge on migrating relational and dimensional databases on AWS Cloud platform Skills Mandatory Skills: Apache Spark, Databricks, Java, Python, Scala, Spark SQL. Note : Need only Immediate joiners/ Serving notice period. Interested candidates can apply. Regards, HR Manager
Posted 1 month ago
7.0 - 12.0 years
15 - 30 Lacs
Bengaluru
Work from Office
Position: Senior Azure Data Engineer (immediate joiners only)
Location: Bangalore
Mode of Work: Work from Office
Experience: 7 years of relevant experience
Job Type: Full time (on roll)

Job Description - Roles and Responsibilities:
The Data Engineer will work on data engineering projects for various business units, focusing on delivering complex data management solutions by leveraging industry best practices. They work with the project team to build the most efficient data pipelines and data management solutions that make data easily available for consuming applications and analytical solutions. A data engineer is expected to possess strong technical skills.

Key Characteristics:
- Technology champion who constantly pursues skill enhancement and has an inherent curiosity to understand work from multiple dimensions
- Interest and passion in big data technologies, and appreciation of the value an effective data management solution can bring
- Has worked on real data challenges and handled high volume, velocity, and variety of data
- Excellent analytical and problem-solving skills; willingness to take ownership and resolve technical challenges
- Contributes to community-building initiatives like CoE and CoP

Mandatory skills:
- Azure - Master
- ELT - Skill
- Data Modeling - Skill
- Data Integration & Ingestion - Skill
- Data Manipulation and Processing - Skill
- GitHub, GitHub Actions, Azure DevOps - Skill
- Data Factory, Databricks, SQL DB, Synapse, Stream Analytics, Glue, Airflow, Kinesis, Redshift, SonarQube, PyTest - Skill

Optional skills:
- Experience in project management, running a scrum team
- Experience working with BPC, Planning
- Exposure to working with an external technical ecosystem
- MkDocs documentation

Interested candidates, kindly share your CV and the details below to usha.sundar@adecco.com:
1) Present CTC (fixed + VP)
2) Expected CTC
3) No. of years of experience
4) Notice period
5) Offer in hand
6) Reason for change
7) Present location
Posted 1 month ago
7.0 - 12.0 years
20 - 35 Lacs
Bengaluru, Malaysia
Work from Office
Core Competences, Required and Desired Attributes:
- Bachelor's degree in Computer Science, Information Technology, or a related field
- Proficiency in Azure Data Factory, Azure Databricks and Unity Catalog, Azure SQL Database, and other Azure data services
- Strong programming skills in SQL, Python, and PySpark
- Experience in the asset management domain is preferable
- Strong proficiency in data analysis and data modelling, with the ability to extract insights from complex data sets
- Hands-on experience in Power BI, including creating custom visuals, DAX expressions, and data modelling
- Familiarity with Azure Analysis Services, data modelling techniques, and optimization
- Experience with data quality and data governance frameworks, with the ability to debug, fine-tune, and optimise large-scale data processing jobs
- Strong analytical and problem-solving skills, with a keen eye for detail
- Excellent communication and interpersonal skills, with the ability to work collaboratively in a team environment
- Proactive and self-motivated, with the ability to manage multiple tasks and deliver high-quality results within deadlines
Posted 1 month ago
8.0 - 13.0 years
9 - 13 Lacs
Bengaluru
Work from Office
Req ID: 327855
We are currently seeking a Python Django Microservices Lead to join our team in Bangalore, Karnataka (IN-KA), India (IN).

Job Duties/Responsibilities:
- Lead the development of backend systems using Django
- Design and implement scalable and secure APIs (see the sketch below)
- Integrate Azure Cloud services for application deployment and management
- Utilize Azure Databricks for big data processing and analytics
- Implement data processing pipelines using PySpark
- Collaborate with front-end developers, product managers, and other stakeholders to deliver comprehensive solutions
- Conduct code reviews and ensure adherence to best practices
- Mentor and guide junior developers
- Optimize database performance and manage data storage solutions
- Ensure high performance and security standards for applications
- Participate in architecture design and technical decision-making

Minimum Skills Required/Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field
- 8+ years of experience in backend development
- 8+ years of experience with Django
- Proven experience with Azure Cloud services
- Experience with Azure Databricks and PySpark
- Strong understanding of RESTful APIs and web services
- Excellent communication and problem-solving skills
- Familiarity with Agile methodologies
- Experience with database management (SQL and NoSQL)

Skills: Django, Python, Azure Cloud, Azure Databricks, Delta Lake and Delta tables, PySpark, SQL/NoSQL databases, RESTful APIs, Git, and Agile methodologies
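As a hedged sketch of the "scalable and secure APIs" duty (not NTT DATA's code), a Django REST Framework read-only endpoint might look like this; the PipelineRun model and its fields are invented for illustration.

```python
# Illustrative Django REST Framework viewset; model and fields are hypothetical.
from rest_framework import serializers, viewsets
from rest_framework.permissions import IsAuthenticated

from myapp.models import PipelineRun  # hypothetical model


class PipelineRunSerializer(serializers.ModelSerializer):
    class Meta:
        model = PipelineRun
        fields = ["id", "name", "status", "started_at"]


class PipelineRunViewSet(viewsets.ReadOnlyModelViewSet):
    """Read-only, authenticated API over pipeline run metadata."""
    queryset = PipelineRun.objects.order_by("-started_at")
    serializer_class = PipelineRunSerializer
    permission_classes = [IsAuthenticated]
```

Registered with a DRF router (e.g., router.register("runs", PipelineRunViewSet, basename="runs")), this yields list and detail endpoints with authentication enforced.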
Posted 1 month ago
8.0 - 13.0 years
3 - 7 Lacs
Bengaluru
Work from Office
Req ID: 327859
We are currently seeking a Data Engineer Advisor to join our team in Bangalore, Karnataka (IN-KA), India (IN).

Job Duties/Responsibilities:
- Lead the development of backend systems using Django
- Design and implement scalable and secure APIs
- Integrate Azure Cloud services for application deployment and management
- Utilize Azure Databricks for big data processing and analytics
- Implement data processing pipelines using PySpark
- Collaborate with front-end developers, product managers, and other stakeholders to deliver comprehensive solutions
- Conduct code reviews and ensure adherence to best practices
- Mentor and guide junior developers
- Optimize database performance and manage data storage solutions
- Ensure high performance and security standards for applications
- Participate in architecture design and technical decision-making

Minimum Skills Required/Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field
- 8+ years of experience in backend development
- 8+ years of experience with Django
- Proven experience with Azure Cloud services
- Experience with Azure Databricks and PySpark
- Strong understanding of RESTful APIs and web services
- Excellent communication and problem-solving skills
- Familiarity with Agile methodologies
- Experience with database management (SQL and NoSQL)

Skills: Django, Python, Azure Cloud, Azure Databricks, Delta Lake and Delta tables, PySpark, SQL/NoSQL databases, RESTful APIs, Git, and Agile methodologies
Posted 1 month ago
8.0 - 13.0 years
7 - 11 Lacs
Bengaluru
Work from Office
Req ID: 327834
We are currently seeking a Senior Data Micro Development Lead to join our team in Bangalore, Karnataka (IN-KA), India (IN).

Job Duties/Responsibilities:
- Build reusable and configurable micro front-end web applications using React and deploy the application images to container-based services on Azure Cloud
- Collaborate with UX/UI designers, product managers, and backend developers to deliver high-quality solutions
- Collaborate with onshore/offshore developers and ensure code quality, integration, and test coverage
- Conduct code reviews and ensure adherence to best practices
- Participate in architecture design and technical decision-making
- Prepare low-level technical design documents and web application manuals
- Ensure the application meets high performance and security standards
- Implement state management solutions using Redux or similar libraries
- Optimize components for maximum performance across a vast array of web-capable devices and browsers

Minimum Skills Required/Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field
- 8+ years of experience in web development
- 8+ years of experience with React.js
- Proven leadership and team management skills
- Strong understanding of JavaScript, HTML, and CSS
- Experience with modern front-end build pipelines and tools (e.g., Webpack, Babel)
- Excellent communication and problem-solving skills
- Familiarity with micro front-end architecture and best practices
- Experience with RESTful APIs and Agile methodologies

Skills: React.js, JavaScript, HTML/CSS, Redux, Webpack/Babel, Git, RESTful APIs, AG Grid, Azure Cloud, container services, Azure Databricks, and Agile methodologies
Posted 1 month ago
8.0 - 13.0 years
6 - 10 Lacs
Bengaluru
Work from Office
Req ID: 327863
We are currently seeking a Data Engineer Senior Consultant to join our team in Bangalore, Karnataka (IN-KA), India (IN).

Job Duties/Responsibilities:
- Lead the development of backend systems using Django
- Design and implement scalable and secure APIs
- Integrate Azure Cloud services for application deployment and management
- Utilize Azure Databricks for big data processing and analytics
- Implement data processing pipelines using PySpark
- Collaborate with front-end developers, product managers, and other stakeholders to deliver comprehensive solutions
- Conduct code reviews and ensure adherence to best practices
- Mentor and guide junior developers
- Optimize database performance and manage data storage solutions
- Ensure high performance and security standards for applications
- Participate in architecture design and technical decision-making

Minimum Skills Required/Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field
- 8+ years of experience in backend development
- 8+ years of experience with Django
- Proven experience with Azure Cloud services
- Experience with Azure Databricks and PySpark
- Strong understanding of RESTful APIs and web services
- Excellent communication and problem-solving skills
- Familiarity with Agile methodologies
- Experience with database management (SQL and NoSQL)

Skills: Django, Python, Azure Cloud, Azure Databricks, Delta Lake and Delta tables, PySpark, SQL/NoSQL databases, RESTful APIs, Git, and Agile methodologies
Posted 1 month ago
5.0 - 8.0 years
5 - 12 Lacs
Bengaluru
Hybrid
Job Description: Data Modelling and Data Visualization Specialist (Power BI)

Company Description: Krish is committed to enabling customers to achieve their technological goals by delivering solutions that combine the right technology, people, and costs. Our approach emphasizes building long-term relationships while ensuring customer success through tailored solutions, leveraging the expertise and integrity of our consultants and robust delivery processes.

Position Summary: We are seeking a highly skilled and experienced Senior Power BI Developer to join our Data & Analytics team. The ideal candidate will be responsible for designing, developing, and deploying interactive, visually appealing, and user-friendly business intelligence reports and dashboards using Power BI. This role involves close collaboration with business stakeholders, data engineers, and other technical teams to transform complex data into actionable insights that drive business decision-making.

Key Responsibilities:
1. Power BI Development:
- Design, develop, and deploy Power BI dashboards and reports that meet business requirements
- Implement row-level security (RLS), bookmarks, drill-through, and advanced visualization features in Power BI
- Optimize Power BI reports for performance, responsiveness, and usability
- Provide training, documentation, and support to end users and team members on Power BI functionalities and best practices
2. Data Modelling and ETL:
- Develop data models using Power BI Desktop, including relationships, DAX measures, and calculated columns
- Work with data engineers to design and integrate data pipelines that feed into Power BI from various sources (SQL, Azure, Excel, etc.)
- Ensure data accuracy, integrity, and quality in reports
- Optimize data models for performance, scalability, and maintainability, considering best practices in schema design (e.g., star schema, relationships, cardinality)
3. Requirements Gathering & Stakeholder Management:
- Collaborate with business users to gather requirements and translate them into effective Power BI solutions
- Provide training and support to business users on Power BI usage and best practices
- Communicate project status, risks, and issues to management and stakeholders
4. Advanced Analytics and Visualization:
- Implement advanced DAX calculations and measures to support complex business logic
- Develop custom visuals and integrate R/Python scripts in Power BI for enhanced analytics if needed (see the sketch after this section)
- Create interactive dashboards and reports that drive actionable business insights
5. Governance and Best Practices:
- Establish and enforce Power BI development standards, data governance, and best practices
- Document dashboards, data models, and processes for maintainability and knowledge sharing
- Stay updated with the latest Power BI features, releases, and trends to continuously improve solutions

Required Qualifications & Skills:
Education:
- Bachelor's degree in Computer Science, Information Systems, Data Analytics, or a related field; master's degree preferred
Experience:
- Minimum 5 years of experience in BI development, with at least 3 years of hands-on experience in Power BI
- Proven track record of delivering complex BI solutions in a fast-paced environment
Technical Skills:
- Strong proficiency in Power BI Desktop and Power BI Service
- Deep understanding of DAX, Power Query (M), and data modelling principles
- Strong understanding of data modelling concepts, relationships, and best practices (e.g., star schema, normalization, cardinality)
- Solid experience in SQL and relational database systems (e.g., MS SQL Server including SSIS, SSRS, etc.; Azure SQL; Oracle)
- Knowledge of integrating Power BI with different data sources, including Azure Data Lake, data warehouses, Excel, and APIs
- Familiarity with Git or other version control systems is a plus
- Knowledge of Azure (Data Factory, Synapse, Databricks) is a plus
Soft Skills:
- Excellent communication, presentation, and interpersonal skills
- Strong analytical and problem-solving abilities
- Ability to work independently and collaboratively in cross-functional teams
- Attention to detail and a passion for delivering high-quality BI solutions
Nice to Have:
- Experience with Power Platform (Power Apps, Power Automate)
- Knowledge of data warehousing concepts and star/snowflake schema modelling
- Certification in Power BI or Microsoft Azure data certifications
- Familiarity with programming languages such as Python or R for data manipulation
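A minimal sketch of the "integrate Python scripts in Power BI" item: inside Power Query's Run Python script step, Power BI passes the current table in as a pandas DataFrame named `dataset`. The columns below are invented, and the stub assignment exists only so the sketch runs standalone.

```python
# Sketch of a Power Query "Run Python script" step; columns are invented.
import pandas as pd

# Power BI supplies `dataset`; stubbed here so the sketch runs on its own.
dataset = pd.DataFrame({"region": ["N", "S", "N"], "sales": [120, 80, 100]})

# Example enrichment: each row's share of its region's total sales
dataset["share"] = dataset["sales"] / dataset.groupby("region")["sales"].transform("sum")

result = dataset  # any DataFrame left in scope can be loaded back into Power BI
```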
Posted 1 month ago
5.0 - 8.0 years
5 - 14 Lacs
Hyderabad, Bengaluru
Work from Office
Company Name: Tech Mahindra
Experience: 5-8 years
Location: Bangalore/Hyderabad (hybrid model)
Interview Mode: Virtual
Interview Rounds: 2
Notice Period: Immediate to 30 days

Responsibilities:
- Design, develop, and maintain large-scale data pipelines using Azure Data Factory (ADF) to extract, transform, and load data from various sources into Azure Databricks
- Collaborate with cross-functional teams to gather requirements and design solutions for complex business problems
- Develop SQL queries and stored procedures to optimize database performance and troubleshoot issues in Azure Databricks
- Ensure high availability, scalability, and security of the deployed solutions by monitoring logs, metrics, and alerts

Requirements:
- 5-8 years of experience in designing and developing large-scale data engineering projects on the Microsoft Azure platform
- Strong expertise in Azure Data Factory (ADF), Azure Databricks, and SQL Server Management Studio (T-SQL)
- Experience working with big data technologies such as Hadoop Distributed File System (HDFS) and Spark Core/Scala
Posted 1 month ago
6.0 - 10.0 years
18 - 30 Lacs
Pune
Work from Office
About the Position: We are looking for a Senior Data Engineer to play a key role in building, optimizing, and maintaining our Azure-based data platform, which supports IoT data processing, analytics, and AI/ML applications. As part of our Data Platform Team, you will design and develop scalable data pipelines, implement data governance frameworks, and ensure high-performance data processing to drive digital transformation across our business.

Responsibilities:
- Data pipeline development: design, build, and maintain high-performance, scalable ETL/ELT pipelines using Azure Data Factory, Databricks, and ADLS
- Data platform enhancement: contribute to the development and optimization of our Azure-based data platform, ensuring efficiency, reliability, and security
- IoT and high-volume data processing: work with large-scale IoT and operational datasets, optimizing data ingestion, transformation, and storage (see the medallion-style sketch after this list)
- Data governance and quality: implement data governance best practices, ensuring data integrity, consistency, and compliance
- Performance optimization: improve query performance and storage efficiency for analytics and reporting use cases
- Collaboration: work closely with data scientists, architects, and business teams to ensure data availability and usability
- Innovation and automation: identify opportunities for automation and process improvements, leveraging modern tools and technologies

Requirements:
- 6+ years of experience in data engineering with a focus on Azure cloud technologies
- Strong expertise in Azure Data Factory, Databricks, ADLS, and Power BI
- Proficiency in SQL, Python, and Spark for data processing and transformation
- Experience with IoT data ingestion and processing, handling high-volume, real-time data streams
- Strong understanding of data modeling, lakehouse architectures, and medallion frameworks
- Experience in building and optimizing scalable ETL/ELT processes
- Knowledge of data governance, security, and compliance frameworks
- Experience with monitoring, logging, and performance tuning of data workflows
- Strong problem-solving and analytical skills with a platform-first mindset
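A hedged sketch of one medallion-style hop (bronze to silver) of the kind this platform work involves; the table names, dedup key, and quality rule are assumptions.

```python
# Sketch: bronze -> silver cleanup in a medallion lakehouse; names are assumed.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, to_timestamp

spark = SparkSession.builder.getOrCreate()

bronze = spark.read.table("bronze.iot_readings")

silver = (
    bronze.dropDuplicates(["device_id", "event_ts"])      # keep re-runs idempotent
          .withColumn("event_ts", to_timestamp("event_ts"))
          .filter(col("reading").isNotNull())              # basic quality gate
)

silver.write.format("delta").mode("overwrite").saveAsTable("silver.iot_readings")
```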
Posted 1 month ago
5.0 - 8.0 years
22 - 30 Lacs
Noida, Hyderabad, Bengaluru
Hybrid
Role: Data Engineer
Experience: 5 to 8 years
Location: Bangalore, Noida, and Hyderabad (hybrid; 2 days a week in office required)
Notice Period: Immediate to 15 days (immediate joiners preferred)
Note: Candidates must have experience in Python, Kafka Streams, PySpark, and Azure Databricks. Candidates with experience only in PySpark and not in Python will not be considered.

Job Title: SSE - Kafka, Python, and Azure Databricks (healthcare data project)

Role Overview: We are looking for a highly skilled engineer with expertise in Kafka, Python, and Azure Databricks (preferred) to drive our healthcare data engineering projects. The ideal candidate will have deep experience in real-time data streaming, cloud-based data platforms, and large-scale data processing. This role requires strong technical leadership, problem-solving abilities, and the ability to collaborate with cross-functional teams.

Key Responsibilities:
- Lead the design, development, and implementation of real-time data pipelines using Kafka, Python, and Azure Databricks
- Architect scalable data streaming and processing solutions to support healthcare data workflows
- Develop, optimize, and maintain ETL/ELT pipelines for structured and unstructured healthcare data
- Ensure data integrity, security, and compliance with healthcare regulations (HIPAA, HITRUST, etc.)
- Collaborate with data engineers, analysts, and business stakeholders to understand requirements and translate them into technical solutions
- Troubleshoot and optimize Kafka streaming applications, Python scripts, and Databricks workflows
- Mentor junior engineers, conduct code reviews, and ensure best practices in data engineering
- Stay updated with the latest cloud technologies, big data frameworks, and industry trends

Required Skills & Qualifications:
- 4+ years of experience in data engineering, with strong proficiency in Kafka and Python
- Expertise in Kafka Streams, Kafka Connect, and Schema Registry for real-time data processing
- Experience with Azure Databricks (or willingness to learn and adopt it quickly)
- Hands-on experience with cloud platforms (Azure preferred; AWS or GCP is a plus)
- Proficiency in SQL, NoSQL databases, and data modeling for big data processing
- Knowledge of containerization (Docker, Kubernetes) and CI/CD pipelines for data applications
- Experience working with healthcare data (EHR, claims, HL7, FHIR, etc.) is a plus
- Strong analytical skills, a problem-solving mindset, and the ability to lead complex data projects
- Excellent communication and stakeholder management skills

Email: Sam@hiresquad.in
Posted 1 month ago
4.0 - 7.0 years
10 - 20 Lacs
Pune
Work from Office
Experience in designing, developing, implementing, and optimizing data solutions on Microsoft Azure. Proven expertise in leveraging Azure services for ETL processes, data warehousing and analytics, ensuring optimal performance and scalability.
Posted 1 month ago
7.0 - 12.0 years
20 - 35 Lacs
Noida, Chennai
Hybrid
Responsibilities:
- Deployment, configuration, and maintenance of Databricks clusters and workspaces
- Security and access control
- Automate administrative tasks using tools like Python, PowerShell, and Terraform (see the sketch below)
- Integrations with Azure Data Lake and Key Vault; implement CI/CD pipelines

Required Candidate Profile:
- Azure, AWS, or GCP experience; Azure preferred
- Strong skills in Python, PySpark, PowerShell, and SQL
- Experience with Terraform
- ETL processes, data pipelines, and big data technologies
- Security and compliance
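As a hedged sketch of the Python-based admin automation described above, the public Databricks REST API can enumerate workspace clusters; the workspace host below is a placeholder and the token is read from the environment.

```python
# Sketch: list Databricks clusters via the REST API; host is a placeholder.
import os
import requests

host = "https://adb-1234567890123456.7.azuredatabricks.net"  # placeholder workspace
token = os.environ["DATABRICKS_TOKEN"]  # personal access token

resp = requests.get(
    f"{host}/api/2.0/clusters/list",
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
resp.raise_for_status()

for cluster in resp.json().get("clusters", []):
    print(cluster["cluster_id"], cluster["state"])
```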
Posted 1 month ago
4.0 - 8.0 years
8 - 13 Lacs
Ahmedabad
Remote
Azure cloud technologies: Azure Data Factory, Azure Databricks (advanced knowledge), PySpark, CI/CD pipelines (Jenkins, GitLab CI/CD, or Azure DevOps), data ingestion, SQL; designing, developing, and optimizing scalable data solutions. Required candidate profile: Azure Databricks and Azure Data Factory expertise, PySpark proficiency, big data CI/CD, troubleshooting, Jenkins, GitLab CI/CD, and data pipeline development and deployment.
Posted 1 month ago